A New Tetracyclic Bromopyrrole-Imidazole Derivative through Direct Chemical Diversification of Substances Present in Natural Product Extract from Marine Sponge Petrosia (Strongylophora) sp.
Chemical diversification of substances present in natural product extracts can lead to a number of natural product-like compounds with a better chance of desirable bioactivities. The aim of this work was to discover unprecedented chemical conversions and produce new compounds through one-step reactions of substances present in the extracts of marine sponges. In this report, a new unnatural tetracyclic bromopyrrole-imidazole derivative, rac-6-OEt-cylindradine A (1), was created from a chemically diversified extract of the sponge Petrosia (Strongylophora) sp. We also confirmed that 1 originated from naturally occurring (-)-cylindradine A (2) via a new reaction pattern. Moreover, (-)-dibromophakellin (3) and 4,5-dibromopyrrole-2-carboxylic acid (4), as well as 2, are reported herein from this genus for the first time. Studies on the possible reaction mechanism and on bioactivities were also conducted. The results indicate that the direct chemical diversification of substances present in natural product extracts can be a rapid and useful strategy for the discovery of new compounds.
Introduction
Natural products (NPs) are considered significant sources of novel scaffolds and bioactive molecules. According to recent studies, approximately half of all approved drugs for the treatment of human diseases are based on NPs [1]. Additionally, over 100 NPs and NP-derived compounds have been used in clinical trials or are in registration for various diseases, particularly as anticancer and anti-infective agents [2]. NPs exhibit considerable structural complexity and diversity that differ from those of synthetic small-molecule libraries in several aspects, such as higher molecular mass, larger numbers of sp3 carbon and oxygen atoms, higher hydrophilicity, and greater molecular rigidity [3]. Although the successful record and advantages of NPs in drug discovery are widely recognized, the pharmaceutical industry has largely scaled back its NP research. To further drug discovery research, a more diverse group of compounds in terms of chemical space is needed. Several research groups have developed strategies to explore novel compounds with desirable biological activities [4]. One of the most promising approaches is the direct chemical diversification of substances present in an NP extract, with mostly uncharacterized composition, by a simple chemical reaction, reaching a number of NP-like compounds in one step. Studies using this concept have been reported previously. Chemical reactions such as ammonolysis [5,6], oxidation [7,8], sulfonylation [9], bromination [10], and fluorination [11] can transform chemical groups that are highly common in NPs into chemical groups rarely found in nature. This approach can be applied to various types of compounds, modifying chemical groups in NPs and even remodeling molecular skeletons [12][13][14]. Although previous research has provided successful results, there are currently not enough methodologies for the chemical diversification of substances present in NP extracts.
The aim of this research was to discover unprecedented chemical conversions and produce new compounds through one-step reactions of substances present in the extracts of marine organisms. Marine sources have been receiving increasing attention, as they can produce chemically novel bioactive metabolites [15,16]. As part of our ongoing research on the chemical diversification of substances present in several marine NP extracts from sponges and sponge-derived fungi using various heat-treated solvents, including EtOH, 1,4-dioxane, acetonitrile, and p-xylene, we found one successful reaction of a substance present in the extract from the marine sponge Petrosia (Strongylophora) sp. (Petrosiidae, Haplosclerida). Marine sponges of the genus Petrosia contain a variety of bioactive metabolites. Of the isolated compounds reported between 1978 and 2020, polyacetylenes (53%), meroterpenoids (19%), and sterols (16%) were the most frequently found, while alkaloids (6%), fatty acids (3%), peptides (2%), and saponins (1%) were less common [17]. In this paper, we report the successful novel chemical diversification of the substances present in the extract of this marine sponge using heat-treated EtOH. Introduction of an ethoxy group into a natural molecule led to the discovery of a new, unexpected alkaloid derivative via a new reaction pattern, which differs from previous ones. The isolation, structure elucidation, studies of possible reaction mechanisms, and bioactivity evaluation of the new unnatural alkaloid derivative, together with naturally occurring compounds (Figure 1), are described.
Chemical Diversification of Substances Present in NP Extract and Reversed-Phase Liquid Chromatography/Mass Spectrometry (RP-LCMS) Detection
The marine sponge Petrosia (Strongylophora) sp. (dry weight 50 g) was extracted with MeOH (400 mL × 7 times) at room temperature to obtain a crude MeOH extract (7 g). The crude MeOH extract was subjected to prefractionation by reversed-phase medium pressure liquid chromatography (RP-MPLC), eluted with 30%, 50%, 70%, 80%, and 100% MeOH in H2O, to provide the corresponding fractions (2.5, 0.2, 0.2, 0.1, and 0.3 g, respectively). Prefractionation of the crude MeOH extract reduced the complexity of the diversified extracts and facilitated the subsequent chemical reactions owing to better solubility. A portion of each fraction was dissolved in EtOH, 1,4-dioxane, acetonitrile, or p-xylene (conc. 1 mg/3 mL) and heated at 80, 105, 85, and 140 °C, respectively, under an N2 atmosphere. After 72 h, the reaction mixtures were concentrated under reduced pressure to obtain diversity-enhanced extracts. Comparison of RP-LCMS chromatograms before and after the chemical diversification showed interesting changes in the reaction mixture of the 50% MeOH fraction with heat-treated EtOH (Figure 2). The reaction was repeated to confirm its reproducibility. A peak at retention time (RT) 5.7 min corresponding to rac-6-OEt-cylindradine A (1, m/z 432 [M + H]+, 434, 436) was detected, whereas the peak at RT 4.8 min of (-)-cylindradine A (2, m/z 388 [M + H]+, 390, 392) decreased significantly after the reaction. This finding raised the possibility that 2 was converted into 1. Moreover, the molecular mass of 1 was 44 mass units larger than that of 2, corresponding to the introduction of an ethoxy group into the natural molecule through the exchange of one hydrogen atom.
Figure 2. Base peak chromatogram (positive ion mode, reversed-phase liquid chromatography/mass spectrometry (RP-LCMS)) of natural product (NP) extracts before (black line) and after chemical diversification (red line); retention time (0-15 min) on the x-axis, base peak intensity (0 to 9.0 × 10^7) on the y-axis. Peaks of rac-6-OEt-cylindradine A (1) at retention time (RT) 5.7 min and (-)-cylindradine A (2) at RT 4.8 min are indicated by red and black arrows, respectively.
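The interpretation of the 44 Da shift can be checked with simple arithmetic. The following minimal sketch (with atomic masses rounded for illustration) verifies that swapping one hydrogen for an ethoxy group accounts for the observed m/z difference between 1 and 2:

```python
# Back-of-the-envelope check of the observed mass shift; atomic masses
# are rounded to two decimals for illustration only.
H = 1.01    # hydrogen
C = 12.00   # carbon
O = 15.99   # oxygen

ethoxy = O + 2 * C + 5 * H  # -OC2H5 group
shift = ethoxy - H          # exchange of one H for an ethoxy group

print(f"expected shift: +{shift:.0f} Da")  # ~ +44, matching m/z 432 vs. 388
```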
To obtain a sufficient quantity of 1 for structure determination and to confirm that 1 was converted from 2 by the chemical reaction, a larger amount of the marine sponge (dry weight 203 g) was extracted with MeOH to obtain a crude MeOH extract (28 g), which was then prefractionated by the same procedure. Purification of the 50% MeOH fraction (0.8 g) by RP-MPLC, eluted with 20% MeOH in 0.1% trifluoroacetic acid (TFA)·H2O, provided 2 (221 mg, 28% yield); half of this amount of 2 was submitted to the chemical treatment described above. The reaction mixture (106.0 mg) was purified by RP-MPLC, eluted with 25% MeOH in 0.1% TFA·H2O, to obtain 1 (19.1 mg, 18% yield). This result confirmed that the new unnatural alkaloid derivative 1 originated from 2 (Scheme 1).
Scheme 1. rac-6-OEt-cylindradine A (1) was detected in a diversity-enhanced extract after direct chemical diversification of the substances present in the natural product extract from the marine sponge Petrosia (Strongylophora) sp. It was then confirmed that 1 originated from (-)-cylindradine A (2). Abbreviations: RP-MPLC, reversed-phase medium pressure liquid chromatography; TFA, trifluoroacetic acid.
The UV data of 1 were similar to those of 2 [18], indicating that they possess the same pyrrole ring conjugated with a carbonyl group. The infrared (IR) absorption bands (MeOH) appeared at 1663 (carbonyl group and C=N bond) and 3200, 3410 (amino group) cm−1. The 1H and 13C nuclear magnetic resonance (NMR) spectroscopic data and HMBC correlations of 1 in DMSO-d6 are shown in Table 1. The 13C NMR spectrum disclosed thirteen carbons, consisting of an amide carbonyl (δC 157.2), five sp2 quaternary carbons (δC 156.0, 131.5, 114.0, 106.6, 95.5), two sp3 quaternary carbons (δC 86.7, 86.0), four methylenes (δC 61.3, 44.3, 34.3, 19.2), and a methyl carbon (δC 14.8). Comparison of the 1H and 13C NMR data of 1 with those of 2 suggested that they possess the same skeleton, comprising a 3-carbamoylpyrrole ring A, a pyrrolidine ring C, and an imidazoline ring D [18]. However, the 13C NMR of 1 differed from that of 2 in the signal of C-6, which shifted downfield owing to its connection to oxygen, as well as in the presence of a methylene carbon and a methyl carbon at δC 61.3 and 14.8, respectively. In addition, the methine proton (δH 5.28, H-6) of 2 was not observed in 1.
Experiments to Investigate the Possible Reaction Mechanisms
We conducted experiments to gain insight into the possible reaction mechanisms. As expected, no reaction was observed by RP-LCMS upon heating 3 with EtOH (Scheme 2). This indicates that a nitrogen of the 3-carbamoylpyrrole ring A might be responsible for the introduction of the angular ethoxy group at C-6; EtOH acts as a nucleophile that displaces a hydrogen atom in 2 to form the product 1. We also confirmed that compound 4 did not react with EtOH. To clarify whether the reaction proceeds through a radical mechanism, the radical scavenger galvinoxyl was introduced into the reaction of 2 in a stoichiometric amount; however, galvinoxyl did not affect the product yield of the reaction (the base peak intensity of 1 was the same with and without galvinoxyl, Figure S12). Our reaction therefore seems unlikely to involve radicals.
In addition, the reactions of 2 with EtOH were carried out at 25, 50, and 80 °C. No production of 1 was observed at 25 °C, whereas the reaction at 50 °C yielded 1 in a lower amount than the reaction at 80 °C (Figure S13). This indicates that the formation of 1 is markedly sensitive to temperature.
Chemical conversions of 2 with various aliphatic alcohols were conducted (Scheme 3). Treatment of 2 with MeOH at its boiling point (65 °C) did not lead to the formation of 6-OMe-cylindradine A, whereas treating 2 with isopropanol and n-propanol at higher temperatures (boiling points 83 °C and 97 °C, respectively) led to the introduction of a propoxy group into the structure, which could be detected at m/z 446 [M + H]+ by RP-LCMS. We attribute this difference to the reaction temperature.
Scheme 3. Chemical conversion of (-)-cylindradine A (2) with various aliphatic alcohols at their corresponding boiling temperatures. Mass detection of the reaction products was performed by RP-LCMS.
Biological Activities
Pyrrole-imidazole alkaloids are a group of intriguing NPs with a range of significant pharmacological and ecological bioactivities, such as anticancer, antimicrobial, and feeding-deterrent activities [23]. Compound 2 was reported as a moderate inhibitor of the P388 murine leukemia cell line (50% inhibitory concentration (IC50) 20 µM) [18]. In our study, all four compounds were evaluated for anticancer activity against four cancer cell lines and for antimycobacterial activity (Table 2). The changes in chemical structure were expected to translate into changes in biological activities; however, 1 and 2 were not active against any of the tested cancer cells or mycobacteria (IC50 > 100 µM for anticancer activity; minimum inhibitory concentration (MIC) > 100 µM for antimycobacterial activity). Owing to its rare structure type, scarcity in natural sources, stability, and challenging total synthesis, studies evaluating the bioactive potential of 2 have been limited [24]. Its biological activities remain to be explored further.
Discussion: Further Consideration of Our Finding
For the purposes of drug discovery, direct chemical diversification of substances present in NP extracts is an effective methodology for producing new compounds. The field is in its early stages and progressing rapidly. The dramatic improvement in the speed of discovering new compounds might allow NP research to become more competitive with synthetic compound screening. This approach is also essential for retaining the usefulness of NPs and their derivatives in pharmaceutical research; however, it remains an extremely challenging task owing to the high compositional complexity of extracts, which may contain several hundred compounds. Unexpected side reactions and the decomposition of substances present in the NP extract complicate the reaction system, making it difficult to identify products from chemically diversified extracts. In addition, we suggest that marine-derived organisms can serve as good candidates for the direct chemical diversification of substances present in NP extracts [25][26][27]. To further expand the chemical diversity of marine NP extracts, other types of reactions should be applied.
General Experimental Procedures
NMR spectra were measured on a Varian Inova 600-II NMR spectrometer (Varian, Inc., CA, USA). Chemical shifts were referenced to the residual solvent peaks: DMSO-d6 (δH/C 2.50/39.5) and CD3OD (δH/C 3.31/49.0). HRMALDIMS spectra were recorded on a SpiralTOF™ JEOL JMS-S3000 mass spectrometer (JEOL Ltd., Tokyo, Japan). Melting points were determined on a Buchi B-545 apparatus (Flawil, Switzerland) or an MP-J3 micro melting point apparatus (Anatec Yanaco Corp., Kyoto, Japan). IR spectra were recorded on an IRAffinity-1 or IRAffinity-1S (Shimadzu Corporation, Kyoto, Japan). Optical rotations were measured on a Jasco P-1020 polarimeter (JASCO Corporation, Tokyo, Japan) in MeOH. RP-LCMS experiments were performed on an Acquity UPLC system coupled to a Quattro Premier XE mass spectrometer fitted with an electrospray ionization (ESI) interface. Separations were performed on an Acquity UPLC BEH C18 column (2.1 × 150 mm, 130 Å, 1.7 µm; Waters Corporation, MA, USA). The mobile phase was composed of MeCN and 0.1% HCOOH·H2O. The flow rate was 0.25 mL/min at 40 °C, and injections were carried out through a 5 µL loop. UV data were collected on a UV photodiode array detector. The RP-LCMS data were processed using MZmine 2.53 (MZmine Development Team). RP-MPLC was conducted using a dual-channel automated Smart Flash EPCLC-W-Prep 2XY system (Yamazen Corporation, Osaka, Japan). The medium-pressure ODS (C18) chromatographic column (Yamazen Corporation, universal column size L: 3.0 × 16.5 cm; M: 2.3 × 12.3 cm; S: 1.8 × 11.4 cm; pore size 120 Å; particle size 30 or 50 µm) was conditioned by first eluting with 100% MeOH or MeCN and then equilibrating with a suitable initial mobile phase. After dissolving the extract in the initial mobile phase, the solution was loaded onto the ODS (C18) injection column (Yamazen Corporation, size M: 2.0 × 7.5 cm; S: 1.5 × 4.4 cm; SS: 1.3 × 3.1 cm) and separated by a gradient elution program. UV detection was performed at 230 nm. Fractions were collected automatically based on time. The purity of each collected fraction was determined by analytical reversed-phase high pressure liquid chromatography (RP-HPLC); when the purity of the target compound was less than 90%, it was submitted to further purification. The analytical RP-HPLC system was composed of an LC-20AD pump, a DGU-20A3R degasser, a CTO-20AC column oven, and an SPD-M20A diode array detector (Shimadzu Corporation, Kyoto, Japan). Separations were performed on an XBridge® C18 column (4.6 × 150 mm, 130 Å, 3.5 µm) coupled with an XBridge® BEH C18 VanGuard® cartridge (3.9 × 5 mm, 130 Å, 3.5 µm; Waters Corporation), unless otherwise stated. The mobile phase was composed of either MeOH or MeCN and 0.1% TFA·H2O, degassed by sonication. The flow rate was 0.5 mL/min at 25 °C, and injections were carried out through a 20 µL loop. Data analysis was performed with LabSolutions (Shimadzu Corporation).
Sponge Material
The marine sponge Petrosia (Strongylophora) sp. was collected by SCUBA divers at a depth of 10-15 m at Lembeh Island, Indonesia, in August 1999. It was cut into small pieces and air-dried at the collection site. The sponge was identified by Dr. Nicole J. de Voogd. Voucher specimens have been deposited at the National Museum of Natural History, Leiden, The Netherlands, and at the Laboratory of Natural Products for Drug Discovery, Graduate School of Pharmaceutical Sciences, Osaka University (under code number 9930H12).
Chemical Diversification of Substances Present in NP Extracts to Prepare the Diversity-Enhanced Extracts
Initially, 1 mg of each obtained fraction was transferred to a 15 mL test tube (15 × 150 mm) and dissolved in 3 mL of EtOH, 1,4-dioxane, acetonitrile, or p-xylene. The test tubes were placed in a ChemiStation PPM-5512 apparatus (EYELA, Tokyo, Japan) with a stabilized temperature setting. Temperatures were set to 80, 105, 85, and 140 °C for EtOH, 1,4-dioxane, acetonitrile, and p-xylene, respectively. After 72 h under an N2 atmosphere, the reaction mixtures were cooled to room temperature and concentrated by rotary evaporation to obtain diversity-enhanced extracts. The diversity-enhanced extracts were submitted to RP-LCMS analysis and biological evaluation. Reactions in which any change in chemical composition or bioactivity was observed after diversification were repeated to check reproducibility.
The chemical diversification conditions for the 50% MeOH fraction with EtOH were optimized by varying the concentration (0.2-1.0 mg/mL), temperature (25, 50, and 80 °C), and reaction time (24 h-7 days). The best reaction conditions were a concentration of 1 mg/3 mL at 80 °C for 72 h (Figure 2). Reactions of 1 mg of the 50% MeOH fraction were conducted in 42 test tubes, and the reaction mixtures were pooled to obtain an overall 44 mg of diversity-enhanced extracts; from this, we succeeded in isolating the new product formed by chemical treatment of substances present in the NP extract and obtained its 1H NMR data. The compound was identified as 1 by matching it with the product obtained in the experiment described in Section 3.6, which was developed later.
Larger-Scale Extraction, Prefractionation of the MeOH Crude Extract, and Purification of Naturally Occurring Compounds 2-4
To obtain a sufficient quantity of 1 for structure determination, as well as to confirm that 1 was converted from 2 by the chemical reaction, a larger-scale extraction of the sponge was conducted using the same procedure. The sponge (dry weight 203 g) was macerated in MeOH (1.5 L × 7 times) to obtain a crude MeOH extract (28 g). Prefractionation of the MeOH extract by RP-MPLC, eluted with 50% MeOH in H2O, provided 0.8 g of the 50% MeOH fraction. Subsequently, the 50% MeOH fraction was purified by RP-MPLC, eluted with 20% MeOH in 0.1% TFA·H2O, to afford 2 (221 mg) and 3 (23 mg). Further purification of the 50% MeOH fraction by RP-MPLC, eluted with 90% MeOH in 0.1% TFA·H2O, also gave 4 (323 mg).
Chemical Conversion of 2 with Heat-Treated EtOH and Purification of 1
The chemical conversion of 2 in EtOH was gradually scaled up from 1 mg to 40 mg (conc. 1 mg/3 mL). The reaction flask was placed in an oil bath at 80 °C, and the mixture was heated for 72 h under an N2 atmosphere. We examined the production of 1 by RP-LCMS at each scale-up step to ensure that the reactions proceeded without problems. The reaction mixtures were then pooled (106 mg overall) before purification.
Experiments to Investigate the Possible Reaction Mechanisms
Compound 3 (1 mg) or 4 (1 mg) was dissolved in EtOH (3 mL) and heated at 80 °C for 72 h under an N2 atmosphere. The reaction mixtures were analyzed by RP-LCMS.
To clarify whether the reaction proceeds through a radical mechanism, chemical conversions of 2 with heat-treated EtOH were conducted in the presence of galvinoxyl free radical (Tokyo Chemical Industry, Tokyo, Japan). Compound 2 (1 mg) was dissolved in EtOH (3 mL) in the presence or absence of 5% w/w galvinoxyl and then heated at 80 °C for 72 h under an N2 atmosphere. The reaction mixtures were analyzed by RP-LCMS (Figure S12).
To study the effect of temperature on the production of 1, chemical conversions of 2 with EtOH were conducted at various temperatures. Compound 2 (1 mg) was dissolved in EtOH (3 mL) and heated at 25, 50, or 80 °C for 72 h under an N2 atmosphere. The reaction mixtures were analyzed by RP-HPLC (Figure S13).
Moreover, chemical conversions of 2 (1 mg) with various aliphatic alcohols (3 mL), including MeOH, EtOH, isopropanol, and n-propanol, were conducted at their corresponding boiling temperatures for 72 h under an N2 atmosphere. The reaction mixtures were analyzed by RP-LCMS (Scheme 3).
Anticancer Assay
The antiproliferative activity against four cancer cell lines was investigated using a WST-8-based assay. Human Jurkat leukemia and HT-29 colon cells were cultured in Roswell Park Memorial Institute (RPMI)-1640 medium (Nacalai Tesque, Kyoto, Japan) and HyClone™ McCoy's 5A medium (Cytiva, MA, USA), respectively, while PANC-1 pancreatic and HeLa cervical cells were cultured in Dulbecco's Modified Eagle's Medium (DMEM, Nacalai Tesque). All culture media contained 10% fetal bovine serum (FBS, Equitech-Bio Inc., TX, USA) and 50 µg/mL kanamycin (Fujifilm Wako Pure Chemical Corporation, Osaka, Japan). Each cell line was plated in a 96-well flat-bottom plate at 2000 cells/100 µL/well and treated with serially diluted compound for 4 days. Test compounds were dissolved in DMSO as stock solutions (20 mM) and freshly diluted into the corresponding medium so that the final concentration of DMSO did not exceed 0.5%. Cells were incubated in a humidified incubator at 37 °C in an atmosphere of 5% CO2 and then incubated with WST-8 (Nacalai Tesque) for up to 4 h. The absorbance of the formazan products was measured at 450 nm using an iMark™ microplate reader (Bio-Rad, CA, USA). Mean IC50 values ± standard deviations (SD) were obtained from three independent experiments; the IC50 value of each experiment was determined using GraphPad Prism software. Cisplatin (Fujifilm Wako Pure Chemical Corporation) was used as a positive control.
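IC50 values were determined with GraphPad Prism; as a hedged illustration of what that determination involves, a minimal Python equivalent fits a four-parameter logistic curve to dose-response data. The concentrations and viabilities below are illustrative placeholders, not measured values from this study:

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, bottom, top, ic50, hill):
    """Four-parameter logistic dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (x / ic50) ** hill)

# Illustrative placeholder data: concentration (µM) vs. viability (% of control).
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
viability = np.array([98.0, 95.0, 88.0, 70.0, 45.0, 20.0, 8.0])

# Initial guesses (bottom, top, IC50, Hill slope) help the fit converge.
params, _ = curve_fit(four_pl, conc, viability, p0=[0.0, 100.0, 10.0, 1.0])
print(f"IC50 ≈ {params[2]:.1f} µM")
```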
Antimycobacterial Assay
The MIC of the tested compounds was determined by a 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay. Mycobacterium smegmatis and M. bovis BCG were grown in 7H9 broth under aerobic conditions. M. smegmatis and M. bovis BCG were inoculated into 96-well flat-bottom plates at concentrations of 10^5 and 10^6 CFU/mL, respectively, and treated with serially diluted compound. Stock solutions of the compounds in DMSO (20 mM) were diluted into the corresponding broth so that the final concentration of DMSO did not exceed 0.5%. After incubation of M. smegmatis and M. bovis BCG at 37 °C for 1 and 6 days, respectively, 10 µL of MTT solution (10 mg/mL) was added to each well, and incubation was continued at 37 °C for at least 12 h. The MTT formazan products were solubilized with 50 µL of sodium dodecyl sulphate (SDS)-dimethylformamide (DMF) solution (10% SDS in 25% DMF·H2O), and the plate was kept at room temperature for 2-4 h. The optical density was measured at 560 nm using a SpectraMax® M5 microplate reader (Molecular Devices, CA, USA). Isoniazid (Sigma-Aldrich, MO, USA) was used as a positive control.
Conclusions
To discover a new chemical conversion and a new unnatural compound from an NP extract, we first prepared a crude MeOH extract of the marine sponge Petrosia (Strongylophora) sp. and prefractionated it by reversed-phase (ODS) column chromatography. Secondly, the prefractionated extract was chemically diversified with the selected solvent under optimized conditions. Thirdly, the diversity-enhanced extract was analyzed by RP-LCMS, and a new compound formed by chemical diversification was distinguished from the original NP extract. Lastly, the new unnatural component was scaled up, purified by column chromatography, and identified by various spectroscopic techniques. This study provides a simple and rapid chemical diversification of the substances present in an NP extract with heat-treated EtOH, which produced a new, unexpected alkaloid derivative (1) from the naturally occurring alkaloid 2 via alkoxylation at C-6. The reaction mechanism has not been fully clarified; however, the results suggest that a nitrogen of the 3-carbamoylpyrrole ring A might be important for the introduction of the angular ethoxy group and that the formation of 1 is markedly sensitive to temperature. Moreover, two known natural compounds (3 and 4) were identified; none of compounds 2-4 had previously been reported from this genus.
A Resource-Free Evaluation Metric for Cross-Lingual Word Embeddings Based on Graph Modularity
Cross-lingual word embeddings encode the meaning of words from different languages into a shared low-dimensional space. An important requirement for many downstream tasks is that word similarity should be independent of language, i.e., word vectors within one language should not be more similar to each other than to words in another language. We measure this characteristic using modularity, a network measure of the strength of clusters in a graph. Modularity has a moderate to strong correlation with three downstream tasks, even though modularity is based only on the structure of the embeddings and does not require any external resources. We show through experiments that modularity can serve as an intrinsic validation metric to improve unsupervised cross-lingual word embeddings, particularly on distant language pairs in low-resource settings.
Typically, the quality of cross-lingual word embeddings is measured with respect to how well they improve a downstream task. However, it is sometimes not possible to evaluate embeddings on a specific downstream task, for example a future task that does not yet have data, or a rare language that lacks the resources to support traditional evaluation. In such settings, it is useful to have an intrinsic evaluation metric: a metric that looks at the embedding space itself to judge whether the embedding is good, without resorting to an extrinsic task. While extrinsic tasks are the ultimate arbiter of whether cross-lingual word embeddings work, intrinsic metrics are useful for low-resource languages, where one often lacks the annotated data that would make an extrinsic evaluation possible.
However, few intrinsic measures exist for cross-lingual word embeddings, and those that do exist require external linguistic resources (e.g., sense-aligned corpora in Ammar et al. (2016)). The requirement of language resources makes this approach limited or impossible for low-resource languages, which are the languages where intrinsic evaluations are most needed. Moreover, requiring language resources can bias the evaluation toward words in the resources rather than evaluating the embedding space as a whole.
Our solution is a graph-based metric that considers the characteristics of the embedding space without using linguistic resources. To sketch the idea, imagine a cross-lingual word embedding space in which it is possible to draw a hyperplane that separates all word vectors in one language from all vectors in another. Without knowing anything about the languages, it is easy to see that this is a problematic embedding: the representations of the two languages lie in distinct parts of the space rather than sharing a space. While this example is exaggerated, this characteristic of vectors clustering by language often appears within smaller neighborhoods of the embedding space; we want to discover these clusters.
To measure how well word embeddings are mixed across languages, we draw on concepts from network science. Specifically, some cross-lingual word embeddings are modular by language: vectors in one language are consistently closer to each other than to vectors in the other language (Figure 1). When embeddings are modular, they often fail on downstream tasks (Section 2). Modularity is a concept from network theory (Section 3); because network theory applies to graphs, we turn our word embeddings into a graph by connecting nearest neighbors (based on vector similarity) to each other. Our hypothesis is that modularity will predict how useful the embedding is in downstream tasks; low-modularity embeddings should work better.
We explore the relationship between modularity and three downstream tasks (Section 4) that use cross-lingual word embeddings differently: (i) cross-lingual document classification; (ii) bilingual lexical induction in Italian, Japanese, Spanish, and Danish; and (iii) low-resource document retrieval in Hungarian and Amharic, finding moderate to strong negative correlations between modularity and performance. Furthermore, using modularity as a validation metric (Section 5) makes MUSE (Conneau et al., 2018), an unsupervised model, more robust on distant language pairs. Compared to other existing intrinsic evaluation metrics, modularity captures complementary properties and is more predictive of downstream performance despite needing no external resources (Section 6).
Background: Cross-Lingual Word Embeddings and their Evaluation
There are many approaches to training cross-lingual word embeddings. This section reviews the embeddings we consider in this paper, along with existing work on evaluating those embeddings.
Cross-Lingual Word Embeddings
We focus on methods that learn a cross-lingual vector space through a post-hoc mapping between independently constructed monolingual embeddings (Mikolov et al., 2013a; Vulić and Korhonen, 2016). Given two separate monolingual embeddings and a bilingual seed lexicon, a projection matrix can map translation pairs in a given bilingual lexicon to be near each other in a shared embedding space. A key assumption is that cross-lingually coherent words have "similar geometric arrangements" (Mikolov et al., 2013a) in the embedding space, enabling "knowledge transfer between languages" (Ruder et al., 2017). We focus on mapping-based approaches for two reasons. First, these approaches are applicable to low-resource languages because they do not require large bilingual dictionaries or parallel corpora (Artetxe et al., 2017; Conneau et al., 2018). Second, this focus separates the word embedding task from the cross-lingual mapping, which allows us to focus on evaluating the specific multilingual component in Section 4.
Evaluating Cross-Lingual Embeddings
Most work on evaluating cross-lingual embeddings focuses on extrinsic evaluation on downstream tasks (Upadhyay et al., 2016; Glavas et al., 2019). However, intrinsic evaluations are crucial, since many low-resource languages lack the annotations needed for downstream tasks. Thus, our goal is to develop an intrinsic measure that correlates with downstream tasks without using any external resources. This section summarizes existing intrinsic evaluation methods for cross-lingual embeddings.
One widely used intrinsic measure for evaluating the coherence of monolingual embeddings is QVEC (Tsvetkov et al., 2015). Ammar et al. (2016) extend QVEC by using canonical correlation analysis (QVEC-CCA) to make the scores comparable across embeddings with different dimensions. However, while both QVEC and QVEC-CCA can be extended to cross-lingual word embeddings, they are limited: they require external annotated corpora. This is problematic in cross-lingual settings, since it requires annotation to be consistent across languages (Ammar et al., 2016).
Other internal metrics do not require external resources, but they consider only part of the embedding space. Conneau et al. (2018) and Artetxe et al. (2018a) use a validation metric that calculates the similarities of cross-lingual neighbors to conduct model selection. Our approach differs in that we consider whether cross-lingual nearest neighbors are relatively closer than intra-lingual nearest neighbors. Søgaard et al. (2018) use the similarities of intra-lingual neighbors and compute the graph similarity between two monolingual lexical subgraphs built from subsampled words in a bilingual lexicon. They further show that the resulting graph similarity has a high correlation with bilingual lexical induction on MUSE (Conneau et al., 2018). However, their graph similarity still uses only intra-lingual similarities, not cross-lingual similarities.
These existing metrics are limited in that they either require external resources or consider only part of the embedding structure (e.g., intra-lingual but not cross-lingual neighbors). In contrast, our work develops an intrinsic metric that is highly correlated with multiple downstream tasks, requires no external resources, and considers both intra- and cross-lingual neighbors.
Related Work A related line of work is the intrinsic evaluation of probabilistic topic models, which are another low-dimensional representation of words, similar to word embeddings. Metrics based on word co-occurrences have been developed for measuring the monolingual coherence of topics (Newman et al., 2010; Mimno et al., 2011; Lau et al., 2014). Less work has studied the evaluation of cross-lingual topics (Mimno et al., 2009). Some researchers have measured the overlap of direct translations across topics (Boyd-Graber and Blei, 2009), while Hao et al. (2018) propose a metric based on co-occurrences across languages that is more general than direct translations.
Approach: Graph-Based Diagnostics for Detecting Clustering by Language
This section describes our graph-based approach to measure the intrinsic quality of a cross-lingual embedding space.
Embeddings as Lexical Graphs
We posit that we can understand the quality of cross-lingual embeddings by analyzing the characteristics of a lexical graph (Pelevina et al., 2016; Hamilton et al., 2016). The lexical graph has words as nodes and edges weighted by their similarity in the embedding space. Given a pair of words (i, j) with associated word vectors (v_i, v_j), we compute the similarity between the two words as their vector similarity and encode it in a weighted adjacency matrix A: $A_{ij} = \max(0, \cos(v_i, v_j))$. However, nodes are connected only to their k-nearest neighbors (Section 6.2 examines the sensitivity to k); all other edges become zero. Finally, each node i has a label g_i indicating the word's language.
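For illustration, this construction can be written in a few lines of NumPy. This is a minimal sketch under the definitions above (cosine weights clipped at zero, k-nearest-neighbor sparsification); the function name and the final symmetrization step are our own choices, not specified in the text:

```python
import numpy as np

def knn_lexical_graph(V, k=3):
    """Weighted adjacency A: cosine similarity, kept only for k-nearest neighbors.

    V: (n, d) array of word vectors (rows assumed non-zero).
    Returns a dense (n, n) adjacency matrix with a zero diagonal.
    """
    X = V / np.linalg.norm(V, axis=1, keepdims=True)  # unit-normalize rows
    S = X @ X.T                                       # pairwise cosine similarities
    np.fill_diagonal(S, -np.inf)                      # forbid self-edges
    A = np.zeros_like(S)
    for i in range(S.shape[0]):
        nn = np.argpartition(-S[i], k)[:k]            # indices of the k nearest neighbors
        A[i, nn] = np.maximum(S[i, nn], 0.0)          # A_ij = max(0, cos(v_i, v_j))
    return np.maximum(A, A.T)                         # symmetrize (one possible choice)
```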
Clustering by Language
We focus on a phenomenon that we call "clustering by language", where word vectors in the embedding space tend to be more similar to words in the same language than to words in the other. For example, in Figure 2, the intra-lingual nearest neighbors of "slow" have higher similarity in the embedding space than semantically related cross-lingual words. This indicates that words are represented differently across the two languages; our hypothesis is thus that clustering by language degrades the quality of cross-lingual embeddings when used in downstream tasks.
Modularity of Lexical Graphs
With a labeled graph, we can now ask whether the graph is modular (Newman, 2010). In a cross-lingual lexical graph, modularity is the degree to which words are more similar to words in the same language than to words in a different language. High modularity is undesirable, because it means the representation of words is not transferred across languages.
If the nearest neighbors of words are instead mostly within the same language, then the languages are not mapped into the cross-lingual space consistently. In our setting, the language l of each word defines its group, and high modularity indicates that embeddings are more similar within languages than across languages (Newman, 2003; Newman and Girvan, 2004). In other words, good embeddings should have low modularity. Conceptually, the modularity of a lexical graph is the difference between the proportion of edges in the graph that connect two nodes from the same language and the expected proportion of such edges in a randomly connected lexical graph. If edges were random, the number of edges starting from node i within the same language would be the degree of node i, $d_i = \sum_j A_{ij}$ for a weighted graph (following Newman, 2004), times the proportion of words in that language. Summing over all nodes and normalizing by the total edge weight gives the expected fraction of edges within language l,

$$a_l = \frac{1}{2m} \sum_i d_i \, \mathbb{1}[g_i = l], \qquad (1)$$

where m is the number of edges, g_i is the label of node i, and $\mathbb{1}[\cdot]$ is an indicator function that evaluates to 1 if the argument is true and 0 otherwise. Next, we count the fraction of edges $e_{ll}$ that connect words of the same language:

$$e_{ll} = \frac{1}{2m} \sum_{ij} A_{ij} \, \mathbb{1}[g_i = l] \, \mathbb{1}[g_j = l]. \qquad (2)$$

Given L different languages, we calculate the overall modularity Q by taking the difference between $e_{ll}$ and $a_l^2$ for all languages:

$$Q = \sum_{l=1}^{L} \left( e_{ll} - a_l^2 \right). \qquad (3)$$

Since Q does not necessarily have a maximum value of 1, we normalize modularity:

$$Q_{\mathrm{norm}} = \frac{Q}{Q_{\max}}, \quad Q_{\max} = 1 - \sum_{l=1}^{L} a_l^2. \qquad (4)$$

The higher the modularity, the more words from the same language appear as nearest neighbors. Figure 1 shows examples of a lexical subgraph with low modularity (left, Q_norm = 0.143) and high modularity (right, Q_norm = 0.672). In Figure 1b, the lexical graph is modular since "firefox" does not encode the same sense in both languages.
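Equations (1)-(4) translate directly into code. The following minimal NumPy sketch computes normalized modularity for a weighted, labeled lexical graph; function and variable names are illustrative:

```python
import numpy as np

def normalized_modularity(A, labels):
    """Normalized modularity Q_norm of a weighted graph with language labels.

    A: (n, n) symmetric weighted adjacency matrix.
    labels: length-n array of language ids (e.g., 0 = EN, 1 = JA).
    """
    labels = np.asarray(labels)
    d = A.sum(axis=1)   # weighted degrees d_i = sum_j A_ij
    two_m = A.sum()     # 2m: each edge of the symmetric graph is counted twice
    Q, sq = 0.0, 0.0
    for l in np.unique(labels):
        mask = labels == l
        e_ll = A[np.ix_(mask, mask)].sum() / two_m  # fraction of within-language edges
        a_l = d[mask].sum() / two_m                 # expected fraction under random edges
        Q += e_ll - a_l ** 2
        sq += a_l ** 2
    return Q / (1.0 - sq)                           # Q_max = 1 - sum_l a_l^2

# Low Q_norm indicates well-mixed embeddings; high Q_norm indicates clustering by language.
```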
Our hypothesis is that cross-lingual word embeddings with lower modularity will be more successful in downstream tasks. If this hypothesis holds, then modularity could be a useful metric for cross-lingual evaluation.
Experiments: Correlation of Modularity with Downstream Success
We now investigate whether modularity can predict the effectiveness of cross-lingual word embeddings on three downstream tasks: (i) cross-lingual document classification, (ii) bilingual lexical induction, and (iii) document retrieval in low-resource languages. If modularity correlates with task performance, it can characterize embedding quality.
Data
To investigate the relationship between embedding effectiveness and modularity, we explore five different cross-lingual word embeddings on six language pairs (Table 1).
Monolingual Word Embeddings All monolingual embeddings are trained using a skip-gram model with negative sampling (Mikolov et al., 2013b). The dimension size is 100 or 200. All other hyperparameters are the defaults in Gensim (Řehůřek and Sojka, 2010). News articles, except for Amharic, are from the Leipzig Corpora (Goldhahn et al., 2012). For Amharic, we use documents from LORELEI (Strassel and Tracey, 2016). MeCab (Kudo et al., 2004) tokenizes the Japanese corpus.
Cross-Lingual Word Embeddings Mikolov et al. (2013a, MSE) minimize the mean-squared error of bilingual entries in a seed lexicon to learn a projection between two embeddings; we use the implementation by Artetxe et al. (2016). Artetxe et al. (2016, MSE+Orth) extend MSE by unit-length normalizing and mean centering the vectors and constraining the mapping to be orthogonal. Faruqui and Dyer (2014, CCA) map two monolingual embeddings into a shared space by maximizing the correlation between translation pairs in a seed lexicon. Conneau et al. (2018, MUSE) use language-adversarial learning (Ganin et al., 2016) to induce the initial bilingual seed lexicon, followed by a refinement step, which iteratively solves the orthogonal Procrustes problem (Schönemann, 1966; Artetxe et al., 2017), aligning embeddings without an external bilingual lexicon. Like MSE+Orth, vectors are unit length and mean centered. Since MUSE is unstable (Artetxe et al., 2018a; Søgaard et al., 2018), we report the best of five runs. Artetxe et al. (2018a, VECMAP) induce an initial bilingual seed lexicon by aligning intra-lingual similarity matrices computed from each monolingual embedding. We report the best of five runs to address uncertainty from the initial dictionary.
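For reference, training such a monolingual embedding with the modern Gensim (4.x) API looks roughly as follows. Only the skip-gram setting and the dimensionality are taken from the text; `sentences` is a placeholder for the tokenized corpus, and the output filename is illustrative:

```python
from gensim.models import Word2Vec

# sentences: an iterable of tokenized sentences from the monolingual corpus (placeholder).
# sg=1 selects skip-gram; negative=5 is negative sampling; other settings are Gensim defaults.
model = Word2Vec(sentences, vector_size=100, sg=1, negative=5, workers=4)

# Save the word vectors in the standard word2vec text format for later mapping.
model.wv.save_word2vec_format("mono.vec")
```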
Modularity Implementation
We implement modularity using random projection trees (Dasgupta and Freund, 2008) to speed up the extraction of k-nearest neighbors, tuning k = 3 on the German Rcv2 dataset (Section 6.2).
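The text does not name a specific library; as one possible realization, the Annoy library implements random-projection-tree approximate nearest neighbors. A hedged sketch (the tree count follows the tuning in Section 6.2, but the library choice is our assumption):

```python
from annoy import AnnoyIndex  # one possible random-projection-tree ANN library

def approximate_knn(V, k=3, n_trees=450):
    """Approximate k-nearest neighbors for each row of V via random projection trees."""
    dim = V.shape[1]
    index = AnnoyIndex(dim, "angular")      # angular distance corresponds to cosine similarity
    for i, v in enumerate(V):
        index.add_item(i, v)
    index.build(n_trees)                    # t = 450 trees, per the grid search in Section 6.2
    # get_nns_by_item returns the query item itself first, so drop it.
    return [index.get_nns_by_item(i, k + 1)[1:] for i in range(V.shape[0])]
```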
Task 1: Document Classification
We now explore the correlation of modularity and accuracy on cross-lingual document classification. We classify documents from the Reuters Rcv1 and Rcv2 corpora (Lewis et al., 2004). Documents have one of four labels (Corporate/Industrial, Economics, Government/Social, Markets). We follow Klementiev et al. (2012), except that we use all EN training documents and documents in each target language (DA, ES, IT, and JA) as tuning and test data. After removing out-of-vocabulary words, we split the documents in the target languages into 10% tuning data and 90% test data. We train a deep averaging network (DAN) classifier; the DAN had better accuracy than the averaged perceptron (Collins, 2002) used in Klementiev et al. (2012).
Results
We report the correlation value computed from the data points in Figure 3. MUSE has the best average classification accuracy (Table 2), reflected by its low modularity.
Error Analysis A common error in EN → JA classification is predicting Corporate/Industrial for documents labeled Markets. One cause is documents containing 終値 "closing price"; this word has few market-based English neighbors (Table 3). As a result, the model fails to transfer across languages.
Task 2: Bilingual Lexical Induction (BLI)
Our second downstream task explores the correlation between modularity and bilingual lexical induction (BLI). We evaluate on the test set from Conneau et al. (2018), but we remove pairs present in the seed lexicon from Rolston and Kirchhoff (2016). The result is 2,099 translation pairs for ES, 1,358 for IT, 450 for DA, and 973 for JA. We report precision@1 (P@1) for retrieving cross-lingual nearest neighbors by cross-domain similarity local scaling (Conneau et al., 2018, CSLS).
Results Although this task ignores intra-lingual nearest neighbors when retrieving translations, modularity still has a high correlation (ρ = −0.785) with P@1 (Figure 4). MUSE and VECMAP, which have the lowest modularity, beat the three supervised methods (Table 4). P@1 is low compared to other work on the MUSE test set (e.g., Conneau et al. (2018)) because we filter out translation pairs that appeared in the large training lexicon compiled by Rolston and Kirchhoff (2016), and the raw corpora used to train the monolingual embeddings (Table 1) are relatively small compared to Wikipedia.
Task 3: Document Retrieval in Low-Resource Languages
As a third downstream task, we turn to an important task for low-resource languages: lexicon expansion (Gupta and Manning, 2015; Hamilton et al., 2016) for document retrieval. Specifically, we start with a set of EN seed words relevant to a particular concept and then find related words in a target language for which a comprehensive bilingual lexicon does not exist. We focus on the disaster domain, where events may require immediate NLP analysis (e.g., sorting SMS messages to first responders).
We induce keywords in a target language by taking the n nearest neighbors of the English seed words in a cross-lingual word embedding. We manually select sixteen disaster-related English seed words from the Wikipedia articles "Natural hazard" and "Anthropogenic hazard". Examples of seed terms include "earthquake" and "flood". Using the extracted terms, we retrieve disaster-related documents by keyword matching and assess the coverage and relevance of the terms by the area under the precision-recall curve (AUC) with varying n.
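A minimal sketch of this expansion-and-retrieval loop is given below. All names, the neighborhood sizes, and the exact matching rule are illustrative assumptions; the text does not fix them:

```python
import numpy as np
from sklearn.metrics import auc

def expand_and_score(seed_ids, E_src, E_tgt, tgt_vocab, docs, labels,
                     ns=(1, 5, 10, 20, 50)):
    """Expand EN seed words into target-language keywords and score retrieval.

    seed_ids: indices of seed words in E_src (unit-normalized embeddings assumed).
    docs: list of token sets per document; labels: 1 = disaster-related, 0 = not.
    Returns the area under the precision-recall curve across neighborhood sizes n.
    """
    labels = np.asarray(labels)
    sims = E_src[seed_ids] @ E_tgt.T              # seed-to-target cosine similarities
    precisions, recalls = [], []
    for n in ns:
        nn = np.argsort(-sims, axis=1)[:, :n].ravel()        # n neighbors per seed word
        keywords = {tgt_vocab[i] for i in nn}
        hits = np.array([bool(keywords & d) for d in docs])  # keyword-match retrieval
        tp = int((hits & (labels == 1)).sum())
        precisions.append(tp / max(int(hits.sum()), 1))
        recalls.append(tp / int((labels == 1).sum()))
    order = np.argsort(recalls)                   # auc() requires sorted x values
    return auc(np.array(recalls)[order], np.array(precisions)[order])
```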
Test Corpora As positively labeled documents, we use documents from the LORELEI project (Strassel and Tracey, 2016) containing any disaster-related annotation. There are 64 disaster-related documents in Amharic and 117 in Hungarian. We construct a set of negatively labeled documents from the Bible: because the LORELEI corpus does not include negative documents and the Bible is available in all our languages (Christodouloupoulos and Steedman, 2015), we take the chapters of the gospels (89 documents), which do not discuss disasters, and treat them as non-disaster-related documents.
Results Modularity has a moderate correlation with AUC (ρ = −0.378, Table 5). While modularity considers the entire vocabulary of the cross-lingual word embeddings, this task focuses on a small, specific subset (disaster-relevant words), which may explain the lower correlation compared to BLI or document classification.
Use Case: Model Selection for MUSE
A common use case of intrinsic measures is model selection. We focus on MUSE (Conneau et al., 2018) since it is unstable, especially on distant language pairs (Artetxe et al., 2018a; Søgaard et al., 2018; Hoshen and Wolf, 2018), and therefore requires an effective metric for model selection. MUSE uses a validation metric in its two steps: (1) the language-adversarial step and (2) the refinement step. First, the algorithm selects an optimal mapping W, obtained from language-adversarial learning (Ganin et al., 2016), using a validation metric. The selected mapping W from the language-adversarial step is then passed on to the refinement step (Artetxe et al., 2017), which re-selects the optimal mapping W using the same validation metric after each epoch of solving the orthogonal Procrustes problem (Schönemann, 1966). Normally, MUSE uses an intrinsic metric, CSLS over the top 10K most frequent words (Conneau et al., 2018, CSLS-10K). Given word vectors s, t ∈ R^n from a source and a target embedding, CSLS is the cross-lingual similarity metric

$$\mathrm{CSLS}(Ws, t) = 2\cos(Ws, t) - r(Ws) - r(t), \qquad (5)$$

where W is the trained mapping after each epoch, and r(x) is the average cosine similarity of the top 10 cross-lingual nearest neighbors of a word x.
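Equation (5) can be computed for all source-target pairs at once. The following NumPy sketch is illustrative; it assumes the full dense similarity matrix fits in memory and uses the top 10 neighbors for r(·), as in the text:

```python
import numpy as np

def csls(Ws, T, k=10):
    """Pairwise CSLS scores between mapped source vectors Ws and target vectors T (Eq. 5).

    Ws: (ns, d) mapped source embeddings; T: (nt, d) target embeddings.
    Returns an (ns, nt) matrix whose (i, j) entry is CSLS(Ws_i, t_j).
    """
    Ws = Ws / np.linalg.norm(Ws, axis=1, keepdims=True)
    T = T / np.linalg.norm(T, axis=1, keepdims=True)
    cos = Ws @ T.T                                    # pairwise cosine similarities
    r_s = np.sort(cos, axis=1)[:, -k:].mean(axis=1)   # r(Ws_i): avg over top-k targets
    r_t = np.sort(cos, axis=0)[-k:, :].mean(axis=0)   # r(t_j): avg over top-k sources
    return 2 * cos - r_s[:, None] - r_t[None, :]
```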
What if we use modularity instead? To test modularity as a validation metric for MUSE, we compute modularity on the lexical graph of the 10K most frequent words (Mod-10K; we use 10K for consistency with CSLS on the same words) after each epoch of the adversarial step and the refinement step, and select the best mapping.
The important difference between these two metrics is that Mod-10K considers the relative similarities between intra- and cross-lingual neighbors, while CSLS-10K only considers the similarities of cross-lingual nearest neighbors.
Experiment Setup We use the pre-trained fastText vectors (Bojanowski et al., 2017) to be comparable with prior work. Following Artetxe et al. (2018a), all vectors are unit-length normalized, mean centered, and then unit-length normalized again. We use the test lexicon from Conneau et al. (2018). We run MUSE ten times with the same random seeds and hyperparameters but different validation metrics. Since MUSE is unstable on distant language pairs (Artetxe et al., 2018a; Søgaard et al., 2018; Hoshen and Wolf, 2018), we test it on English to languages from diverse language families: Indo-European languages such as Danish (DA), German (DE), Spanish (ES), Farsi (FA), Italian (IT), Hindi (HI), and Bengali (BN), and non-Indo-European languages such as Finnish (FI), Hungarian (HU), Japanese (JA), Chinese (ZH), Korean (KO), Arabic (AR), Indonesian (ID), and Vietnamese (VI).
Results Table 6 shows P@1 on BLI for each target language using English as the source language. Mod-10K improves P@1 over the default validation metric across diverse languages, especially the average P@1 for non-Germanic languages such as JA (+18.00%) and ZH (+5.74%), and the best P@1 for KO (+1.85%). These language pairs include pairs (EN-JA and EN-HI) that are difficult for MUSE (Hoshen and Wolf, 2018). Improvements in JA come from selecting a better mapping during the refinement step, which the default validation metric misses. For ZH, HI, and KO, the improvement comes from selecting better mappings during the adversarial step. However, modularity does not improve all languages (e.g., VI) that are reported to fail by Hoshen and Wolf (2018).
Analysis: Understanding Modularity as an Evaluation Metric
The experiments so far show that modularity captures whether an embedding is useful, which suggests that modularity could be used as an intrinsic evaluation or validation metric. Here, we investigate whether modularity captures information distinct from that of existing evaluation measures: QVEC-CCA (Ammar et al., 2016), CSLS (Conneau et al., 2018), and cosine similarity between translation pairs (Section 6.1). We also analyze the effect of the number of nearest neighbors k (Section 6.2).
Ablation Study Using Linear Regression
We fit a linear regression model to predict classification accuracy given four intrinsic measures: QVEC-CCA, CSLS, average cosine similarity of translations, and modularity. We standardize the values of the four features and ablate each of the four measures, fitting a linear regression for two target languages (IT and DA) on the task of cross-lingual document classification (Figure 3). We limit the analysis to IT and DA because the supersense annotations aligned to EN ones (Miller et al., 1993), required for QVEC-CCA, are only available in those languages (Montemagni et al., 2003; Martínez Alonso et al., 2015; Martínez Alonso et al., 2016; Ammar et al., 2016).
Omitting modularity hurts accuracy prediction on cross-lingual document classification substantially, while omitting the other three measures has smaller effects (Figure 5). Thus, modularity complements the other measures and is more predictive of classification accuracy.
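For concreteness, the ablation can be reproduced with scikit-learn along the following lines. This is a sketch: the feature matrix and names are placeholders, not the study's data:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

def ablation_r2(features, accuracy):
    """R^2 of a standardized linear regression, ablating one feature at a time.

    features: (n_embeddings, 4) array, columns = [QVEC-CCA, CSLS, avg. cosine, modularity].
    accuracy: (n_embeddings,) classification accuracies. Both are placeholders here.
    """
    X = StandardScaler().fit_transform(features)          # standardize feature values
    full = LinearRegression().fit(X, accuracy).score(X, accuracy)
    drops = {}
    for j in range(X.shape[1]):
        X_abl = np.delete(X, j, axis=1)                   # drop one measure
        drops[j] = full - LinearRegression().fit(X_abl, accuracy).score(X_abl, accuracy)
    return full, drops  # a large drop means that feature carries complementary information
```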
Hyperparameter Sensitivity
While modularity itself does not have any adjustable hyperparameters, our approach to constructing the lexical graph has two: the number of nearest neighbors (k) and the number of trees (t) for approximating the k-nearest neighbors using random projection trees. We conduct a grid search over k ∈ {1, 3, 5, 10, 50, 100, 150, 200} and t ∈ {50, 100, 150, 200, 250, 300, 350, 400, 450, 500}, using the German Rcv2 corpus as the held-out language for tuning.
The number of nearest neighbors k has a much larger effect on modularity than t, so we focus on analyzing the effect of k, using the optimal t = 450. Our earlier experiments all use k = 3, since it gives the highest Pearson's and Spearman's correlations on the tuning dataset (Figure 6). The absolute correlation with the downstream task decreases when setting k > 3, indicating that nearest neighbors beyond k = 3 only contribute noise.
Discussion: What Modularity Can and Cannot Do
This work focuses on modularity as a diagnostic tool: it is cheap and effective at discovering which embeddings are likely to falter on downstream tasks. Thus, practitioners should consider including it as a metric for evaluating the quality of their embeddings. Additionally, we believe that modularity could serve as a useful prior for algorithms that learn cross-lingual word embeddings: during learning, prefer updates that avoid increasing modularity if all else is equal. Nevertheless, we recognize the limitations of modularity. Consider the following cross-lingual word embedding "algorithm": for each word, select a random point on the unit hypersphere. This is a horrible distributed representation: the positions of the words' embeddings have no relationship to the underlying meaning. Nevertheless, this representation will have very low modularity. Thus, while modularity can identify bad embeddings, once vectors are well mixed, this metric, unlike QVEC or QVEC-CCA, cannot identify whether the meanings make sense. Future work should investigate how to combine techniques that use both word meaning and nearest neighbors for a more robust, semi-supervised cross-lingual evaluation.
Figure 1: An example of a low-modularity (languages mixed) and a high-modularity cross-lingual word embedding lexical graph, using k-nearest neighbors of "eat" (left) and "firefox" (right) in English and Japanese.
Figure 2: Local t-SNE (van der Maaten and Hinton, 2008) of an EN-JA cross-lingual word embedding, showing an example of "clustering by language".
Figure 5: Predicting the cross-lingual document classification results for DA and IT from Figure 3 using three out of four evaluation metrics. Ablating modularity causes by far the largest decrease in R^2 (R^2 = 0.814 when using all four features), showing that it captures information complementary to the other metrics.
Figure 6: Correlation between modularity and classification performance (EN→DE) with different numbers of neighbors k. Correlations are computed in the same setting as Figure 3 using supervised methods. We use this to set k = 3.
Table 1: Dataset statistics (source and number of tokens) for each language, including both Indo-European and non-Indo-European languages.
Table 2: Average classification accuracy on (EN → DA, ES, IT, JA) along with the average modularity of five cross-lingual word embeddings. MUSE has the best accuracy, captured by its low modularity.
Table 3: Nearest neighbors in an EN-JA embedding. Unlike the JA word "market", the JA word "closing price" has no EN vector nearby.
Table 4: Average precision@1 on (EN → DA, ES, IT, JA) along with the average modularity of the cross-lingual word embeddings trained with different methods. VECMAP scores the best P@1, which is captured by its low modularity.
Table 6: Bold values are mappings that are not shared between the two validation metrics. Mod-10K improves the robustness of MUSE on distant language pairs.
Population Dynamics of Insect Pests and Beneficials on Different Snap Bean Cultivars
Simple Summary A study was conducted to survey the populations of insect pests and beneficials on different cultivars of snap bean in Georgia, USA. It is important to conserve the beneficials and to understand both the abundance and diversity of beneficials and pests in crops. The population dynamics of insect pests, pollinators, and natural enemies were evaluated on 24 snap bean cultivars weekly for six weeks. The number of sweetpotato whitefly eggs was lowest on cultivar ‘Jade’, whereas cultivars ‘Gold Mine’, ‘Golden Rod’, ‘Long Tendergreen’, and ‘Royal Burgundy’ supported the fewest whitefly nymphs. Cultivars ‘Greencrop’ and ‘PV-857’ harbored fewer adult potato leafhoppers and tarnished plant bugs. The population peaks of adults were observed in Week 1 (25 days after plants emerged) for whitefly and Mexican bean beetle; Week 3 for cucumber beetle, kudzu bug, and potato leafhopper; Weeks 3 and 4 for thrips; Week 4 for tarnished plant bugs; and Weeks 5 and 6 for bees. Temperature and relative humidity correlated with the populations of whitefly, Mexican bean beetle, bees, and predator ladybird beetle. These results contribute crucial information to the agricultural community for the management of insect pests on snap beans.
Abstract Snap bean is an important crop in the United States. Insecticides are commonly used against pests on snap bean, but many pests have developed resistance to the insecticides, and beneficials are threatened by the insecticides. Therefore, host plant resistance is a sustainable alternative. Population dynamics of insect pests and beneficials were assessed on 24 snap bean cultivars every week for six weeks. The lowest number of sweetpotato whitefly (Bemisia tabaci) eggs was observed on cultivar ‘Jade’, and the fewest nymphs were found on cultivars ‘Gold Mine’, ‘Golden Rod’, ‘Long Tendergreen’, and ‘Royal Burgundy’. The numbers of potato leafhopper (Empoasca fabae) and tarnished plant bug (Lygus lineolaris) adults were the lowest on cultivars ‘Greencrop’ and ‘PV-857’. The highest numbers of adults were found in Week 1 (25 days following plant emergence) for B. tabaci and Mexican bean beetle (Epilachna varivestis); Week 3 for cucumber beetle, kudzu bug (Megacopta cribraria), and E. fabae; Weeks 3 and 4 for thrips; Week 4 for L. lineolaris; and Weeks 5 and 6 for bees. Temperature and relative humidity correlated with B. tabaci, E. varivestis, bee, and predator ladybird beetle populations. These results provide valuable information on the integrated pest management of snap beans.
Introduction
Snap bean (Phaseolus vulgaris L. (Fabales: Fabaceae)) is an economically important vegetable in the United States, including the state of Georgia [1,2]. The production of snap bean for all uses averaged 330,693 tons during 2018-2020 in the United States [2]. The 2017 United States Census of Agriculture reported Florida, Georgia, Tennessee, California, Texas, North Carolina, New Jersey, Ohio, and New York as the leading fresh-market snap bean-producing states.
Seeds of all cultivars were sown on 26 August 2020 for the fall season and on 6 April 2021 for the spring season. The seeds were sown following the local cultural practices, with a 3.8 cm sowing depth and 7.6 cm in-row spacing. The row was 3.0 m in length and the row spacing was 0.9 m. Forty-one plants were sown per row [13].
In both seasons, S-metolachlor herbicide (trade name: Dual Magnum; Syngenta Crop Protection, Inc., Greensboro, NC, USA) was applied to the experimental field at a rate of 1167.90 mL/ha the next day after sowing the seeds. No insecticide was used in the experimental field during the two seasons. Fertilizers were not applied in the experimental field during the 2020 fall season. For the 2021 spring season, the field was fertilized with N-P-K (nitrogen-phosphorus-potassium) (19-19-19) at 224 kg/ha using a farming fertilizer spreader before sowing was initiated. The experimental field was only irrigated by rainfall during the 2020 fall season. Beyond rainfall, the only irrigation that was provided to the plots was overhead irrigation once using a farm irrigation sprinkler system on the fourth week of sampling in the 2021 spring season. The cultural practices were reported in our previous study [13].
Experiment Design and Layout
The experiment was set up as a randomized complete block design in both seasons (2020 fall and 2021 spring). Three blocks were included in each season. Each block comprised two big columns, with 12 experimental plots per big column and four rows per experimental plot (Figure 1). Each experimental plot had 164 snap bean plants in an area of 8.1 m². In both seasons, each of the 24 snap bean cultivars was randomly assigned to one of 24 experimental plots within each block [13].
The experimental field was situated at least 9.1 m (30 feet) from the adjacent fields ( Figure 1). The inter-block distance was 5.49 m (18 feet) ( Figure 1). The distance between the two big columns within each block was 2.44 m (8 feet) ( Figure 1). The layout map of the experimental field in fall of 2020 was included to indicate the experimental setup ( Figure 1). In the 2020 fall season, there was a peach (Prunus persica (L.) (Rosales: Rosaceae)) orchard on the east of the experimental field, a cotton (Gossypium hirsutum (L.) (Malvales: Malvaceae)) field to the north, a soybean (Glycine max (L.) Merrill (Fabales: Fabaceae)) field to the south, and a princess tree (Paulownia tomentosa (Steud.) (Lamiales: Paulowniaceae)) orchard to the west ( Figure 1). The experiment was conducted in the same field using a similar layout map during the 2021 spring season except that there was only a peach orchard to the east and a princess tree orchard to the west.
Weather Information
During the experimental periods in both seasons, all climatic data (temperature, relative humidity, and rainfall) were acquired from a Georgia weather station at Fort Valley State University (Fort Valley, GA, USA) located approximately 3.2 km from the experimental field. Data on daily climatic variables (minimum temperature (°C), average temperature (°C), maximum temperature (°C), minimum relative humidity (%), average relative humidity (%), maximum relative humidity (%), and rainfall (mm)) were averaged for each sampling week (Table 1) [13].
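In practice, this weekly aggregation reduces to a group-by mean over the daily records. A minimal sketch in Python/pandas is given below; the column names and simulated values are placeholders and do not reflect the station's actual export format.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated daily station records spanning the six sampling weeks;
# column names are illustrative placeholders.
daily = pd.DataFrame({
    "date": pd.date_range("2020-09-21", periods=42, freq="D"),
    "temp_min_c": rng.normal(15, 3, 42),
    "temp_avg_c": rng.normal(21, 3, 42),
    "temp_max_c": rng.normal(27, 3, 42),
    "rh_avg_pct": rng.normal(75, 8, 42),
    "rain_mm": rng.exponential(2.0, 42),
})

# Assign each day to sampling Weeks 1-6 (seven consecutive days per week)
daily["week"] = np.arange(len(daily)) // 7 + 1

# Weekly means of each climatic variable, as summarized in Table 1
weekly_means = daily.groupby("week").mean(numeric_only=True).round(1)
print(weekly_means)
```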
Figure 1. Layout of the experimental field in the 2020 fall season. Notes: When the experiment was conducted, the unit of 'foot' was used to measure the field, such as '15′' meaning '15 feet' (4.57 m). When the seeds of different snap bean cultivars were sown in the field, the number was randomly assigned to each snap bean cultivar, such as 'C11' meaning 'Cultivar 11'.
Insect Samplings
In each season, four sampling methods (leaf-turn method [35], pan traps, yellow sticky cards, and sweep nets) were employed to collect insects from the experimental field. Insect samplings using the above four methods occurred once a week for six weeks, starting at 25 DAE (days after plant emergence) and ending at 60 DAE (Table 1). Samplings were initiated in the mornings (between 8:00 am and 10:30 am) to standardize evaluation. To reduce the impact of adjacent plots on the results, the samples were taken from the middle two rows of each plot.
The leaf-turn method was used to monitor the number of B. tabaci adults. Five upper leaves and five lower leaves on a plant were sampled to evaluate the number of B. tabaci adults, then detached and taken back to the laboratory [13]. The numbers of B. tabaci eggs and nymphs were then checked under a dissecting microscope (Leica EZ4 W; Leica Microsystems Inc., Buffalo Grove, IL, USA) [13].
Pan traps of three colors (blue, yellow, and white) were utilized to sample pollinators (bees, moths, and wasps). Each pan trap (top diameter: 15 cm; bottom diameter: 8.8 cm; and height: 4.0 cm) was glued on the ring hoop of a metal plant support stake (92 cm high). In each experimental plot, three pan traps with three colors (blue, yellow, and white) were randomly placed in the inter-row spaces between the first and second rows, between the second and third rows, and between the third and fourth rows. A soapy water solution at a ratio of 2.5 mL:3.785 L (soap and water) was prepared weekly, and approximately 150 mL of soapy water was added to each pan trap. The pan traps were kept in the field for 24 h. After that, all the soapy water in the three pan traps from one experimental plot was collected into a labeled plastic container (diameter: 6.9 cm; height: 8.4 cm). The plastic containers with soapy water were taken back to the laboratory. All the insects (mainly pollinators) in each plastic container were transferred into a labeled 8-dram glass vial (diameter: 25 mm; height: 95 mm) within 24 h of collection from the field. Then, about 15 mL of 70% ethanol was added into each glass vial to preserve the insect samples. The insect samples in the glass vials were stored in the laboratory for later identification.
A yellow sticky card (12.7 cm × 7.6 cm) fixed on a stake (30 cm high) was placed in the center of each experimental plot for 24 h. Thereafter, all the yellow sticky cards were covered on both sides using the plastic membrane and labeled with information on the block and cultivar. The yellow sticky cards were later taken back to the laboratory and stored in an incubator (Percival PGC-9/2; Percival Scientific, Fontana, WI, USA) at 4 °C for future identification of trapped insects.
Four sweeps were conducted using insect sweep nets (hoop: 38 cm; handle length: 91 cm) to collect insects in each experimental plot. Insects collected from one experimental plot were placed in a labeled interlocking seal plastic bag. All the bags were taken to the laboratory and stored at 4 °C in the incubator for further identification of the insects collected.
Numbers of Insects
The numbers of the same insects captured on the yellow sticky cards and collected by sweep nets were pooled together for data analysis. Thus, the units of the insects included pan trap, YSC (captured on one yellow sticky card), and YSCS (YSC and collected from four sweeps (S)). In each season, data were analyzed by fitting a generalized linear mixed model to the numbers of each insect per unit. The numbers of each insect per unit were modeled by a Poisson or negative binomial distribution. The linear predictor involved related random effects and fixed effects (sampling weeks (Weeks 1-6), treatments (24 snap bean cultivars), and two-way interactions).
Over-dispersion was evaluated using the maximum-likelihood-based fit statistic Pearson Chi-Square/DF [36]. No evidence of over-dispersion was identified. The final statistical model used for inferences was fitted using residual pseudo-likelihood. The statistical model was fitted by the PROC GLIMMIX procedure in SAS software [37]. To avoid inflations of type I errors, comparisons were conducted by Tukey or Bonferroni adjustments.
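The model above was fitted with SAS PROC GLIMMIX. As a rough, simplified illustration of the same kind of count model, the Python/statsmodels sketch below fits a fixed-effects Poisson GLM to simulated counts; the random block effect and the Tukey/Bonferroni comparisons of the actual analysis are omitted.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated counts: 24 cultivars x 6 weeks x 3 blocks (432 plot-week records)
df = pd.DataFrame({
    "count": rng.poisson(2.0, 432),
    "cultivar": np.repeat([f"C{i}" for i in range(1, 25)], 18),
    "week": np.tile(np.repeat([f"W{i}" for i in range(1, 7)], 3), 24),
})

# Poisson GLM with cultivar and week effects; swap in
# sm.families.NegativeBinomial() when counts are over-dispersed.
fit = smf.glm("count ~ C(cultivar) + C(week)", data=df,
              family=sm.families.Poisson()).fit()

# Over-dispersion diagnostic analogous to Pearson chi-square / DF
print(fit.pearson_chi2 / fit.df_resid)  # values near 1 suggest no over-dispersion
```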
Correlations between the Numbers of Each Insect and Climatic Factors
In each season, the Spearman correlation analysis method was used to conduct correlation analyses between the numbers of each insect and the selected climatic factors in SAS software.
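For illustration, the same rank correlation can be computed with scipy; the weekly values below are invented placeholders, and with only six sampling weeks each correlation rests on n = 6 pairs.

```python
import numpy as np
from scipy.stats import spearmanr

# Placeholder weekly means over the six sampling weeks (n = 6)
bees_per_pan_trap = np.array([0.2, 0.3, 0.5, 0.6, 1.1, 1.4])
min_temp_c = np.array([12.0, 13.5, 15.0, 16.2, 18.4, 20.1])

rho, p_value = spearmanr(bees_per_pan_trap, min_temp_c)
print(f"Spearman r = {rho:.2f}, p = {p_value:.4f}")
```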
Fall Season Experiment in 2020
During Weeks 1 and 2, the snap bean plants were in a vegetative state, then they blossomed in Weeks 3 and 4, reaching their peak blooming and producing pods in Weeks 5 and 6. The population dynamics of B. tabaci eggs, nymphs, and adults on different snap bean cultivars were documented in our previous study [13]. In brief, the number of adults per leaf was significantly higher in Week 1 and was not significantly different among the 24 snap bean cultivars [13]. Overall, the cultivar 'Jade' supported the lowest number of eggs, whereas cultivars 'Gold Mine', 'Golden Rod', 'Long Tendergreen', and 'Royal Burgundy' harbored lower numbers of B. tabaci nymphs [13]. The peaks of eggs and nymphs were in Week 2 and Week 4, respectively [13].
For the 2020 fall season, there were no significant interactions between sampling weeks and snap bean cultivars for the numbers of adult cucumber beetle per YSCS, M. cribraria per pan trap, E. varivestis per YSCS, thrips per YSC, and L. lineolaris per YSCS ( Table 2). The numbers of adult cucumber beetle per YSCS, M. cribraria per pan trap, E. varivestis per YSCS, thrips per YSC, and L. lineolaris per YSCS, were not significantly different among snap bean cultivars (Table 2). However, there were significant differences among sampling weeks regarding the numbers of adult cucumber beetle per YSCS, M. cribraria per pan trap, E. varivestis per YSCS, thrips per YSC, and L. lineolaris per YSCS ( Table 2). The number of adult cucumber beetle per YSCS was significantly higher in Week 2 than in Weeks 5 and 6 ( Figure 2A). The number of M. cribraria adults per pan trap was significantly higher in Week 3, followed by Weeks 4 and 5 ( Figure 2B). There was a significantly higher number of adult E. varivestis per YSCS in Week 1 ( Figure 2C). The numbers of adult thrips per YSC in Weeks 3 and 4 were significantly higher than in other weeks ( Figure 2D). The number of adult L. lineolaris per YSCS in Week 4 was significantly higher than in Weeks 1-3 and 6 ( Figure 2E).
A significant interaction between sampling weeks and snap bean cultivars was identified regarding the number of E. fabae adults per YSCS in the 2020 fall season (Table 2). The number of adult E. fabae per YSCS was significantly different among snap bean cultivars and among sampling weeks (Tables 2 and 3). Overall, for each snap bean cultivar, the number of adult E. fabae per YSCS was the highest in Week 3, followed by Weeks 2 and 4-6 (Table 3).
Pollinators
The major pollinators observed in the 2020 fall season experiment were bees, moths, and wasps. The interaction between sampling weeks and snap bean cultivars was not significant for the number of bees per pan trap (F = 0.71; df = 115, 286; p = 0.98). The number of bees per pan trap was not significantly different among snap bean cultivars (F = 1.15; df = 23, 286; p = 0.29). However, there were significant differences among sampling weeks regarding the number of bees per pan trap (F = 6.34; df = 5, 286; p < 0.0001). The number of bees per pan trap was significantly higher in Week 6 than in other weeks (Figure 3).
Natural Enemies
The primary natural enemies found in snap bean plots in the 2020 fall season experiment were (1) predators, including adult Delphastus spp. and predator ladybird beetles (e.g., Coccinella septempunctata (L.) and Harmonia axyridis (Pallas) (Coleoptera: Coccinellidae)); (2) parasitoid wasps, including adult Encarsia spp. and Eretmocerus spp. (Hymenoptera: Aphelinidae); and (3) adult minute pirate bugs, Orius spp. (Hemiptera: Anthocoridae). There were no significant interactions between sampling weeks and snap bean cultivars regarding the numbers of all the discovered natural enemies above (p > 0.05). No significant differences were detected among sampling weeks or among snap bean cultivars in the numbers of all the discovered natural enemies above (p > 0.05). The mean numbers of adult Delphastus, predator ladybird beetles, parasitoid wasps, and Orius per YSCS over the six sampling weeks ranged from 0.024 to 0.39, 0.00 to 0.13, 0.00 to 0.36, and 0.00 to 0.42, respectively. The mean numbers of adult Delphastus, predator ladybird beetle, parasitoid wasps, and Orius per YSCS on the 24 snap bean cultivars ranged from 0.049 to 0.32, 0.00 to 0.20, 0.00 to 0.28, and 0.00 to 0.46, respectively.
Correlations between the Numbers of Insects and Climatic Factors
For the 2020 fall season, negative correlations were detected for the number of adult E. varivestis × minimum relative humidity (r = −0.89, p = 0.02), the number of adult predator ladybird beetles × minimum relative humidity (r = −0.81, p = 0.05), the number of adult predator ladybird beetles × maximum relative humidity (r = −0.81, p = 0.05), and the number of adult predator ladybird beetles × rain (r = −0.93, p = 0.0077). Correlations were non-significant (p > 0.05) between the climatic factors and the numbers of other insects.
Spring Season Experiment in 2021
The snap bean crops had a similar crop phenology as that for the 2020 fall season. In the initial two weeks (Weeks 1 and 2), the snap bean plants underwent a vegetative phase. Afterward, they blossomed in the subsequent two weeks (Weeks 3 and 4), attaining their maximum flowering and yielding pods in Weeks 5 and 6.
Insect Pests
In the 2021 spring season, the principal insect pests included B. tabaci, cucumber beetle, E. varivestis, L. lineolaris, and E. fabae. The populations of E. fabae adults were the most abundant in the field.
The population dynamics of B. tabaci eggs, nymphs, and adults on different snap bean cultivars were reported in an earlier study [13]. Briefly, no significant interactions were found between sampling weeks and snap bean cultivars regarding the numbers of adults, eggs, or nymphs per leaf [13]. There were no significant differences among sampling weeks or among snap bean cultivars in the numbers of B. tabaci adults, eggs, or nymphs per leaf [13].
No significant interactions between sampling weeks and snap bean cultivars were observed for the numbers of adult cucumber beetle, E. varivestis, L. lineolaris, and E. fabae per YSCS in the 2021 spring season (Table 4). The numbers of adult cucumber beetle and E. varivestis per YSCS were not significantly different among snap bean cultivars (Table 4). There were significant differences regarding the numbers of adult cucumber beetle and E. varivestis per YSCS among sampling weeks (Table 4). There were significant differences regarding the numbers of adult L. lineolaris and E. fabae per YSCS among sampling weeks and among snap bean cultivars (Table 4). The number of cucumber beetle adults per YSCS was significantly higher in Week 3 than in other weeks (Figure 4A). The number of E. varivestis adults per YSCS was significantly higher in Week 1, followed by Week 3 (Figure 4B). The number of L. lineolaris adults per YSCS was significantly higher in Weeks 3 and 4 (Figure 4C).
Pollinators
The main pollinators identified in the 2021 spring season experiment were bees (honeybees and bumble bees), moths (the ailanthus webworm moth), and wasps (the yellowjacket wasp and Sphecidae wasp). There were no significant interactions between sampling weeks and snap bean cultivars regarding the numbers of bees per pan trap (F = 0.83;
Natural Enemies
The critical natural enemies discovered in the 2021 spring season experiment included adult predator ladybird beetles and Orius spp. No significant interactions between sampling weeks and snap bean cultivars were identified regarding the numbers of adult predator ladybird beetles and Orius per YSCS (p > 0.05). There were no significant differences among sampling weeks or among snap bean cultivars (p > 0.05) in the numbers of adult predator ladybird beetles and Orius per YSCS. The mean numbers of adult predator ladybird beetles and Orius per YSCS over the six sampling weeks ranged from 0.00 to 0.15 and 0.15 to 0.81, respectively. The mean numbers of adult predator ladybird beetles and Orius per YSCS on the 24 snap bean cultivars ranged from 0.00 to 0.17 and 0.084 to 0.72, respectively.
Correlations between the Numbers of Insects and Climatic Factors
In the 2021 spring season, positive correlations were identified for B. tabaci egg infestation × minimum temperature, B. tabaci nymph infestation × minimum temperature, B. tabaci nymph infestation × maximum temperature, B. tabaci nymph infestation × minimum relative humidity, and B. tabaci nymph infestation × average relative humidity [13]. Positive correlations were also detected for the number of bees × minimum temperature (r = 1.00, p < 0.0001), the number of bees × average temperature (r = 0.89, p = 0.02), and the number of bees × maximum temperature (r = 0.94, p = 0.0048). Negative correlations were detected for the number of adult predator ladybird beetles × minimum temperature (r = −0.85, p = 0.034), the number of adult predator ladybird beetles × average temperature (r = −0.85, p = 0.034), and the number of adult predator ladybird beetles × maximum temperature (r = −0.85, p = 0.034). Correlations were non-significant (p > 0.05) between the climatic factors and the numbers of other insects.
Discussion
This is the first study to determine the population dynamics of multiple pests, pollinators, and natural enemies on 24 local and commercially available bush snap bean cultivars in the southern United States. Previous studies concentrated on determining the susceptibility of different snap bean cultivars to only one major pest, such as B. tabaci [11-13,30], E. fabae [15,38], or L. lineolaris [39,40]. Therefore, this study might contribute to the knowledge of the population dynamics of insect pests and beneficials as impacted by the cultivation of snap bean cultivars. The information derived from this study will help elucidate differences in the susceptibility of snap bean cultivars to different pests, and interactions among pests, pollinators, and natural enemies.
The implementation of host plant resistance has been regarded as a critical approach in pest management programs on snap bean [13,15,17]. The main pests on snap bean we identified during the two seasons included B. tabaci, cucumber beetle, M. cribraria, E. varivestis, thrips, L. lineolaris, and E. fabae. The susceptibility of the 24 local and commercially available snap bean cultivars to B. tabaci has been discussed in our previous study [13].
Floral resource availability during growing seasons mediates the population dynamics of pollinators [41]. In our study, the bee population in the 2020 fall season increased over the sampling weeks and reached its peak in Week 6. The populations of bees and wasps in the 2021 spring season had a similar trend and reached their peaks in Week 5 for bees and Week 6 for wasps. The population dynamics of the pollinators may be attributed to the corresponding plant phenology of the snap bean, because the snap bean plants were in vegetative growth in Weeks 1 and 2, bloomed in Weeks 3 and 4, and reached the peak of blooming and podded in Weeks 5 and 6.
Pollinators play a crucial role in the reproduction of most angiosperms [18]. Although common beans are partially autogamous, several studies have demonstrated that cross-pollination provided by insect pollinators could increase seed production by reducing abortion rates [42][43][44]. Moreover, different plant cultivars may possess various levels of attractiveness to pollinators [19][20][21][22]. For instance, two common bean varieties of 'Carioca' and 'Trout' with higher nectar and larger flowers could have more floral attractiveness to pollinators [21]. It was noted that the flower attractiveness to pollinators might vary among different soybean varieties cultivated in Brazil [22]. However, we did not find any significant differences regarding the numbers of pollinators among the 24 snap bean cultivars.
Other beneficial arthropods include natural enemies such as spiders, which are predators and play an important role in insect pest management [45,46]. Surprisingly, we only observed a few spiders using the current sampling tools (pan trap, yellow sticky cards, and sweep nets). In the future, other sampling tools may be employed to better trap spiders.
Climatic factors (e.g., temperature, relative humidity, and rainfall) are known to have a great influence on the population dynamics of pests [13,[47][48][49], pollinators [50,51], and natural enemies [52][53][54]. The correlation of B. tabaci infestations with climatic factors has been discussed previously [13]. As for the correlations of other identified insect pests with climatic factors, we only found that the population of adult E. varivestis was negatively correlated with relative humidity. Other studies have directly demonstrated that the population of E. varivestis was greatly affected by rainfall, relative humidity, and temperature [55][56][57][58][59]. However, we are unaware of any studies assessing correlations between field populations of E. varivestis and climatic factors. As the dominant pest in the 2021 spring season, the population of E. fabae adults was not correlated with any climatic factors, which agrees with a previous study [47]. It was observed that the population of cotton leafhopper, Amrasca biguttula biguttula (Ishida) (Hemiptera: Cicadellidae), has non-significant correlations with all weather variables [47]. However, most previous studies reported significant correlations between leafhopper populations and climatic factors [60][61][62]. Regarding the correlations of pollinators with climatic factors, we detected that the populations of bees were positively correlated with temperature, which may be because more bees from other fields could locate the resources in our experimental field by increasing their searching ability when temperature increased. This discovery confirms the finding of a previous study indicating that the number of bees that collected nectar had a positive association with air temperature [63]. However, several previous studies detected negative correlations between the number of bees and temperature [64][65][66]. As for the correlations between the populations of natural enemies and climatic factors, the populations of predatory ladybird beetles were negatively correlated with all the selected weather variables in our experiment, which partially agrees with the findings from earlier studies [67,68]. It was reported that the populations of predatory ladybird beetles showed a positive correlation with temperature and negative correlations with relative humidity and rainfall [67].
Conclusions
In summary, cultivar 'Jade' harbored the lowest number of B. tabaci eggs, whereas cultivars 'Gold Mine', 'Golden Rod', 'Long Tendergreen', and 'Royal Burgundy' supported a significantly lower number of B. tabaci nymphs. Moreover, cultivars 'Greencrop' and 'PV-857' had the lowest numbers of E. fabae adults, while cultivars 'BA0958', 'Barron', 'Bronco', 'Caprice', 'Colter', 'Gold Mine', 'Golden Rod', 'Greenback', 'Greencrop', 'Long Tendergreen', 'Maxibel', 'Prevail', 'PV-857', and 'Roma II' harbored significantly lower numbers of L. lineolaris adults. Thus, the lowest numbers of both E. fabae and L. lineolaris adults were found on cultivars 'Greencrop' and 'PV-857'. Cultivars 'Gold Mine', 'Golden Rod', and 'Long Tendergreen' had the lowest numbers of both B. tabaci nymphs and L. lineolaris adults. These snap bean cultivars could be a good option for local vegetable growers in the southern United States. The utilization of these snap bean cultivars might (1) reduce B. tabaci, E. fabae, and L. lineolaris populations; (2) decrease plant damage; (3) reduce insecticide use and enhance insecticide use efficiency, consequently protecting human health and the environment; (4) lower control costs; (5) minimize yield losses; and (6) increase the profitability and competitiveness of local vegetable producers in domestic and international marketplaces. In addition, the peaks for adult cucumber beetle, M. cribraria, and E. fabae were observed in Week 3, for E. varivestis in Week 1, for thrips in Weeks 3 and 4, and the peak for L. lineolaris occurred in Week 4. To obtain optimal pest management, vegetable growers could take management measures to target different pests at the corresponding peak times. The bee population reached its peak in Weeks 5 and 6, while the wasps did so in Weeks 2 and 6. Thus, vegetable growers should consider protecting pollinators when they apply pest control measures during Weeks 2, 5, and 6. Temperature, relative humidity, and rain were negatively correlated with predator ladybird beetle populations, which demonstrated that predator ladybird beetle populations will decrease, and other pest control methods need to be guaranteed when the temperature, relative humidity, and rain increase. Temperature was positively correlated with bee populations, which implied that when the temperature increases, the bee populations will increase, and vegetable growers may not need to release supplemental pollinators for pollination services.
Skewed X-Chromosome Inactivation and Compensatory Upregulation of Escape Genes Precludes Major Clinical Symptoms in a Female With a Large Xq Deletion
In mammalian females, X-chromosome inactivation (XCI) acts as a dosage compensation mechanism that equalizes X-linked genes expression between homo- and heterogametic sexes. However, approximately 12–23% of X-linked genes escape from XCI, being bi-allelic expressed. Herein, we report on genetic and functional data from an asymptomatic female of a Fragile X syndrome family, who harbors a large deletion on the X-chromosome. Array-CGH uncovered that the de novo, terminal, paternally originated 32 Mb deletion on Xq25-q28 spans 598 RefSeq genes, including escape and variable escape genes. Androgen receptor (AR) and retinitis pigmentosa 2 (RP2) methylation assays showed extreme skewed XCI ratios from both peripheral blood and buccal mucosa, silencing the abnormal X-chromosome. Surprisingly, transcriptome-wide analysis revealed that escape and variable escape genes spanning the deletion are mostly upregulated on the active X-chromosome, precluding major clinical/cognitive phenotypes in the female. A high metaphase count, hemizygosity concordance for microsatellite markers, and monoallelic expression of genes within the deletion suggest the absence of mosaicism in both blood and buccal mucosa. Taken together, our data suggest that an additional protective gene-by-gene mechanism occurs at the transcriptional level in the active X-chromosome to counterbalance detrimental phenotype effects of large Xq deletions.
INTRODUCTION
For dosage compensation of X-linked genes expression between hetero- (XY males) and homogametic (XX females) sexes, mammalian females have evolved a complex epigenetic mechanism to transcriptionally silence all but one X-chromosome per diploid set, called X-chromosome inactivation (XCI). In this process, which occurs in early embryogenesis, parental X-chromosomes have the same probability for random inactivation, giving rise to an overall 1:1 ratio of cells that express either the paternal or the maternal X-chromosome. Once XCI has occurred, the inactive X-chromosome (Xi) is stably transmitted through subsequent mitosis. Nonetheless, non-random or skewing of XCI can arise by chance or due either to primary non-random choice or to secondary stochastic or genetic processes (Fieremans et al., 2016). In primary skewing, variants in genes participating in the XCI process itself (i.e., XIST) preclude the cell from silencing the X-chromosome carrying the mutation before the XCI starts. Alternatively, secondary skewing generally takes place in post-inactivation cell selection, acting for or against cells carrying the active X-chromosome (Xa) or the Xi (Morey and Avner, 2011). So, secondary XCI skewing often occurs in females with a structurally abnormal X-chromosome, such as large deletions, duplications, and unbalanced X/autosome translocations, in a manner that preserves the normal X-chromosome and autosomal dosage (Schmidt et al., 1991). Conversely, in balanced X/autosome rearrangements, the normal X-chromosome is usually inactive, in order to keep functional euploidy (McMahon and Monk, 1983).
Cumulative evidence also estimates that 12-23% of X-linked genes in humans escape from XCI, being expressed from both the Xa and Xi (Carrel and Willard, 2005;Balaton et al., 2015;Tukiainen et al., 2017). XCI escape genes are distributed in clusters, mainly located on the short arm of the X-chromosome, possibly as a reflection of their distance from the XCI center (Xic) (Disteche, 1999;Tsuchiya et al., 2004;Carrel and Willard, 2005). Besides, one intriguing cluster of Xi-expressed genes maps to a gene-rich region at Xq28 (Carrel and Willard, 2005).
It is noteworthy, however, that genes located on the human X-chromosome seem to be expressed in few tissues or are specific for a subset of tissues, e.g., brain (Hurst et al., 2015). Furthermore, there is an excess of XCI escape genes involved in neurocognitive function (Zhang et al., 2013), which could explain some of the somatic abnormalities seen in females and males with sex chromosome aneuploidies like Turner or Klinefelter syndromes, even in the presence of only one Xa. Moreover, intellectual disability (ID) is a common phenotypic component among females harboring mutations on escape genes and XCI skewing (Fieremans et al., 2015;Snijders Blok et al., 2015;Fieremans et al., 2016;Reijnders et al., 2016).
Herein, we report a female with a de novo heterozygous deletion at Xq25-q28 associated with an extreme XCI skewing pattern against the deleted X-chromosome. The patient was evaluated due to the presence of Fragile X syndrome (FXS; MIM# 300624) in her nephew. Surprisingly, transcriptome analysis revealed an upregulation compensatory mechanism of X-linked genes within the deleted region that escape or variably escape XCI, including ID genes, preventing her from having ID and/or other major clinical features apart from premature ovarian failure (POF). Altogether, our data suggest that, at least for some XCI escape genes, structural hemizygosis caused by large X-chromosome deletions may be transcriptionally counterbalanced, avoiding functional haploinsufficiency.
Study Participants
The research protocols adhered to the ethical principles for medical research involving human subjects and received approval from the Institutional Ethics Committee. The index family was referred to the Human Genetics Laboratory at the State University of Rio de Janeiro (Rio de Janeiro, Brazil) in 2016, because of an idiopathic history of ID and autism in the propositus, compatible with FXS. The three-generation family comprised five members available for testing (individuals I.1, I.2, II.2, II.3, III.1), including the asymptomatic aunt of the proband (individual II.3), who was tested as part of a routine genetic counseling procedure for FXS ( Figure 1A).
Karyotype and Array-Comparative Genomic Hybridization (array-CGH)
In parallel to FXS interrogation, cytogenetic evaluation was performed on cultured peripheral blood lymphocytes from the proband, by standard methods to exclude chromosome aberrations linked to ID. As the proband's aunt (II.3) expressed the intention of becoming pregnant, standard karyotype analysis was also performed in her peripheral blood cells.
With the purpose of delineating an Xq deletion detected in individual II.3 karyotype (Figures 1B, C), array-CGH was conducted in gDNA extracted from her peripheral blood using a 180 K whole-genome platform (Agilent Technologies, Santa Clara, CA, USA). Samples were labeled with Cy3- and Cy5-deoxycytidine triphosphates by random priming. Purification, hybridization, washing, image scanning, and data analysis were carried out as previously reported (Santos-Rebouças et al., 2015).
Microsatellite Genotyping and X-Chromosome Inactivation Assay
To assess the parental origin of the Xq deletion, six polymorphic microsatellite repeat markers along the X-chromosome were interrogated in all family members available by quantitative fluorescence PCR using fluorochrome-labeled primers and separating the amplimers by high-resolution capillary electrophoresis, as previously described (Ogilvie et al., 2005). Both blood and buccal mucosa DNA samples were genotyped. Three heterozygous microsatellites within the Xq deletion were informative to confirm the parental origin of the deletion.
Besides, the extent of XCI was estimated by determining the Xa/Xi ratios in DNA from blood, and buccal mucosa of individual II.3 using the methylation-sensitive restriction enzyme indirect AR/RP2 biplex assay previously reported (Machado et al., 2014). Allele profiles and areas under the curve for each allele were determined on an ABI3130 Genetic Analyzer (Thermo Fisher Scientific Inc., MA, USA) and data were analyzed by GeneScan Analysis 3.7 and Genotyper 3.7 software (Thermo Fisher Scientific Inc.). Fluorescent peak areas representing true alleles were normalized for the existence of stutter products, and the XCI ratios were estimated as previously described (Busque et al., 2009;Machado et al., 2014).
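As an illustration of how such Xa/Xi ratios are commonly derived from a methylation-sensitive assay, the toy calculation below normalizes each allele's digested peak area by its undigested area; all peak areas are invented and the function is a simplified sketch, not the exact formula of the cited protocols.

```python
# After methylation-sensitive digestion, only alleles on the methylated,
# inactive X still amplify, so each allele's digested/undigested area ratio
# estimates how often that allele sits on the Xi.

def xci_skewing(a1_undig, a2_undig, a1_dig, a2_dig):
    """Fraction of cells in which allele 1 lies on the inactive X."""
    r1 = a1_dig / a1_undig  # corrects for allele-specific amplification efficiency
    r2 = a2_dig / a2_undig
    return r1 / (r1 + r2)

# Illustrative stutter-corrected peak areas for the two AR alleles
skew = xci_skewing(a1_undig=1000, a2_undig=950, a1_dig=900, a2_dig=60)
print(f"Allele 1 inactive in {skew:.0%} of cells")  # >90% = extreme skewing
```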
RNA-Seq
Blood RNA samples from individual II.3 and an age- and sex-matched control were subsequently analyzed by RNA-Seq in Illumina platform (Genone Biotechnologies, Rio de Janeiro, Brazil). Total RNA was purified using poly-T oligo-attached magnetic beads with rRNA removal. The resulting directional RNA-Seq NEB libraries were sequenced in paired-end format. Image analysis and per-cycle base calling were performed with Illumina Real-Time Analysis software (RTA1.9) (Illumina). Conversion to FastQ read format was obtained by CASAVA-1.8 (Illumina) and sequenced reads were quality-checked with FastQC (Andrews, 2010). Sequence adaptors were removed with cutadapt v1.2.1 (Martin, 2011), and reads were aligned to the human reference genome (GRCh37/hg19) with STAR (Dobin et al., 2013). BAM files were visualized by using the Integrative Genomics Viewer (IGV) (Robinson et al., 2011).
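A rough sketch of how the trimming and alignment steps might be chained from Python is shown below; the adapter sequence, file names, index path, and thread count are placeholders, not values from this study.

```python
import subprocess

# Adapter trimming with cutadapt (paired-end); adapter and paths are placeholders
subprocess.run([
    "cutadapt", "-a", "AGATCGGAAGAGC", "-A", "AGATCGGAAGAGC",
    "-o", "trimmed_R1.fq.gz", "-p", "trimmed_R2.fq.gz",
    "raw_R1.fq.gz", "raw_R2.fq.gz",
], check=True)

# Spliced alignment to GRCh37/hg19 with STAR, emitting a sorted BAM
subprocess.run([
    "STAR", "--runThreadN", "8",
    "--genomeDir", "star_index_hg19/",
    "--readFilesIn", "trimmed_R1.fq.gz", "trimmed_R2.fq.gz",
    "--readFilesCommand", "zcat",
    "--outSAMtype", "BAM", "SortedByCoordinate",
    "--outFileNamePrefix", "II3_",
], check=True)
```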
For single nucleotide polymorphisms (SNPs) and insertions/deletions (indels) analyses, Samtools and Picard were used to sort the reads according to the genome coordinates, followed by screening out repeated reads. Finally, GATK3 (McKenna et al., 2010) was used to carry out SNP and indel calling. ANNOVAR was applied for annotation and variants were reported according to the Human Genome Variation Society (HGVS) guidelines for cDNA sequence variants (GRCh37/hg19).
Figure 1. (A) Pedigree of the family: a circle or square with a black dot represents an unaffected carrier female or male, respectively. "N" indicates no FMR1 expansion. A heterozygous Xq25-q28 deletion is present in the individual II.3; (B) Partial G banded karyotype from the individual II.3 and ideogram of the Xq deleted region. (C) Pictures of the individual II.3 that harbors the Xq25-q28 deletion; (D) X-chromosome array-CGH analysis plot from the individual II.3. Cy3-labeled DNA of the individual II.3 was co-hybridized with Cy5-labeled DNA from a control onto the array. The double arrow points to the deletion of subsequent probes. Note that the deletion is seen as an increased Cy5/Cy3 ratio.
For differential expression analysis, HTSeq v0.6.1 (Anders et al., 2015) was used to count the read numbers mapped to each gene, and Fragments Per Kilobase of transcript per Million mapped fragments (FPKM) values were used to estimate gene expression levels, taking into consideration the effects of both sequencing depth and gene length on the counting of fragments (Mortazavi et al., 2008). Subsequently, read counts were adjusted by TMM, and then differential expression analysis was performed using the EdgeR R package (Robinson et al., 2010). In the absence of biological replicates, an adjusted p-value (q value) < 0.005 and an absolute log2(fold change) > 1 were set as the thresholds for significant differential expression. The distribution of the differentially expressed genes was depicted using Volcano plots.
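The FPKM normalization amounts to dividing each gene's fragment count by its transcript length in kilobases and by the library size in millions of fragments. A minimal sketch with made-up counts and lengths:

```python
# Made-up fragment counts and transcript lengths, for illustration only
counts = {"SLC6A8": 850, "FUNDC2": 420, "GPR112": 3}
lengths_bp = {"SLC6A8": 2400, "FUNDC2": 1600, "GPR112": 9800}

# In a real library the denominator is the total mapped fragments,
# not just the sum over these three genes.
total_fragments = 25_000_000

fpkm = {
    gene: n / (lengths_bp[gene] / 1e3) / (total_fragments / 1e6)
    for gene, n in counts.items()
}
print(fpkm)
```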
TFCat (Fulton et al., 2009) was used for searching transcription factors (TF) for the differential expressed genes. Besides, identification of oncogenes and their annotation was done by searching the Catalogue Of Somatic Mutations In Cancer (COSMIC) database (Tate et al., 2019) using the differentially expressed genes.
To evaluate the expression of genes within the Xq deletion, we performed differential expression (DE) analysis using only genes that were expressed at a mean level above 10 counts per million (CPM) in at least one sample (individual II.3 or control). To compare expression levels of blood-expressed genes within the deletion more accurately and to minimize transcriptional variance between individuals, we searched for additional RNA-Seq experiments from healthy controls on Sequence Read Archive (SRA; https://www.ncbi.nlm.nih.gov/sra), using "control", "blood", "HiSeq 2500", "RNA-Seq", "paired", as search terms for type of samples, tissue, instrument, assay-type, and library layout, respectively. Four female (SRR3745154, SRR3745158, SRR3745160, SRR3745166) and one male (SRR3745151) samples from adult individuals residing in the same geographic area as individual II.3 and the control (Rio de Janeiro, Brazil) were selected [Bioproject: PRJNA327986; (de Araujo et al., 2016); personal communication]. Besides, we included two RNA-Seq samples from additional healthy males (SRR3389246, SRR3390437) described on the Bioproject PRJNA316578 (no associated publication). Two DE comparisons were performed: individual II.3 versus males (SRR3745151, SRR3389246, SRR3390437) (group 1) and individual II.3 versus females (our control, SRR3745154, SRR3745158, SRR3745160, SRR3745166) (group 2). Only genes within the deletion with a CPM over ten (individual II.3 or our control) were evaluated in these latter DE comparisons, using the same pipeline described above.
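A compact sketch of the CPM filter described above, applied to a simulated two-sample count matrix:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated raw counts: rows are genes within the deletion, columns are
# the two samples (individual II.3 and the matched control)
counts = rng.integers(0, 500, size=(6, 2))
lib_sizes = counts.sum(axis=0)            # total counts per sample
cpm = counts / lib_sizes * 1e6

# Retain genes with CPM > 10 in at least one of the two samples
keep = (cpm > 10).any(axis=1)
print(f"{keep.sum()} of {len(keep)} genes pass the CPM filter")
```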
RESULTS
A paternal large deletion was identified in the terminal part of the long arm of the X-chromosome (Xq25-q28) in the aunt of the proband with Fragile X syndrome. POF is the unique apparent phenotype in this female. Using methylation assays in blood and buccal mucosa, we showed that extreme XCI skewing resulted in the silencing of the structurally abnormal X-chromosome. Besides, focusing on the genes located within the deletion, transcriptome analysis of blood samples from this female in comparison to matched controls revealed that genes annotated as escape or variable escape genes are upregulated, preventing major clinical phenotypes in this individual. The application of different assays described below excluded the possibility of mosaicism.
FMR1 Analysis
mPCR in the family of the female harboring the Xq deletion confirmed a fully methylated expansion in the FMR1 gene in her nephew (proband III.1; >200 CGG repeats), compatible with the FXS phenotype. As expected, segregation analysis in the mother of the proband (individual II.2) showed a FMR1 methylated premutation (normal allele: 29 CGGs; expanded allele: 71 CGGs), which was inherited from her father, individual I.1, who has a smaller unmethylated premutated allele (62 CGGs). Subsequent FMR1 gene CGG repeat number evaluation in the proband's aunt (individual II.3), the female explored in detail in this study, revealed the presence of only one unmethylated allele with 29 repeats, suggesting homozygosity for the FMR1 CGG triplets (Figure 1A; Supplementary Figure S1).
Cytogenetic and Array-CGH Findings
Cytogenetic evaluation in the aunt (individual II.3) for genetic counseling purposes by standard karyotype revealed a large terminal deletion on the long arm of the X-chromosome (Xq25-q28) (Figures 1B, C). One hundred metaphases were analyzed, which excludes the hypothesis of mosaicism. Oligo array-CGH revealed a hemizygous deletion of at least 32,450,808 bp (chrX:122,757,437-155,208,244; hg19), comprising 598 NCBI RefSeq curated genes, pseudogenes and microRNAs (Figure 1D). According to the array-CGH, the proximal breakpoint of the terminal deletion is within the THOC2 gene, and no other potential pathogenic CNV was found. The rearrangement reported in this study has been submitted to ClinVar (https://www.ncbi.nlm.nih.gov/clinvar/) with accession number SCV000897650.
Individual II.3 Phenotype
Individual II.3 was first evaluated at 34 years old. She is the second daughter of a nonconsanguineous couple and her hallmark developmental milestones did not point to delayed cognitive functioning or unexpected adaptive skills abnormalities. She holds a Bachelor's degree in Biological Science, and she currently works in the administrative sector of a private enterprise, with no apparent mild cognitive impairment or other major clinical condition (Figure 1C). Cranial Magnetic Resonance imaging performed in 2016 presented normal results. After the detection of the Xq25-q28 deletion, she sought in vitro fertilization (IVF) for reproductive assistance. During the process, she began having irregular menses, and routine biochemical tests revealed abnormal anti-Mullerian hormone (<0.001 ng/ml), follicle-stimulating hormone (73.4 mUI/ml), and luteinizing hormone (33.6 mUI/ml) levels, compatible with early menopause. Videohysteroscopy showed endocervical polyps, a normal uterine cavity, and an atrophic endometrium. Her family history is negative for either infertility or premature ovarian insufficiency/failure (POI/POF). Currently, she is considering in vitro fertilization with egg donation.
Parental Origin of the Xq Deletion and XCI Patterns
Parental origin of the abnormal X-chromosome in the family, assessed with linkage analysis with highly polymorphic microsatellite loci along this chromosome, showed that the Xq25-q28 deletion in individual II.3 occurred in the germline of her father (I.1). Both blood and buccal mucosa DNA samples showed complete hemizygosity for the DNA markers within the deletion (Supplementary Table S1).
Methylation-sensitive restriction enzyme typing with the AR/RP2 biplex assay proxy of XCI revealed extreme skewing (>90%) for both AR and RP2 gene markers in blood and buccal mucosa (Supplementary Figure S2 and Table S2). The preferential XCI turned off the abnormal X-chromosome (236 bp allele for AR and 374 bp allele for RP2), inherited from her father (individual I.1).
RNA-Seq
Blood RNA-Seq data quality summary is found on Supplementary Table S3. Reads across four highly polymorphic and high-quality SNPs within the deletion demonstrated monoallelic expression, suggesting no detectable mosaicism and confirming the near-complete XCI skewing observed in the blood sample of individual II.3 (Supplementary Table S4). No blood-expressed indels were found from the proximal array-CGH breakpoint until the end of the X-chromosome (Supplementary Table S5).
Transcriptome-wide analysis in individual II.3 uncovered 1,026 differentially blood-expressed genes, as compared with the matched control sample (Supplementary Figure S3). From the 598 RefSeq genes mapping within the X-chromosome deletion, 241 genes were expressed in blood according to our RNA-Seq analysis. Of these, 117 transcripts had more than 10 counts in at least one of the samples (individual II.3 or control) (Supplementary Table S6). Only three genes within the deletion (GPR112, SLC6A8, and FUNDC2) showed statistically significant adjusted p-values and |log2(FoldChange)| values, of which GPR112 (log2 fold change = -7.72; q value = 0.0002) was underexpressed, while SLC6A8 (log2 fold change = 1.88; q value = 0.001) and FUNDC2 (log2 fold change = 1.81; q value = 0.002) were overexpressed in individual II.3, in comparison to the control. No differential expression was found for the THOC2 gene, located on the proximal breakpoint. Additional analysis of X-linked genes outside the deletion revealed 15 other genes differentially expressed (Supplementary Table S7).
Human Disease Ontology analysis showed that the differentially expressed genes are enriched in auditory system disease, proteinuria, primary ciliary dyskinesia, and idiopathic generalized epilepsy. None of these conditions are present in individual II.3 (Supplementary Figure S4). TFCat and COSMIC databases did not disclose any oncogene or transcription factor associated with the differentially expressed genes mapping within the Xq25-q28 deletion.
Global GO enrichment analysis revealed significant values for the three classes. The terms with the best scores (adjusted p-value < 0.01 and at least ten genes) for each category scored by p-value were represented in Supplementary Figure S5. Enriched Biological Processes were mainly related to homophilic cell adhesion via plasma membrane adhesion molecules, cell-cell adhesion via plasma-membrane adhesion molecules, membrane depolarization during action potential, synapse organization, extracellular matrix organization, extracellular structure organization, action potential, multicellular organismal signaling, regulation of membrane potential, and sensory perception of sound. Molecular Functions enriched terms encompassed motor activity, extracellular matrix structural constituent, actin binding, transmembrane receptor protein tyrosine kinase activity, calmodulin binding, dynein light chain binding, transmembrane receptor protein kinase activity, ATP-dependent microtubule motor activity, minus-end-directed, dynein intermediate chain binding, and actin filament binding, whereas the main GO terms for cell component category retrieved were proteinaceous extracellular matrix, extracellular matrix component, apical part of cell, basement membrane, collagen trimer, sarcomere, myofibril, contractile fiber part, myosin complex, and contractile fiber.
Differential alternative splicing in the RNA-Seq data from the X-chromosome identified one significant alternative 5' splice site (A5SS) involving the HSD17B10 gene and two skipped-exon events in the XIST and IDS genes (Supplementary Table S8). No other events, such as alternative 3' splice site (A3SS), mutually exclusive exons (MXE), and retained intron (RI) events, were identified on the X-chromosome.
To minimize a possible bias associated with the use of only one matched control in the DE analysis, we included additional healthy control RNA-Seq samples obtained from the SRA database. Expression comparison for genes within the deletion on group 1 (individual II.3 versus males) showed significant overexpression for the MCF2, SLC6A8, FUNDC2, and VBP1 genes in individual II.3, whereas on group 2 (individual II.3 versus females), only the FUNDC2 gene showed significant values, being also overexpressed in individual II.3. No gene exhibited a significantly decreased value in both comparison groups (Supplementary Table S9). Besides, 167 of the 1,026 DE genes identified in the first transcriptome-wide analysis (individual II.3 versus our matched control sample) were replicated (padj < 0.05) when we included more female controls (group 2) (Supplementary Table S10). The RNA-Seq data (raw and processed files) for individual II.3 and the matched control were deposited on the GEO database (accession GSE141766).
DISCUSSION
Different strategies have evolved for equalizing X-chromosome expression between sexes in different organisms (Gelbart and Kuroda, 2009). In humans, XCI is characteristically incomplete, with a subset of 12-23% genes known to be also expressed from the Xi, called XCI escape genes (Carrel and Willard, 2005;Talebizadeh et al., 2006;Yasukochi et al., 2010;Cotton et al., 2013;Lister et al., 2013;Szelinger et al., 2014;Balaton et al., 2015;Cotton et al., 2015;Wainer-Katsir and Linial, 2016;Tukiainen et al., 2017). Human genes that escape from XCI tend not to be expressed to the same levels that are observed from the Xa. Usually, an XCI escape gene shows ≥10% expression from the Xi allele compared with the Xa allele (Carrel and Willard, 2005). Some of the XCI escape genes are members of X-Y gene pairs with a paralogue on the Y chromosome, where they can have the same function as the X paralogue. Other XCI escape genes have lost their Y paralogue, or their Y paralogue has evolved a distinct, often testis-specific, role (Jegalian and Page, 1998;Deng et al., 2014), and highly conserved dosage-sensitive X/Y paralogs that escape from XCI in females are candidates for being responsible for embryo survival (Bellott et al., 2014).
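Following the ≥10% rule stated above, XCI status can be scored from allele-resolved expression with a simple threshold check; the function and values below are a hypothetical sketch, not the classification procedure of the cited studies.

```python
def xci_status(xi_expression: float, xa_expression: float) -> str:
    """Classify a gene as an XCI 'escape' gene if its Xi allele reaches
    at least 10% of the expression observed from the Xa allele."""
    return "escape" if xi_expression >= 0.10 * xa_expression else "subject"

print(xci_status(xi_expression=18.0, xa_expression=100.0))  # escape
print(xci_status(xi_expression=2.0, xa_expression=100.0))   # subject
```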
Moreover, the number of XCI escape genes is larger in the evolutionarily more recent strata of the X-chromosome (Ross et al., 2005;Balaton and Brown, 2016). Beyond the pseudoautosomal regions (PARs), one of the gene clusters expressed from the Xi maps to the gene-rich region Xq28, where the expression level may reach 50% (Carrel and Willard, 2005). So, irrespective of whether mutations in XCI escape genes are located on the Xa or Xi, they could be detrimental (Fieremans et al., 2015).
In our study, we report on a female with a large Xq25-q28 deletion and extreme XCI skewing towards the altered paternal X-chromosome in blood and buccal mucosa. Regardless of the skewed XCI, the deletion forces the structural hemizygosis of XCI escape and variable escape genes. Within the Xq deletion, there are at least 16 fully XCI escape genes, 27 variable escape genes, and a considerable number of additional genes with unknown XCI statuses [combined status described on (Tukiainen et al., 2017); Supplementary Table S6]. However, individual II.3 presented significant differential gene expression only for three blood-expressed genes spanning the deletion (GPR112, SLC6A8, FUNDC2) on transcriptome-wide analysis in comparison to the matched control. While FUNDC2 is known to be subject to XCI, GPR112 escapes XCI and SLC6A8 has its XCI status yet unknown (Tukiainen et al., 2017). Surprisingly, SLC6A8, required for the uptake of creatine in muscles and brain (Fezai et al., 2014), and FUNDC2, which supports platelet survival via the AKT signaling pathway (Ma et al., 2019), are overexpressed in individual II.3, in comparison to the matched control. The significant overexpression of the SLC6A8 and FUNDC2 genes was corroborated by an additional DE analysis with male and female control samples obtained from the SRA database. Although GPR112 did not demonstrate significantly decreased expression in individual II.3 in such analysis, this could probably be due to methodological differences among the studies, concerning mainly RNA isolation and library preparation procedures. The observed equalized expression of most XCI escape and variable escape genes on the Xa suggests that in this female transcriptional upregulation of genes lost in the structurally abnormal X-chromosome occurs, avoiding their functional haploinsufficiency.
The X-chromosome is enriched in genes related to cognitive function (Zechner et al., 2001), and there is an excess of XCI escape genes associated with ID (Zhang et al., 2013), which is consistent with the presence of learning impairment in phenotypes associated with X-chromosome aneuploidies (Rooman et al., 2002). Moreover, the Xq25-q28 region is well known to be a hotspot for ID. Several deletions of the Xq25-q28 region in females with ID, partly overlapping that seen in individual II.3, have been reported on the Decipher database (Firth et al., 2009). The consequences of such deletions can result in deregulation of the affected genes and may also reflect trans-acting effects on other chromosomal loci or even more global genomic alterations. Usually, the larger the deletion is, the more phenotypically detrimental it is, pointing to a cumulative effect. Notwithstanding, the hallmark in this female patient is the great extent of the deletion, including hotspot regions for ID and premature ovarian failure.
The proximal breakpoint of the Xq deletion according to array-CGH resides on THOC2, a gene subject to XCI that was previously associated with neurodevelopmental disorders in males (Kumar et al., 2015) and also in a female with a de novo missense variant (p.Tyr517Cys) and no available XCI status data (Kumar et al., 2018). The absence of significant differential expression for this gene suggests that individual II.3 was protected from the deleterious effects of THOC2 disruption due to extreme XCI skewing. As recent transcriptome analysis suggests that XCI is generally uniform across human tissues (Tukiainen et al., 2017), we could speculate that the same X-chromosome was preferentially inactivated in tissues other than blood and buccal mucosa.
Among the escape and variable escape genes within the Xq deletion, there are genes whose mutations were previously associated with ID with clinical manifestation also in females, including NAA10 (Gupta et al., 2019). Moreover, there are additional genes with fully/variable escape patterns or a female-biased profile related to essential biological functions or clinical conditions, such as IKBKG, associated with Incontinentia Pigmenti (Supplementary Table S6). Although there is still some divergence about the escape statuses of X-chromosome genes in the literature, our transcriptome results suggest that individual II.3 compensated the expression, at the transcriptional level, of some blood-expressed XCI escape and variable escape genes within the deletion.
The unique apparent phenotype in individual II.3 is the presentation of POF at 34 years old. Two POF susceptibility regions have been identified: POF1 extends from Xq21-qter, including the FMR1 gene, whereas POF2 spreads from Xq13.3 to Xq21.1 (Lacombe et al., 2006). Indeed, terminal deletions at Xq were reported as part of a workup for infertility or POI and also in women screened for the FMR1 premutation (Yachelevich et al., 2011). Individual II.3 has no family history of POI/POF, despite the segregation of FMR1 premutations in her family. Ovarian function in this female may be impaired by monosomy for genes required in double amount after X-chromosome reactivation for germ-cell development (Rossetti et al., 2004). Besides the deletion involving the POF1 region, individual II.3 presented a significant differential expression for the POF1B gene (log2 fold change value = -7.28; q value = 0.001), which is located at the POF2 region (Xq21.1) and is proposed to escape from XCI. POF1B may act as an anti-apoptosis factor, slowing down the process of germ cell loss, so that POF1B loss-of-function mutations could lead to exaggerated germ cell apoptosis and POF (Lacombe et al., 2006). Recent advances have also demonstrated the importance of XCI escape genes in sexually dimorphic risk, particularly cancer (Balaton and Brown, 2016;Arnold and Disteche, 2018). Nonetheless, the COSMIC database did not disclose any oncogene among the differentially expressed genes within the Xq deletion, yet a future clinical outcome cannot be excluded. Besides POF1B, four autosomal differentially expressed genes related to the term "premature ovarian failure" (HP:0008209) in the Human Phenotype Ontology (HPO) database were found and could have influenced the only apparent phenotype of the patient: CEP290 (log2 fold change value = 1.75; q value = 0.004), HFM1 (log2 fold change value = -7.34; q value = 0.001), STAG3 (log2 fold change value = -5.26; q value = 0.00001), and NPHP4 (log2 fold change value = -8.65; q value = 0.0000003). Three of these autosomal genes (CEP290, STAG3, NPHP4) were replicated (padj < 0.05) when we added more female controls (group 2), exhibiting similar log2 fold change trends (positive or negative) (Supplementary Table 10). We should remark that although POF is the unique apparent phenotype in individual II.3, we cannot discard future clinical outcomes in the patient, mainly associated with the diseases, biological processes, molecular functions, and cellular components enriched in the GO analysis for the global differentially expressed genes.
The most obvious alternative explanation for the absence of major clinical symptoms in individual II.3 would be a post-zygotic mosaicism event, involving the concomitant presence of 46,XX cells and cells carrying the Xq25-q28 deletion. Except for rs572013, all the other monoallelic blood-expressed SNPs within the deletion (rs859577, rs8965, rs1059703) are highly polymorphic in the GnomAD Browser (Karczewski et al., 2019), with frequencies of 0.63, 0.52, and 0.68, respectively. The hemizygosity for these markers, in addition to the high metaphase count in the karyotype analysis and the concordant blood and buccal mucosa hemizygosity for microsatellite markers within the deletion, argues against the occurrence of mosaicism, at least in these different embryonic tissues. Altogether, the data also confirm the near-complete XCI skewing. Even though skewed XCI may occur as a purely stochastic event and can vary between tissues and with age, XCI patterns in blood and buccal mucosa are accepted as representative of the pattern in the brain and other tissues (Bittel et al., 2008).
The adjustable compensation mechanism observed in individual II.3 indicates that gene-by-gene upregulation likely occurred on the X-chromosome to reduce deleterious dosage imbalance. Indeed, two major types of X-chromosome dosage compensation can be recognized. One balances X-chromosome gene expression between sexes (achieved by XCI in mammals), and the other equalizes gene expression throughout the genome by changing the relative expression of X-linked genes versus autosomal genes and vice versa (Disteche, 2016). While X-chromosome upregulation relative to autosomes is evident in flies, resulting from a combination of homeostatic gene-by-gene regulation and chromosome-wide regulation (Chen and Oliver, 2015), it is still controversial in mammals (Gupta et al., 2006; Nguyen and Disteche, 2006; Xiong et al., 2010; Deng et al., 2011; Chen and Zhang, 2015). In general, gene compensatory responses include (a) buffering, or passive absorption of gene dose perturbation by inherent system properties; (b) feedback, or gene-specific sensing and adjustment of levels, which can result in overexpression; and (c) feedforward responses representing systems such as the male X-chromosome in Drosophila (Zhang et al., 2010; Disteche, 2016). These mechanisms may act individually or, more likely, in combination. Exploring these hypotheses/mechanisms experimentally in depth is, however, beyond the purpose of our study.
According to the recent literature, dosage upregulation in individual II.3 is presumably due to positive feedback mediated by enhanced transcription initiation, improved mRNA stability, and epigenetic changes favoring expression, mechanisms already described in Drosophila, yeast, and mammals (Deng et al., 2013; Deng et al., 2014; Disteche, 2016). However, we cannot exclude the participation of additional compensatory mechanisms at the post-transcriptional level (e.g., modulation by non-coding RNAs such as miRNAs and lncRNAs) and at the translational/post-translational levels (e.g., increased ribosome density/decreased proteolysis) (Deng et al., 2014; Disteche, 2016). It should be noted that the X-chromosome is particularly amenable to gene-by-gene dosage compensation, since increased transcription levels and RNA stability have independently evolved to upregulate individual X-linked genes after they lost their Y copy (Deng et al., 2013; Deng et al., 2014). Thus, X-linked transcripts appear to have a longer half-life than autosomal transcripts (Yin et al., 2009; Disteche, 2016), and gene-by-gene upregulation is known to differentially regulate subsets of ancestral and acquired X-linked genes to reach a balance with autosomes (Deng et al., 2013; Deng et al., 2014). Similar gene-by-gene compensation mechanisms were also described for other chromosomes. Imprinted genes in mice appear to be upregulated, alleviating deleterious effects at monoallelically expressed genes (Zaitoun et al., 2010). Although no imprinted gene has been identified on the human X-chromosome, there is an important overlap between XCI and such a mechanism, since both are regulated by DNA methylation, histone modification, long non-coding RNAs, and nuclear positioning. Furthermore, gene-by-gene downregulation was demonstrated in patients with Down syndrome (DS; MIM #190685), in which 56% of chromosome 21 transcripts are compensated for the gene-dosage effect, having mRNA levels similar to those of disomic genes (Aït Yahya-Graison et al., 2007).
One significant alternative 5' splice site (A5SS) event involving the HSD17B10 gene and two exon-skipping events in XIST and IDS alternative splicing were also identified in individual II.3. The role of these events is not clear, since they involve X-linked genes outside the deletion. Nonetheless, we cannot exclude that they might be associated with long-range effects of the aberration. Furthermore, it is worth mentioning that the presence of two different rare mutations (the meiotic Xq deletion and the FMR1 expansion) in the same family is very unusual. The same paternal origin of the abnormal chromosomes led us to suspect that a common mechanism was responsible for the premutation allele in the mother of the proband (individual II.2) and the deleted X-chromosome in his aunt (individual II.3).
CONCLUSIONS
Dosage compensation mechanisms associated with sex chromosomes continue to reveal unsuspected intricacies. Altogether, our data suggest that, besides preferential inactivation of the structurally abnormal X-chromosome, an additional protective gene-by-gene mechanism operates at the transcriptional level on the Xa to counterbalance the detrimental effects of large Xq deletions, which can have a high impact on genetic counseling. Further functional investigations of similar cases of females with large Xq deletions and no major detrimental phenotypes, using high-throughput technologies that appraise gene expression combined with chromatin marks, are needed to confirm the proposed upregulation compensatory mechanism in XCI escape/variable escape genes.
DATA AVAILABILITY STATEMENT
The datasets generated for this study can be found in ClinVar (accession number SCV000897650) and the GEO database (accession number GSE141766).
The global state of research in stem cell therapy for spinal cord injury (2003–2022): a visualized analysis
Objective Our study aimed to visualize the global status and research frontiers of stem cell therapy for spinal cord injury using bibliometric methodology. Methods Publication and citation information for studies of stem cell therapy for spinal cord injury (SCI) between 2003 and 2022 was retrieved from the Web of Science Core Collection database. For the visualized study, VOSviewer software and GraphPad Prism 9.5 were used to perform the bibliometric analysis of the included data and the publication-count statistics in the stem cell therapy for SCI domain. Results A total of 6,686 publications were retrieved. The USA and China made the highest contributions to global research, with the highest numbers of citations and link strengths. The journal Experimental Neurology ranks as the top journal when publication counts and bibliometric results are combined. The University of Toronto, based in Canada, was the first-ranking institution. Current research directions could be divided into five clusters. Transplantation and Regenerative Medicine and Neurosciences Mechanism Research may be the emerging frontiers in this domain. Conclusion In summary, stem cell therapy for spinal cord injury is poised for more valuable advances.
Introduction
Traumatic spinal cord injury (SCI) can cause permanent sensorimotor and autonomic dysfunction, seriously affecting a patient's autonomous activities and quality of life (World Health Organization, 2023). With a life expectancy of several decades, the frequency of SCI is between 250 and 906 cases per million (GBD 2016 Neurology Collaborators, 2019; Barbiellini Amidei et al., 2022; World Health Organization, 2023). SCI pathophysiology is one of the most complicated medical disorders, with a primary and a secondary phase (Ahuja et al., 2017). The current gold standard in SCI management can be summed up as timely surgery, medical care, neurorehabilitation, and lifelong care (Zipser et al., 2022). Although the death rate has decreased due to advancements in surgery and drug therapy, there are no optimal treatment strategies to repair damaged nerve cells, and long-term functional rehabilitation is still subpar (Ahuja et al., 2017; Liddelow and Barres, 2017; Mohammed et al., 2019). In traumatic SCI, neuroprotective techniques such as gene therapy, cell-based treatment, and biomaterials (Ziemba and Gilbert, 2017) are used to stop secondary injury mechanisms (Ashammakhi et al., 2019; Yoon et al., 2021; Aderinto et al., 2023). Due to its ability to remyelinate denuded axons, modulate the inflammatory response, restore damaged neuronal circuits, and provide trophic support, cellular transplantation as a regenerative therapy for spinal cord injuries has attracted a lot of attention in recent decades (Liddelow and Barres, 2017; Ashammakhi et al., 2019; Srikandarajah et al., 2023). The mechanisms of stem cells in SCI can be summarized as suppressing immunity against inflammation, releasing nutritional factors to enhance neurological recovery, and promoting the regeneration of in situ cells (Szymoniuk et al., 2022; Xia et al., 2023). Stem cells have been shown to enhance SCI recovery in clinical trials, while clinical translation of stem cell therapy remains difficult. Sensory, motor, and neurological recovery driven by stem cells has been widely demonstrated (Shinozaki et al., 2021; Szymoniuk et al., 2022; Xia et al., 2023). Various challenges affect the progress of stem cell research, such as low patient homogeneity, small sample sizes, insufficient follow-up duration, insufficient understanding of SCI pathophysiology, and poor cell survival with respect to cell type, dosing, and biomaterial delivery (Shang et al., 2022; Hejrati et al., 2023; Schultz et al., 2023; Wong et al., 2023).
In-depth research is currently ongoing to determine the best cell type and transplantation technique for lesion bridging and remodeling, reducing immune rejection, and creating stable circuits (Zipser et al., 2022; Srikandarajah et al., 2023). Bibliometric analysis is a method that can outline data in the vast literature based on bibliometric characteristics and literature databases. It allows the quantitative and qualitative estimation of trends in research activity over previous years, and it provides a means of identifying advancements in a specific domain and contrasting the contributions of publications, organizations, and nations (Wang et al., 2022). In recent years, bibliometric analysis has been successfully utilized in several research domains to support the creation of novel theories and has also been used to assess research frontiers in pain management in osteoarthritis (Chen et al., 2021), brain-computer interface technology (Li et al., 2023), the microbiome-gut-brain axis (Zyoud et al., 2019), and COVID-19 (Goswami and Labib, 2022). A study on the same topic was published in 2019 (Guo et al., 2019), but the field has evolved rapidly since then; therefore, we conducted an updated analysis of stem cell therapy for spinal cord injury, unmasking trends that may be useful for tracking international advancements in the domain and future research frontiers.
Data source and search methods
Literature citation records from the Web of Science Core Collection (WoSCC) database, regarded as an ideal and commonly used data source, were analyzed via bibliometric analysis (Leydesdorff et al., 2013). All papers published from January 1, 2003 to December 31, 2022 were retrieved from the WoSCC, covering articles in the domain over the last two decades. In the present study, the search terms were as follows: (TS = (spinal cord injury) OR TS = (spinal injury) OR TS = (spinal cord trauma)) AND ((TS = (stem cell)) OR TS = (stem cells)) AND PY = (2003-2022) AND LA = (English). We limited the article types to original research and reviews.
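For reproducibility, the Boolean search string above can be assembled programmatically; a minimal sketch (this only builds the query text reported above — it is not an official Web of Science API call, and the exact parenthesization is an assumption):

```python
# Build the WoSCC advanced-search string used in this study.
topic_sci = ["spinal cord injury", "spinal injury", "spinal cord trauma"]
topic_cells = ["stem cell", "stem cells"]

sci_clause = " OR ".join(f"TS=({t})" for t in topic_sci)
cells_clause = " OR ".join(f"TS=({t})" for t in topic_cells)
query = f"(({sci_clause}) AND ({cells_clause})) AND PY=(2003-2022) AND LA=(English)"
print(query)
```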
Data collection
The full record information of all qualifying publications, including title, author, year of publication, nation, affiliation, journal, keywords, and abstract, was downloaded from the WoSCC. GraphPad Prism 9.5 was used for publication-count statistics.
Bibliometric analysis
The intrinsic functions of the WoS database were used to establish the basic characteristics of the papers. VOSviewer software version 1.6.18 (Leiden University, Leiden, The Netherlands) was used for bibliometric visualization and analysis of the literature (van Eck and Waltman, 2010), including co-authorship, bibliographic coupling, co-citation, and co-occurrence analyses (Boyack and Klavans, 2010).
Global publication output
From 2003 to 2022, a total of 6,686 articles met the search criteria. The process for the selection and inclusion of records is illustrated in Figure 1. Measuring the publication counts and trend distribution over time, the number of publications peaked in 2018 with 488 publications and fell to 414 in 2019. From 2019 to 2022, a sluggish rise in worldwide publications was seen (Figure 2A).
Publication distribution across nations
A total of 81 nations and regions contributed to this domain. China published the most related articles (1,898, 30.83%), followed by the USA (1,821, 29.62%), Japan (537, 8.74%), Canada (345, 5.61%), and England (303, 4.93%). The top 20 countries are shown in a bar chart and color-coded on a world map (Figures 2B,C).
Total citation frequency
The number of citations for publications from the USA was the greatest (100,441), while China ranked second (40,982), followed by Japan (23,388), Canada (20,637), and Germany (15,873) (Figure 2B).
Publication distribution across authors
The top 10 authors contributed 570 papers in total, or 8.53% of all publications in this subject (Table 2). Okano Hideyuki and Nakamura Masaya, ranked first and second, respectively, are both from Keio University in Japan. Dai Jianwu, Xiao Zhifeng, Zhao Yannan, and Chen Bing collaborated closely in China. Figure 3C displays the top 20 authors as a bar chart.
Funding sources across WoS categories
In total, the top 20 major funding bodies across WoS categories supported 5,311 studies, as shown in Figure 3D. The National Natural Science Foundation of China (NSFC), the United States Department of Health and Human Services, and the National Institutes of Health were the top three funding sources, supporting 1,017, 799, and 789 studies, respectively. Two Japanese funders ranked fourth and fifth: the Ministry of Education, Culture, Sports, Science and Technology Japan (MEXT) (299) and the Japan Society for the Promotion of Science (270).
Co-authorship analysis
Co-authorship analysis measures researchers' publication links and can be used to examine the link strength of individual authors or scaled up to reflect the co-authorship link strength of nations and institutions (Chen et al., 2021).
In the co-authorship analysis, the relatedness of items is based on the number of papers co-authored; 209 authors who published 10 articles or more were examined (Figure 4A). The following were the top five authors by link strength: Okano Hideyuki (579), Nakamura Masaya (553), Dai Jianwu (446), Xiao Zhifeng (419), and Zhao Yannan (396).
VOSviewer was used to evaluate the 53 countries with five or more publications (Figure 4B). The USA had a total link strength of 1,102, followed by China with 533, England with 340, Germany with 328, and Japan with 263. Publications from the 294 academic affiliations with five or more publications were also examined (Figure 4C). The University of Toronto (197), the University of California, San Diego (184), Sun Yat-sen University (179), Tehran University of Medical Sciences (167), and the Chinese Academy of Sciences (149) were the top five institutions by total link strength.
Bibliographic coupling analysis
Using VOSviewer, the journals of all articles were examined. A total of 250 identified journals were visible in the link-strength map, as seen in Figure 5A. Cell Transplantation (256,384), Experimental Neurology (230,171), Journal of Neurotrauma (182,712), Neural Regeneration Research (170,468), and PLoS One (151,956) were the top five journals with the highest total link strength.
Papers from the 639 institutions with five or more publications were examined. The University of Toronto (790,059), Keio University (472,486), Sun Yat-sen University (415,185), the Chinese Academy of Sciences (335,691), and the University of California, San Diego (307,960) were the top five institutions with the highest total link strength (Figure 5B).
Papers from the 53 countries with five or more publications were examined. The USA (3,048,437), China (2,393,090), Japan (1,104,477), Canada (1,067,438), and England (712,587) were the top five countries with the highest total link strength (Figure 5C).
Co-citation analysis
Co-citation analysis displays the relationship between items based on how frequently they are cited together in a single document.
The overall co-citation link strength of authors and journals was examined using VOSviewer (van Eck and Waltman, 2017).
The link strengths of 1,000 journals were displayed, and every journal chosen had at least 38 co-citations in this domain.
Co-occurrence analysis
The purpose of co-occurrence analysis is to discover research interests and emerging topics in the literature, and it has proven important for monitoring the development of science and research programs (van Eck and Waltman, 2009; Wang et al., 2019). Keywords that appeared five times or more were analyzed using VOSviewer. Node size in the figure indicates the frequency of occurrence, and lines represent connections between nodes. As shown in Figure 7A, the 1,000 identified keywords were classified into approximately five clusters: Combinatorial Therapy; Types of Stem Cells; Clinical Therapy; Transplantation and Regenerative Medicine; and Neurosciences Mechanism Research.
The timeline graph in Figure 7B shows the chronological distribution of the keywords: blue indicates that a keyword appeared earlier, and yellow that it appeared later. Before 2012, namely in the early stage of research, most studies focused on Types of Stem Cells. The latest trends show that the Transplantation and Regenerative Medicine and Neurosciences Mechanism Research clusters will receive wide attention in the future.
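Conceptually, the co-occurrence map counts how often pairs of keywords appear together in the same record and then filters by a frequency threshold; a minimal sketch of that counting step (toy records and a toy threshold, not the actual dataset or VOSviewer's implementation):

```python
from collections import Counter
from itertools import combinations

# Toy keyword lists, one per publication record.
records = [
    ["spinal cord injury", "stem cell", "transplantation"],
    ["spinal cord injury", "regeneration", "stem cell"],
    ["stem cell", "transplantation", "biomaterials"],
]

occurrences = Counter(kw for rec in records for kw in rec)
pairs = Counter(
    pair for rec in records for pair in combinations(sorted(set(rec)), 2)
)

# VOSviewer-style filtering: keep keywords above a frequency threshold
# (the study used >= 5 occurrences; the toy threshold here is 2).
kept = {kw for kw, n in occurrences.items() if n >= 2}
for (a, b), n in pairs.items():
    if a in kept and b in kept:
        print(f"{a} -- {b}: co-occurs {n}x")
```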
Discussion
In this study, we used a combination of bibliometric and visualized analyses to generate a representation of the current state of stem cell therapy research for spinal cord injury.
Research trends analysis
Since 2003, the publication count grew continuously until it peaked in 2018, and it recovered to the 2018 level by 2022, indicating increasing attention from scholars. The advent of new technologies, such as spatiotemporal epidural electrical stimulation (Kathe et al., 2022) and the brain-spine interface (Lorach et al., 2023), which divert research interests, may be one reason for the decline in articles. In addition, stem cell therapies for SCI have not yet provided reproducible evidence, being challenged by small effect sizes, low immune suppression, and low-sensitivity study designs, which may be another reason (Zipser et al., 2022; Hejrati et al., 2023; Wong et al., 2023).
Quality of global publications by country, author, institution, and journal
China has the highest number of publications and the second-highest total citation frequency, while the USA has a smaller body of literature but a total citation frequency almost twice that of China. The top two countries have the largest number of funding supports, as well as the top ranks in the bibliographic coupling and co-authorship analyses conducted by country. These trends suggest that the USA and China have the largest output, the highest academic impact, and extensive cooperation in this field. With increases in Chinese research funding, the quality of publications and the academic impact of Chinese academia should improve further. On the other hand, Japan and South Korea in Asia, and England, Germany, and Italy in Europe, have produced a large volume of publications of high quality and impact over the past two decades. SCI is still a tough challenge, mainly due to various pathological mechanisms, including hemorrhage, ischemia, oxidative stress, inflammatory reaction, scar formation, and demyelination, which are difficult to describe and elaborate clearly (Kim et al., 2011; Ashammakhi et al., 2019; Aderinto et al., 2023; Hu et al., 2023). The cell response is the basic unit in the pathophysiology of SCI, and the elaboration of stem cell response mechanisms is of great importance for finding effective intervention targets for SCI. Stem cells derive from a wide range of sources and have self-proliferation and multidirectional differentiation capabilities (Liu et al., 2022). The immunomodulatory mechanism is the most attractive aspect, mediated by contact between stem cells and immune cells and by exosomes produced through paracrine effects (Ankrum et al., 2014). Another mechanism is the promotion of axon regeneration to repair damaged cells. In addition, stem cells can promote vascular repair, which is a new target for SCI treatment (Ni et al., 2018). When the understanding of the molecular mechanisms is sufficient, we can find reliable strategies to boost stem cells' functional multipotency (Feng et al., 2022).
To achieve better treatment of SCI with stem cells, transplantation and regenerative medicine are needed, combining stem cells and biomaterials via tissue engineering (Aderinto et al., 2023). Stem cell transplantation has been deemed a promising way to replenish lost spinal nerve cells (Xu et al., 2023). As mentioned above, the effectiveness of stem cell injection is hampered by challenges in cell delivery and low cell survival rates, while co-transplantation of stem cells and biological scaffolds may have the potential to improve treatment performance but can lead to adverse reactions, including local inflammation and immune rejection (Chen et al., 2021). Regenerative medicine currently focuses on three-dimensional networks to keep stem cells at the site of injury, extracellular matrices that better maintain cell viability, and biological strength. In the future, neuroscience mechanisms, transplantation, and regenerative medicine still need more in-depth research (Yousefifard et al., 2016; Wallace et al., 2019; Zhang et al., 2022).
Strengths and limitations
Although the present study evaluated the overall situation and trends in stem cell therapy for spinal cord injury via bibliometric and visualized analyses, the following limitations have to be mentioned. Only English-language articles and reviews indexed in the SCIE database of WoS were included; non-English-language literature could therefore have been omitted, leading to language bias. Additionally, differences may exist between the real world and the present results. Therefore, we still need to focus on the latest primary studies and non-English studies in our daily research work.
Conclusion
The present study showed the global trends in stem cell therapy for spinal cord injury. The USA and China are the top two contributors and hold the leading positions in global research in this field. The journal Experimental Neurology ranked as the top journal when publication counts and bibliometric results are combined. We believe that more studies on stem cell therapy for spinal cord injury will be published in the coming years. In particular, Transplantation and Regenerative Medicine and Neurosciences Mechanism Research, involving stem cell therapy for spinal cord injury, are the next popular hot spots.
Publication distribution across journals
The journal Neural Regeneration Research published the most studies, with 172 publications. There were 150 articles in Cell Transplantation, 135 in Experimental Neurology, 119 in PLoS One, and 98 in the International Journal of Molecular Sciences on stem cell therapy for spinal cord injury. Table 1 lists the top 10 journals by number of studies with their Quartile in Category (2022), and the top 20 journals are shown in a bar chart (Figure 3A).
FIGURE 2. Global trends and countries contributing to stem cell therapy for spinal cord injury research. (A) The global number of publications related to stem cell therapy for spinal cord injury research; green bars indicate single-year publication numbers. (B) The sum of related articles from the top 20 countries; green bars indicate single-country publication numbers, and black dots indicate each country's citation number. (C) World map showing the distribution of stem cell therapy for spinal cord injury research.
FIGURE 3. Publication counts of different journals, institutions, authors, and contribution funds in stem cell therapy for spinal cord injury research. (A) The top 20 journals. (B) The top 20 institutions. (C) The top 20 authors. (D) The top 20 contribution funds.
The following were the top five journals by total co-citation link strength: the Journal of Neuroscience (2,032,080), Experimental Neurology (1,397,751), the Proceedings of the National Academy of Sciences USA (1,143,804), Nature (964,548), and Biomaterials (964,462) (Figure 6A). The link strengths of 1,000 authors were displayed, and every chosen author had at least 55 co-citations in this area. The following authors ranked in the top five by overall link strength: Lu P. (61,747), Li Y. (32,781), Cao Q. L. (29,387), Basso D. M. (27,985), and McDonald J. W. (27,438); further authors' total link strengths are shown in Figure 6B.
FIGURE 4. Co-authorship analysis of stem cell therapy for spinal cord injury research. (A) Mapping of the 209 authors. (B) Mapping of the 53 countries. (C) Mapping of the 294 institutions. Points indicate authors/countries/institutions that established collaborations; the thicker the line, the closer the link between two items.
FIGURE 5. Bibliographic coupling analysis of stem cell therapy for spinal cord injury research. (A) Mapping of the 250 identified journals. (B) Mapping of the 639 identified institutions. (C) Mapping of the 53 identified countries. (D) Mapping of the 984 identified authors. A line between two items shows an established similarity relationship; the thicker the line, the closer the link between the two journals/institutions/countries/authors.
FIGURE 6. Co-citation analysis of stem cell therapy for spinal cord injury research. (A) Mapping of the 1,000 journals' co-citation analysis. (B) Mapping of the 1,000 authors' co-citation analysis. The thicker the line, the closer the link between two journals/authors.
The relative contributions of specific institutions to the field of stem cell therapy for SCI are reflected in the publication counts and the link strengths of the bibliographic coupling and co-authorship analyses. Not unexpectedly, the highest-contributing institutions are from the top-contributing countries, particularly the USA and China. The University of Toronto, based in Canada, is the first-ranking institution. The University of California, San Diego, Sun Yat-sen University, and the Chinese Academy of Sciences are also top-class institutions. It is worth mentioning that the Tehran University of Medical Sciences is the only top institution based in the Middle East. The color-cluster results show that the top-class institutions within the same country are highly collaborative and interconnected. Okano Hideyuki and Nakamura Masaya, committed to research on induced pluripotent stem (iPS) cells to repair spinal cord injury, are both from Keio University, another top-class institution in this field in Japan. Fehlings Michael G. is another author with a high publication volume and citation count, from the University of Toronto. Dai Jianwu, Xiao Zhifeng, Zhao Yannan, and Chen Bing collaborated closely in China. Authors and their institutions can contribute significantly to this field and gain great influence. According to the clusters, cooperation between authors and institutions within the same color group is tight, while different color groups cooperate more loosely; therefore, closer academic cooperation between different groups of countries and institutions may yield more achievements. The relative contributions of journals to the field are reflected in the publication counts and the link strengths of the bibliographic coupling and co-citation analyses. Combining the publication counts and bibliometric results, Experimental Neurology ranks as the top journal, with 135 articles published and 8,571 citations, placed in division area 2 of the periodical ranking of the Documentation and Information Center of the Chinese Academy of Sciences. Cell Transplantation, Neural Regeneration Research, the Journal of Neuroscience, and the Proceedings of the National Academy of Sciences USA also rank among the leading journals in this field.
FIGURE 7. Co-occurrence analysis of research on stem cell therapy for spinal cord injury. (A) Mapping of keywords; point size represents frequency, and the keywords are divided into five clusters: Neurosciences Mechanism Research (upper, red), Clinical Therapy (left, purple), Combinatorial Therapy (left, blue), Types of Stem Cell (right, green), and Transplantation and Regenerative Medicine (lower, yellow). (B) Distribution of keywords along the timeline of appearance; keywords in blue appeared earlier than those in yellow.
TABLE 1. The top 10 journals with the most published literature from 2003 to 2022.
TABLE 2. The top 10 most active authors by number of publications from 2003 to 2022.
The effect of meniscal repair on strength deficits 6 months after ACL reconstruction.
Introduction Ruptures of the anterior cruciate ligament (ACL) can be accompanied by meniscal lesions, and rehabilitation protocols are generally altered by meniscal repair. Therefore, the aim of this study was to investigate the effect of meniscal repair on the early recovery of thigh muscle strength after ACL reconstruction (ACLR). Materials and methods We performed a matched cohort analysis of n = 122 isolated ACLR (CON) compared to n = 61 ACLR with meniscal repair (ACLR + MR). The meniscal repair subgroups consisted of 30 patients who had undergone medial meniscus repair (MM), 19 lateral meniscus repair (LM), and 12 repair of both the medial and lateral meniscus (BM). Isokinetic strength measurement was performed preoperatively and 6 months post-surgery for a cross-sectional and a longitudinal analysis. All injuries were unilateral, and the outcome measures were compared to the non-affected contralateral leg. Results Six months postoperatively, there was no significant overall difference between the groups (extension strength: MR 82% vs. CON 85%; flexion strength: 86% vs. 88%, respectively). Subgroup analysis showed that medial repairs exhibited comparable leg symmetry, while lateral repairs performed worse, with leg symmetry of 76% in extension and 81% in flexion strength. Patients undergoing BM repair performed between lateral and medial repairs (82% extension, 86% flexion). Conclusion Generally, meniscal repair in conjunction with ACLR does not significantly alter the recovery of limb strength symmetry at 6 months postoperatively. Interestingly, medial repairs seem to perform better than lateral meniscal repairs and repairs of both menisci. Since the recovery of symmetric strength is a major factor in rehabilitation testing, these results will help advise surgeons on appropriate rehabilitation protocols and on setting realistic goals for the injured athlete. Level of evidence III, retrospective cohort study.
Introduction
When performing anterior cruciate ligament reconstruction (ACLR), one main goal of surgeon and patient is a safe return-to-sport. ACL injuries that occur during pivoting or cutting movements carry a relevant risk of associated meniscal lesions [1]. Recent studies have underlined this co-morbidity in ACL ruptures, showing that ACL insufficiency increases the risk and severity of meniscal tears [1,2]. While return-to-sports after isolated ACLR has been the focus of many publications, little is known about the role that additional meniscal repair may play in this regard [3][4][5][6].
Meniscal repair has become a standard procedure accompanying ACLR over the last two decades [7,8]. Generally, it has been shown that meniscal repair performed at the same time as ACLR has better outcomes than meniscal repair alone [9,10]. Previous studies have evaluated outcome parameters associated with meniscal repair: the short-term results of meniscal repair in conjunction with ACLR show that patients may have slightly worse subjective function during the first 6 months [11]. However, the long-term outcome, measured by arthrometric measurements and signs of osteoarthritis, is better whenever the meniscus is preserved [12]. This can be well explained by the additional stability provided by the menisci [13]. Subgroup analyses suggest that patients requiring medial meniscal repair may have a slightly worse long-term outcome in subjective function and a higher risk of developing intrameniscal cysts compared to lateral repair [11,[14][15][16].
When performing meniscal repair, especially in conjunction with ACLR, there is no consensus on the ideal rehabilitation scheme or return-to-play protocol [17,18]. Furthermore, rehabilitation schemes differ greatly depending on the surgical technique and the location of the meniscal lesion [17]. Generally, partial weight-bearing and restriction of range of motion are frequent during the first postoperative weeks after meniscal repair [17]. This contradicts the current recommendations for rehabilitation following ACLR, in which early weight-bearing and full range of motion have been shown to be beneficial [17,19]. Thus, rehabilitation after meniscal repair may negatively affect the rehabilitation process and subsequently delay return-to-sports [3,4].
Reducing bilateral strength deficits and normalizing ipsilateral strength balance are important factors for a safe return-to-sport [6]. Several studies have demonstrated post-ACLR imbalances between the operated and the contralateral leg in knee flexion and extension strength [6]. Strength deficits are the most commonly reported criteria for return-to-play [4]. Furthermore, higher postoperative quadriceps strength is associated with improved return-to-sports [20]. Muscular deficits have been shown to be most pronounced within the first 6 months after surgery, while they may persist for up to several years [6,21]. Additionally, persisting strength deficits are associated with a reduced return-to-play rate and worse patient-reported outcomes [3,22]. It must be stated, however, that the literature on functional measures in the context of return-to-play following meniscal repair is scarce.
Purpose
Hence, the goal of this study was to analyze the effect of meniscal repair on the strength outcomes 6 months post-ACLR. Second, we performed a subgroup analysis to differentiate the potential outcomes according to the location of the meniscal repair. We hypothesized that the strength deficits would be more pronounced in patients undergoing ACLR with meniscal repair when compared to isolated ACLR.
Methods
This is a retrospective analysis of our prospectively collected data of patients treated with ACLR between 12/2015 and 04/2017. All procedures were performed at our orthopedic hospital by a total of five different surgeons following the same standardized procedure. This study was approved by the local ethics committee (EKNZ 2017-01825) and performed according to the Declaration of Helsinki in its current form.
Patients
We screened the medical records of 221 patients who were scheduled for ACL reconstruction. Inclusion criteria for the meniscal repair group were unilateral ACLR with meniscal repair in the same session. The matched control group had undergone unilateral ACLR without meniscal repair; patients undergoing partial meniscectomy were also included in the control group. Exclusion criteria for both groups were second-stage revision, additional cartilage procedures (microfracturing, matrix-associated chondrogenesis), or osteotomies performed on either leg. Furthermore, we excluded all patients who had suffered relevant injuries to either leg, such as contralateral ACL ruptures and previous tendon or muscular injuries of the lower limbs. Associated treatments like partial meniscectomy or superficial chondroplasty with no effect on the postoperative proceedings were not specifically recorded. Figure 1 summarizes patient recruitment in a flowchart.
A total of 61 patients met our inclusion criteria for the meniscal repair group (ACLR + MR). For the control group, n = 122 patients were chosen as a propensity-matched control group (ACLR), matched for choice of graft, age decade, sex, height, and revision ACLR. This matching resulted in two cohorts with a balanced distribution of covariates (Table 1).
Subgroups
The meniscal repair group was further divided according to the location of meniscal repair, with n = 30 medial meniscal repair (MM), n = 19 lateral meniscal repair (LM) and n = 12 meniscal repair in both compartments (BM) (see below).
Of the 30 patients undergoing medial meniscal repair, 24 of these lesions were in the posterior horn, two were bucket-handle lesions, and four were primarily in the pars intermedia. Of the 19 patients who had a repair of the lateral meniscus (LM), 14 lesions were in the posterior part, 3 in the pars intermedia, 1 in the anterior horn, and 1 was a meniscal root repair. Of the 12 patients repaired on both the lateral and medial meniscus, 7 had both lesions in the posterior horn, 3 were not specified separately, and 2 underwent repair of a lateral root tear combined with a medial posterior horn repair.
Surgical technique
For ACLR, we used proximal extra-cortical fixation (Endobutton CL Ultra, Smith & Nephew, London, UK) and tibial hybrid fixation using a bioresorbable interference screw with additional extra-cortical fixation; the femoral tunnel was drilled via the anteromedial portal.
The meniscal repairs were performed as follows: posterior horn repairs were performed using an all-inside technique (FastFix, Smith&Nephew, London, UK), the anterior lesion was repaired using an outside-in technique and the three root tears were arthroscopically reconstructed via an additional transtibial drilling with extracortical button fixation (EndoButton CL Ultra, Smith&Nephew, London, UK). Repairs of bucket-handle lesions were performed combining all-inside techniques (FastFix, Smith&Nephew, London, UK), and inside-out techniques using non-resorbable sutures (PDS 2-0).
Rehabilitation scheme
The postoperative rehabilitation scheme was highly standardized for all patients in whom isolated ACLR was performed; in these patients, immediate full weight-bearing was allowed. The knee flexion angle was limited to 90° for 2 weeks, and no knee orthosis was used.
In those patients undergoing an isolated medial meniscus repair, the passive flexion limit was kept at 90° for 6 weeks; however, immediate full weight-bearing was allowed while keeping the leg in full extension and wearing an external brace during mobilization. Only the patients after a bucket-handle repair had partial weight-bearing (15 kg) for 3 weeks.
In lateral meniscal repair, flexion angle was limited according to the location and severity of the tear; partial weight-bearing was recommended for 3 weeks with a flexion limit of 60° for 3 weeks and 90° for another 3 weeks. In the three cases of meniscal root fixation, no weight-bearing was allowed for 6 weeks with passive flexion angles of 60-90° during that time.
Generally, after the first 6 weeks, the progression within the individual rehabilitation scheme was criterion based [23]. One key factor in allowing a progressive weight-bearing was the focused activation of the quadriceps muscle to allow active anterior-posterior stabilization. Furthermore, in cases of postoperative flexion limit, a gradual increase using continuous passive motion machines was recommended until reaching 90° of knee flexion. Before restarting running activity, an adequate stabilization of a single leg stance was required. Running activity was initially supported using an anti-gravity treadmill; generally, most patients returned to running about 4-5 months postoperatively.
Strength measurements
The functional testing was performed preoperatively and, on average, 26 weeks post-surgery. However, those patients undergoing surgery within the first few days after the accident, or suffering from meniscal impingement or other causes of limited preoperative ROM such as bucket-handle lesions, did not perform preoperative isokinetic strength measurements. The modalities of the strength measurements used in this study are as previously described and in accordance with the current recommendations in the literature [24]. For all strength measurements, we used an isokinetic dynamometer (Humac Norm, CSMi, Stoughton, USA).
Concentric peak torque in flexion and extension was measured as the average of five repetitions at a 60°/sec dynamometer speed. Prior to strength assessments, three submaximal trials were applied for familiarization. Isokinetic testing was completed with maximal effort and verbal encouragement in concentric-concentric mode. During strength assessment, patients were sitting upright, upper body fixed, hands at the grips, while the leg was tightly fixed at the thigh with the lever arm positioned at two-thirds of the lower leg.
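To make the derived measures explicit: peak torque is averaged over the five repetitions per limb, limb symmetry is the operated-to-non-affected ratio, and the H/Q ratio divides flexion by extension strength. A minimal sketch with hypothetical torque values (the numbers are illustrative, not study data):

```python
# Hypothetical peak torques (Nm) for five repetitions at 60 deg/s.
operated = {"extension": [142, 150, 147, 145, 149],
            "flexion": [88, 91, 90, 92, 89]}
healthy = {"extension": [171, 176, 174, 178, 172],
           "flexion": [101, 104, 103, 102, 100]}

def mean(values):
    return sum(values) / len(values)

# H/Q ratio: mean flexion peak torque divided by mean extension peak torque.
for leg, data in (("operated", operated), ("healthy", healthy)):
    hq = mean(data["flexion"]) / mean(data["extension"])
    print(f"{leg}: H/Q ratio = {hq:.2f}")

# Limb symmetry: operated limb as a percentage of the non-affected limb.
for movement in ("extension", "flexion"):
    lsi = 100 * mean(operated[movement]) / mean(healthy[movement])
    print(f"{movement}: limb symmetry = {lsi:.0f}%")
```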
Statistical analysis
Missing data were explored according to their pattern and cause [25]. The mechanism behind the missing data followed a missing-completely-at-random pattern. In logistic regression analyses, predictors of missingness were determined based on demographic and clinical characteristics, and those predictors found to be significant were used to estimate missing data in multiple imputations. All statistical analyses were run as complete-case analyses and then contrasted in a sensitivity analysis with multiple imputations of missing data [25]. Prior to the statistical analyses, the assumptions of the independent- and dependent-samples Student's t tests, as well as of the repeated-measures analysis of variance used to compare outcomes in operated and non-affected limbs in each group and to compare outcomes over time between groups, were tested. The presence of normal distributions and the number of outliers in the outcomes were checked using data exploration techniques. To remedy problems with assumptions, outlying observations were shifted to the respective lower and upper ends of 1.5 times the interquartile range to truncate their influence on the data [26].
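The outlier handling described above amounts to clamping observations to the 1.5 × IQR fences; a minimal sketch of one plausible reading of this procedure (the sample values are hypothetical):

```python
import numpy as np

def truncate_outliers(values, k=1.5):
    """Shift observations beyond k*IQR from the quartiles to the fences."""
    q1, q3 = np.percentile(values, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return np.clip(values, lower, upper)

# Hypothetical torque values with two outliers (210 and 90).
torque = np.array([140.0, 145.0, 150.0, 148.0, 152.0, 210.0, 90.0])
print(truncate_outliers(torque))
```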
Two main analyses were run. First, to compare outcomes between the meniscal repair group and the matched control group over time, a repeated-measures analysis of variance (rmANOVA) was conducted with a main factor for group (ACLR + MR vs. matched control group) and a two-level factor for time (pre/post) for each outcome. Mauchly's test of sphericity was used to check assumptions, with the Greenhouse-Geisser correction employed if the assumption of sphericity was violated. The level of significance was defined at p < 0.05. In addition to statistical significance, the effect size eta squared (η²) and the percentage change (observed difference relative to the total amount of difference over time) were calculated for the pairwise comparisons of the repeated-measures factor time. Effect sizes were interpreted following Cohen [27] as small: 0.01, medium: 0.06, and large: 0.12. A second analysis compared strength outcomes between operated and uninjured limbs in each meniscal repair group (medial, lateral, both, and no-repair group) using dependent-samples Student's t tests. Due to the high number of statistical tests, the threshold for statistical significance was Bonferroni-corrected to p < 0.002.
Due to the retrospective nature of the study, an a priori sample size calculation was not possible. However, we calculated the maximum effect sizes that could be detected given the available data to estimate the type II error. For the first analysis using rmANOVA, a power of 0.8 and an alpha error of 0.05 at a medium correlation of 0.5 between the repeated measurements would yield a detectable effect size of partial η² = 0.02 (f = 0.13). For the second analysis, we performed a sample size calculation for a matched-pairs t test assuming an alpha error of 0.05, a power of 0.8, and an effect size of 0.5, which led to a minimum group size of n = 34.
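The matched-pairs sample-size figure quoted above can be reproduced with a standard power routine; a minimal sketch using statsmodels (a paired t test is evaluated as a one-sample t test on the paired differences):

```python
from math import ceil

from statsmodels.stats.power import TTestPower

# Paired t test: effect size d = 0.5, alpha = 0.05, power = 0.8.
n = TTestPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8,
                             alternative="two-sided")
print(ceil(n))  # ~34, matching the minimum group size reported above
```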
Results
The amount of missing data in the preoperative assessment was higher than at the 26-week post-surgery measurement time point: on average, 33% of data were missing pre-surgery, dropping to 14% at the postoperative time point. Table 2 presents the calculations for the between-group analyses and the effect sizes of changes from pre- to post-surgery. Over the course of rehabilitation, all absolute strength values of the operated limb improved significantly (p < 0.05). The meniscal repair group had a higher preoperative deficit in knee extension strength when compared to controls (p = 0.07); thus, the improvement during rehabilitation was greater in this group than in controls (19% vs. 5%). In the control group, the non-affected limb also showed significant improvements over time in extension (p = 0.04) and flexion (p = 0.03) strength, whereas the meniscal repair group's healthy leg did not change. The limb symmetry for extension strength improved significantly in both groups (CON pre 80% to post 85%, and MR pre 63% to post 82%; see Fig. 2). The absolute strength as well as the leg symmetry for knee flexion strength (see Fig. 3) showed comparable values for the meniscal repair and CON groups. There were no significant group × time interactions, as shown in Table 2. Also, the H/Q ratio did not change over time, and effect sizes were negligible.
Cross-sectional analysis and meniscal repair subgroups
At 6 months post-surgery, the cross-sectional analysis of the meniscal repair subgroups revealed several differences between the different locations of meniscal repair, as shown in Table 3. Generally, all groups still showed a relevant side-to-side deficit, with the operated leg achieving lower values for extension and flexion strength.
Overall, MM repair showed higher limb symmetry in flexion (89%) and extension (82%) strength when compared to LM (81% and 76%, respectively) or BM (86% and 82%, respectively), but significance was not reached between the subgroups. The values of MM were also comparable to those observed for CON (88% and 85%), and no significant differences between the groups were found. Lateral meniscal repair showed the overall lowest values for limb symmetry, while the absolute strength values for the non-affected limb were the highest. Across all groups, the H/Q ratio was higher for the operated limb when compared to the non-affected limb.
Discussion
The most important finding of this study was that meniscal repair performed in conjunction with ACLR does not necessarily alter isokinetic strength performance at 6 months postoperatively. Depending on the location of the meniscal lesion, it seems that lateral repair performs worse than medial meniscal repair. These results may be used to advise patients undergoing ACLR who strive to return to play as soon and as safely as possible [30].
Preserving the meniscus in ACLR should be sought whenever possible [31]. It improves subjective outcomes and objective knee stability, shows lower re-operation rates, and prevents the progression of osteoarthritis in the long term [1,9,14,32,33]. At the same time, current research underlines that early ACLR improves the outcome after meniscal repair in conjunction with ACLR while protecting the knee from secondary injuries such as chondral lesions and aggravated meniscal lesions [1,2,34]. While osseous factors like tunnel positioning and tibial slope are established important factors for ACL graft failure, the role of periarticular structures, meniscal kinematics, and strength deficits following ACLR is still a major focus of current research [6,[35][36][37][38][39].
In addition to these beneficial long-term effects, our study revealed that short-term function is only slightly lower in some patients, and overall not significantly altered by meniscal repair. This evidence will encourage the ambitious athlete, showing that the recovery of thigh muscle strength does not seem to be significantly delayed by meniscal repair [30,40]. This is important since the recovery of strength balance is one major factor in clearing athletes for a safe return-to-competition [6,41].
Longitudinal analysis
Those patients undergoing meniscal repair showed inferior strength and leg symmetry at the time of surgery when compared to controls. This is in line with the literature, in which patients undergoing meniscal intervention have shown inferior preoperative function and performance [11,14,42,43]. Possibly, the greater amount of damaged tissue causes a more severe arthrogenic inhibition of the periarticular muscles [44,45]. This inhibition, as well as local factors such as pain, swelling, and inflammation, may disappear once meniscal integrity is restored [17]. Consequently, the meniscal repair group did not show a pronounced deficit in extension strength at the postoperative testing. However, this somewhat contradicts earlier findings showing that preoperative quadriceps strength correlates with postoperative strength recovery [22,42]. Furthermore, it has been suggested that the partial weight-bearing during rehabilitation of meniscal repairs reduces quadriceps strength [11,17,46]. Our results indicate, however, that it is possible for the athlete to regain quadriceps strength after ACLR + MR within the same time period as after isolated ACLR. Interestingly, limb symmetry in knee flexion strength was generally higher than in quadriceps strength, and effect sizes were stronger. Despite harvesting hamstring tendons in the majority of the patients (72-78%), the recovery of knee flexion strength was more symmetric at 6 months postoperatively than that of knee extension strength. This may be due to the finding that arthrogenic muscle inhibition primarily affects the quadriceps muscle, causing a prolonged strength deficit in knee extension compared to flexion strength [45,47]. All other established factors affecting quadriceps strength after ACLR, such as age and gender, were equally distributed across the groups.
Cross-sectional analysis of subgroups
The importance of achieving adequate limb strength symmetry before returning to the field is well accepted [6,24,30]. Return-to-play criteria mostly require a recovery of > 85% of the healthy limb's strength [3,6,20]. In our subgroup analysis, we were able to show that only medial meniscal repair (MM 87.2%) fulfilled this criterion, while repairs of the lateral meniscus (LM 75.7%) and repairs of both menisci (BM 82.3%) still exhibited a greater deficit. In contrast, a recent analysis revealed that medial as well as lateral repair is associated with reduced quadriceps strength at 6 months postoperatively, while knee flexion strength was not significantly reduced [46]. Cristiani et al. attributed this to the early restriction in range of motion and partial weight-bearing [46]. However, scientific evidence on the effect of these limitations is scarce.
Since strength asymmetry has been linked to subjective knee function post-ACLR [48], the strength deficit observed in our study may explain why lateral meniscal repairs also exhibit worse subjective function at 6 months post-surgery [11]. In previous studies, comparable subjective function after meniscal repair was achieved as late as 1 and 2 years post-surgery [11,14]. A limb symmetry below the cut-off value of 85% was found in the LM and BM groups, which must be considered clinically relevant [6]. Thus, lateral meniscal repair may require more time before this return-to-play criterion is successfully passed. It may be suggested that this is due to the initial restrictions (range of motion and weight-bearing), which may have a persisting negative effect on strength recovery. However, the results of this subgroup analysis are of an explorative character and do not allow a causative interpretation.
Furthermore, it needs to be stated that the interindividual variability was relatively high; hence, none of the differences between the subgroups reached statistical significance when adjusted for multiple testing. This underlines the observation that individual recovery from ACLR varies greatly [30]. In line with current publications, this supports a criterion-based rehabilitation over an isolated time-based approach [30,49,50].
Limitations of the study include the relatively high rate of missing data from the preoperative isokinetic analysis, which is owed to the fact that patients with effusion, pain, or unstable meniscal lesions were excluded from the preoperative functional analysis. Furthermore, the sample size calculation suggested that a type II error might be present in the subgroup analysis, which requires careful interpretation, especially of the results in the LM and BM groups.
Conclusions
This study revealed that meniscal rupture and repair, when performed in conjunction with ACLR, have no significant effect on isokinetic strength outcomes 6 months after surgery. Despite the more conservative rehabilitation in the first postoperative weeks, patients seem to recover their strength as quickly as 6 months postoperatively. However, the location of the meniscal lesion seems to influence the recovery of strength, with lateral meniscal repairs performing worse than medial meniscal repairs.
Intraspecific Fine-Root Trait-Environment Relationships across Interior Douglas-Fir Forests of Western Canada
Variation in resource acquisition strategies enables plants to adapt to different environments and may partly determine their responses to climate change. However, little is known about how belowground plant traits vary across climate and soil gradients. Focusing on interior Douglas-fir (Pseudotsuga menziesii var. glauca) in western Canada, we tested whether fine-root traits relate to the environment at the intraspecific level. We quantified the variation in commonly measured functional root traits (morphological, chemical, and architectural traits) among the first three fine-root orders (i.e., absorptive fine roots) and across biogeographic gradients in climate and soil factors. Moderate but consistent trait-environment linkages occurred across populations of Douglas-fir, despite high levels of within-site variation. Shifts in morphological traits across regions were decoupled from those in chemical traits. Fine roots in colder/drier climates were characterized by a lower tissue density, higher specific area, larger diameter, and lower carbon-to-nitrogen ratio than those in warmer/wetter climates. Our results showed that Douglas-fir fine roots do not rely on adjustments in architectural traits to adapt rooting strategies in different environments. Intraspecific fine-root adjustments at the regional scale do not fit along a single axis of root economic strategy and are concordant with an increase in root acquisitive potential in colder/drier environments.
Introduction
Functional traits of fine roots and mycorrhizal symbionts have become central to understanding belowground acquisition strategies and plant responses to environmental change from local to global scales [1][2][3][4][5]. Recent syntheses of large-scale datasets have facilitated exploration of interspecific (i.e., among-species) fine-root functional trait variation. These studies have notably advanced our understanding of the fundamental constraints underlying fine-root trait variation (e.g., phylogeny and climate and growth form [4,[6][7][8][9]). Importantly, studies across plant species have reported inconsistent evidence for a single root economic spectrum varying from more conservative roots with high structural investment to more cheaply constructed, acquisitive root types [5,[10][11][12]. As a result, a multidimensional root trait framework has begun to emerge [13][14][15][16][17].
Thus far, most studies have focused on assessing fine-root functional trait variation among species, following the assumption that among-species differences in root trait values are greater than those within species.
Fine-root Trait Response to Abiotic Factors
Responses of fine-root traits to abiotic factors were mixed, and these factors only explained around 10% of the variation in fine-root traits, as most of the variation occurred at small ecological scales (e.g., root branch, individual soil block; Figure 1). With the exception of SRL, most traits were responsive to at least one abiotic factor, yet the direction and strength of these responses were often dependent on root order (Figure 2; Tables 1 and S1). Root diameter, SRA, and RTD each responded significantly to different aspects of climate. However, the size of the effect of environmental factors was relatively small (Figure 2), and the proportion of the variance explained by the different climatic factors was low (all marginal R2 < 0.1; Table 1). Still, most first- and second-order root morphological traits were significantly related to MAT, with root diameter and RTD increasing with MAT, while SRA decreased with MAT (p < 0.05). In contrast to first- and second-order roots, the diameter of third-order roots decreased with mean annual precipitation (MAP) as well as cation exchange capacity (CEC), while SRA of third-order roots increased with soil available P. Root tissue density of third-order roots was unrelated to any abiotic factor (Figure 2). The C:N ratios of first- and third-order roots were most responsive to soil properties, with C:N of first- and third-order roots being positively related to soil C:N. The C:N ratio of third-order roots was also positively related to MAT, but it was negatively related to CEC and soil available P (Figure 2; Table 1). The positive relationship between branching intensity and soil available P was only marginally significant (marginal R2 = 0.02, p = 0.05; Table 1), while DBI (values range from 0, a fully dichotomous branching pattern, to 1, a fully herringbone branching pattern [30,31]) was not related to any of the environmental factors considered in this study.

Figure 1. Variance partitioning of functional traits of the first three fine-root orders of interior Douglas-fir at different hierarchically structured ecological scales (region, site, soil block and fine-root branch). Root C:N, root carbon-to-nitrogen ratio; Diameter (mm); RTD, root tissue density (mg cm−3); SRA, specific root area (cm2 g−1); SRL, specific root length (m g−1).

Figure 2. Effect of each environmental factor (Table 1) on a given fine-root trait in terms of its standardized effect size. MAP, mean annual precipitation (mm); MAT, mean annual temperature (°C); CEC, cation exchange capacity (cmol(+) kg−1); soil C:N, soil carbon-to-nitrogen ratio; soil avail. P, soil available phosphorus (ppm). Circles indicate average estimates and lines indicate 95% confidence intervals. Filled circles indicate a significant effect (p < 0.05) of a given environmental variable on a trait.
Ordination and Trait Correlation

Consistent among the three root orders, root trait variation did not fit into one dimension of the ordination (Figures 3 and S1). Root morphological traits (SRL, SRA, diameter, and, to a lesser extent, RTD) were well correlated with the first dimension, which explained c. 30% of the variation for the three root orders and represented a gradient from thinner, high-SRL roots to thicker, low-SRL roots. This gradient was present within regions, as the five regions considered were not well separated along the first axis. The second axis of variation accounted for c. 22% of the total variation and was best represented by chemical traits (root C and N; root C:N was not included to avoid redundancy). Regions were well separated across this axis, which represented a gradient from higher root N (Kamloops, Revelstoke) to lower root N (Nelson, Williams Lake). Root architectural traits were not well represented along any of the axes (scores < 0.15 for each root order), nor were they well captured by the third PC axis, which accounted for <20% of the variation in each root order (Figures 3 and S1).

Figure 3. Ordination plot of study regions (five in total) across a biogeographic gradient based on principal component analysis of interior Douglas-fir first-order root traits. C, root carbon concentration (%); N, root nitrogen concentration (%); SRA, specific root area (cm2 g−1); SRL, specific root length (m g−1); RTD, root tissue density (mg cm−3); BrIntensity, branching intensity (the number of first-order roots/length of second-order root; cm−1); DBI, dichotomous branching index, values closer to 0 indicate a dichotomous branching pattern and values closer to 1 a herringbone branching pattern, see [31]. PC, principal component, PC1 = 30%; PC2 = 22%, see Figure S1.

Consistent within each root order, the variation in RTD was negatively correlated with that of root diameter, which was weakly negatively correlated with the variation in SRL (Table S2). As expected from mathematical dependencies (i.e., the root volume and area depend on root diameter), variations in root diameter, SRA, and RTD were correlated. However, the variation in root C:N ratio was not correlated with the variation in morphological traits. Architectural traits were not correlated with any of the morphological traits.
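To make the ordination step concrete, a minimal R sketch of the analysis described above is given below; the data frame and its values are hypothetical stand-ins for the measured trait table.

```r
# PCA on standardized first-order root traits, as in the ordination above.
# 'traits' is a hypothetical data frame (one row per root branch).
set.seed(1)
traits <- data.frame(SRL = runif(60, 20, 60),     # invented values
                     SRA = runif(60, 200, 500),
                     Diameter = runif(60, 0.3, 0.6),
                     RTD = runif(60, 100, 300),
                     C = runif(60, 45, 52),
                     N = runif(60, 0.8, 2.0))

pca <- prcomp(traits, center = TRUE, scale. = TRUE)
summary(pca)         # variance explained per axis (cf. PC1 = 30%, PC2 = 22%)
pca$rotation[, 1:3]  # trait loadings on the first three axes
```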
Partition of Fine-root Trait Variance

More than half of the root morphological trait variance in Douglas-fir was expressed at small ecological scales (tree cluster and branch levels; Figure 1; Figure S2). In other words, differences among fine-root branches within individual soil blocks explained most of the variation in root diameter, RTD, SRL, and SRA. Consistent among morphological traits and root orders, the variation among soil blocks (within sites) accounted for, on average, 20% of the total variation. This pattern of high variance at small scales was even stronger for the architectural traits (branching intensity and DBI), which expressed over 90% of their variation among branches and tree cluster samplings (Figure S2). Alternatively, the proportion of the root trait variance at the cross-site level was negligible for root diameter, SRL and SRA (Figure 1). At the regional scale, intraspecific variation was the highest for root C:N and RTD. On average for the three orders, the regional scale accounted for roughly a quarter and 8% of the total variation in root C:N and RTD, respectively.
Discussion
Moderate but consistent trait-environment linkages occurred across populations of Douglas-fir that were distributed across climatic and edaphic gradients in western Canada, despite high levels of within-site variation. Our first hypothesis was partly confirmed, as MAT was the environmental variable that was most highly correlated with fine-root morphological traits. However, fine-root C:N was more responsive to soil properties (soil C:N, soil avail. P and CEC). Generally, fine roots in colder or drier climates were characterized by potentially higher acquisitive capacities (based on trade-offs among morphological and chemical traits), and variations in morphological and chemical traits represented two separate axes for fine-root adjustments. We rejected our second hypothesis as root trait variance was unevenly distributed across ecological scales, with over 50% of the variation in morphological traits occurring within individual soil blocks at a single site.
Fine-root Morphology
Across the biogeographic gradient, first- and second-order roots of Douglas-fir trees tended to increase in diameter and SRA but decrease in RTD with decreasing MAT. This result partly agrees with our first hypothesis because, with the exception of the increase in root diameter, these trait adjustments are expected to increase root resource acquisition potential under colder climatic conditions. The responses of Douglas-fir morphological and chemical traits to MAT were largely consistent with those of Scots pine absorptive fine roots reported by Zadworny et al. [22]. However, these results are in opposition to those reported by Ostonen et al. [3], despite similar temperature ranges in both previous studies. To adjust to colder conditions, fine roots may increase their potential for soil resource acquisition to compensate for more limited resource availability and a shorter window for growth and acquisition [14][15][16][28,32]. Alternatively, trees in colder environments may build fine roots with higher tissue protection and persistence (e.g., an increased number of phellem layers, increased phenolic compound content [22]). Our results provide evidence that in environments where temperature limits the availability of soil resources, fine roots increase their potential to acquire resources via morphological adjustments manifesting as a greater surface area of roots per unit biomass invested (i.e., higher SRA).
In contrast to increases in SRA and reduced RTD, the larger-diameter roots observed at lower MAT are generally associated with a more resource-conservative strategy. Plants with a conservative strategy are frequently expected to have fine roots with low SRL, large diameter, high RTD, low N concentration, low uptake capacity, low respiration rate and a long life span [12]. However, large-diameter roots could also be associated with increased associations with mycorrhizal fungi [7,33]. Therefore, the increase in Douglas-fir root diameter in our study may be associated with enhanced root absorptive capacity if it coincides with an increased association with mycorrhizal fungi or greater hyphal growth [34]. In our study system, we did not observe significant changes in the ectomycorrhizal colonization rate across the gradient, which averaged c. 95% for all the sites (data not shown), consistent with other studies in this region [35,36]. The constrained responses of the ectomycorrhizal colonization rate observed here may be because Douglas-fir has relatively thick fine roots, which are generally associated with higher, and sometimes more constant, levels of fungal colonization [27,37,38]. However, a single measure of colonization rate may be less relevant than ectomycorrhizal community composition in Douglas-fir forests [39] and may be more relevant in arbuscular mycorrhizal plant species [7,33]. In a complementary study across the same biogeographic gradient, we showed that root diameter was not related to patterns of ectomycorrhizal fungal exploration type (see [40]) and that fine roots with high RTD and low C:N were more frequently colonized by ectomycorrhizal fungi with short emanating hyphae [26]. Whether association with ectomycorrhizal fungi that proliferate short emanating hyphae could lead to increased acquisitive capacities of thick, large-diameter Douglas-fir fine roots requires further research. Additional assessments of fungal hyphal production rates and densities in soils are also needed to better assess the associations with, and the potential reliance of, Douglas-fir trees on their ectomycorrhizal partners across environmental gradients [34]. Finally, we cannot exclude the possibility that the trade-offs observed among morphological traits do not stem from an optimization of resource acquisition by either Douglas-fir trees or ectomycorrhizal fungi. For example, interspecific competition with tree neighbors, which was not quantified in our study, could have affected the fine-root functional traits of Douglas-fir trees.
Fine Root Chemistry and Architecture
The fine-root C:N ratio was most responsive to soil properties (soil C:N, soil available phosphorus and CEC). Fine-root N concentration increased with increasing soil pH and CEC and with a decreasing soil C:N ratio, which is potentially associated with greater nutrient uptake at the level of the individual fine root. The more fertile soils in our study area occurred in colder/drier regions, which likely limited the diffusion rate of soil resources. Therefore, high root N concentrations together with higher SRL, higher SRA, and lower RTD, which are generally associated with a shorter root lifespan, may represent a strategy to thrive where nutrient availability is heterogeneous and intermittent due to seasonality and soil freezing [4]. In the less fertile soils of our study area, the growth of Douglas-fir trees may not be limited by the low nutrient availability because of the high MAT and MAP. Accordingly, in these environments, we observed a more resource-conservative root strategy (higher RTD and C:N ratio and lower SRA). As suggested by Freschet et al. [6], in wetter environments with low nutrient availability, investments in higher branching intensity and/or mycorrhizal hyphae may be more beneficial for capturing available N prior to leaching than investing in high metabolism (higher root N concentration).
Our results do not provide strong evidence that Douglas-fir fine roots rely heavily on adjustments in architectural traits. The relatively low values and narrow range of variation of root branching intensity are consistent with those of other ectomycorrhizal gymnosperm species considered by Tobner et al. [41] and Liese et al. [28]. These low values could be related to the consistently high rate of ectomycorrhizal colonization across our gradient, which suggests that local proliferation of fungal hyphae instead of increased fine-root branching may be the primary pathway to facilitate greater proliferation and exploitation of the soil environment.
Intraspecific Root Trait Variation
Contrary to our second hypothesis, the highest proportion of root trait variation was not at the regional scale. Though aspects of fine-root trait variation were significantly related to abiotic factors across regions, morphological, architectural, and, to a lesser extent, chemical traits expressed the majority of their variation among root branches obtained from soil blocks within individual forest stands. Although our study was primarily designed to investigate fine-root trait-environment linkages at the regional scale, these findings demonstrate that processes at lower ecological scales are also important in determining root trait variation. It is not always feasible to intensively sample and quantify root trait variation at such small scales (i.e., within-plot or even within-tree/tree-cluster variation), but in light of this result, care should be taken when interpreting and extrapolating a single mean value for a stand-level functional trait or for an individual species [42]. Thus, while environmental filters operate on the overall distribution of trait values within a region, their effects are lessened by local variation among trees and root branches.
The high variation in root traits observed among branches within a single sampling location could be explained by differences in resource allocation or by differences in ectomycorrhizal symbiont identity. This may, in turn, affect carbon allocation to each root branch and the distinct morphology and chemistry expressed by individual roots [2,40]. For instance, the concentration of primary photosynthates in ectomycorrhizal root tips, such as starch, glucose, and non-structural carbohydrates, can change substantially among ectomycorrhizal symbionts [43]. Similarly, Pickles et al. [44] demonstrated that the distribution of many ectomycorrhizal individuals is often patchy. This leads to the possibility that different soil blocks from within the same site may be dominated by morphologically distinct ectomycorrhizas, contributing to the high variation in root traits at small spatial scales.
While our study design did not allow testing the relative contributions of genotypic vs. environmental variation, other work focused on aboveground traits in Douglas-fir as well as on root traits of Scots pine suggests a high degree of genetic control on plant functional traits [21,45,46]. In this case, individuals and local populations of Douglas-fir may be limited in their ability to adjust to local changes in climate through phenotypic plasticity, as root traits would primarily be under genetic control. However, the high degree of within-site variation observed here also indicates substantial within-population root trait variation, which may enable some acclimation.
Conclusions
Across regional gradients of climate and edaphic factors in western Canada, the majority of Douglas-fir fine-root trait variance occurred within sites. However, we also identified moderate but consistent trait-environment linkages across populations of Douglas-fir. Generally, colder/drier climates were characterized by fine roots with a lower RTD, higher SRA, larger diameter, and lower C:N ratio. We also provided evidence for decoupled variations in fine-root morphological and chemical traits. These findings highlight the existence of multiple axes of within-species fine-root adjustment that were consistent with an increasing acquisitive potential of fine roots in harsher environments. The substantial within-population root trait variation may then enable further acclimation at the stand level. As predicted changes in climate will likely impact belowground processes with important outcomes for tree persistence and resilience, further work connecting root traits and environmental variation will be particularly important to ensure that well-adapted tree populations are regenerated.
Biogeographic Gradient
A biogeographic gradient, including five study regions ranging in latitude from 49.6 to 51.7° N, was located within the natural range of Douglas-fir in British Columbia (Figure 4). Two regions (Kamloops and Williams Lake) were in the Interior Douglas-fir biogeoclimatic zone (IDF) and three regions (Salmon Arm, Nelson, and Revelstoke) were in the Interior Cedar-Hemlock biogeoclimatic zone (ICH) [47]. Regions were distributed along substantial precipitation and temperature gradients (see Table 1 in [26]; Figure S3). Sites in Williams Lake had the lowest MAT (on average 3.4 °C), followed by Revelstoke, Kamloops, Salmon Arm and Nelson (on average 7.3 °C). The driest region was Kamloops (average MAP 441 mm), and the wettest region was Revelstoke (average MAP 1200 mm). Unlike other large environmental gradients that often correspond to wide ranges in latitude and day length (e.g., [3]), the limited latitudinal range encompassed here corresponds to minimal differences in day length among the study regions.
In each region, three replicated study sites separated by at least 400 m were selected in naturally regenerated, mature, closed-canopy forest stands on ecosystems that best reflect the regional climate (namely, zonal ecosystems [47]). The average stand age at each region ranged from 98 years old (Revelstoke) to 143 years old (Salmon Arm; Table S3). The northern-most stands in the IDF zone were growing on Luvisolic soils, and the southern-most stands in the ICH zone occurred predominantly on Brunisolic soils (Table S3; [48,49]). Soils in the southern-most regions (Nelson, Revelstoke and Salmon Arm) were N-limited, but the mineral soil available P concentration was up to five times greater than that in the northern regions. The mineral soils in Revelstoke and Nelson were also characterized by low CEC and low soil pH compared to those in the northern regions (see Table 1 in [26]). In the semi-arid regions of interior British Columbia, Douglas-fir occurs in pure stands (in the IDF), while in the wetter regions, Douglas-fir trees grow in mixed species forests (in the ICH; [44]). Consequently, six sites contained pure Douglas-fir forest stands, and nine sites had mixed stands (Table S3). The proportion of the Douglas-fir by basal area ranged from 49% in the mixed, evenly-aged forest stands of Salmon Arm to 100% in the pure, unevenly-aged forest stands of Kamloops (Table S3).
Fine-Root Sampling and Processing

We used a nested strategy for sampling fine roots across ecological scales. A single sample plot of 30 m × 30 m containing at least ten dominant or co-dominant Douglas-fir trees was established at each site in summer 2016. We selected five healthy Douglas-fir trees per plot in a manner that avoided clumping of the sampling locations. For each selected tree, a coarse root originating from the target tree was identified and traced 200 cm out from the trunk. At that point, a single soil block (20 cm × 20 cm × 20 cm) was extracted as closely as possible to the coarse root. Soil samples were collected by hammering a steel frame into the soil, and the block was extracted using a flat shovel. In addition, one organic (L, F, and H layers) and one mineral soil (upper mineral horizons A and B, from the bottom of the organic layer to 10 cm depth) sample were collected using a trowel near the location of the soil block. Collected soil blocks and soil samples were stored individually in plastic bags, transported on ice to the laboratory within 1 to 4 days, and stored at 4 °C until processing (up to three months) to avoid the alteration of fine-root traits that can occur with freezing. A total of 73 sample sets were collected in this study (5 regions, 3 sites per region, 5 soil blocks per site, but only 3 blocks could be collected at Nelson site N2).

To extract fine roots, each soil block was soaked in water overnight and washed over a 4 mm screen. All fine-root branches (diameter < 2 mm) and fragments >3 cm in length were recovered from the sieve and sorted by tree species. To do this, we developed a morphological key from root samples of known species identity collected from our study sites (Figure S4). The proportion of roots belonging to other tree species was not estimated. Douglas-fir roots that were turgescent with visible, intact periderm and that had (if present) colorful, swollen ectomycorrhizal tips were considered live roots. For trait measurements, we selected intact root branches containing at least three root orders, live ectomycorrhizal tips, and minimal breakage. Whenever possible, selected branches were carefully cleaned with a soft brush and tweezers and analyzed immediately after extraction. Otherwise, branches were stored in a plastic bag with a damp paper towel (changed daily) at 4 °C for no more than 3 days until further analysis.
Fine-Root Trait Measurements
We selected five live, intact Douglas-fir fine-root branches from each of the 73 soil blocks. A total of 365 root branches were scanned on a desktop scanner (400 dpi, 165-level gray scale; EPSON Perfection V800 Photo, STD 4800) and analyzed with WinRHIZO Pro 2016 software (Regent Instruments Inc., Quebec City, Canada). Branches were analyzed for topology (magnitude, altitude, and external path length). We acknowledge the possible limitations of scanning roots at 400 dpi for root length measurements, particularly for very fine roots [50]. However, in our study system, this resolution was a good trade-off between speed and accuracy, as we avoided scanning overlapping roots (i.e., root length density < 1 cm cm−2). Furthermore, our scans had good contrast between the roots and the background, as Douglas-fir is a relatively thick-rooted tree species (first-order root diameter > 0.40 mm).
Following the initial scans of intact branches, each branch was divided into individual root orders using a scalpel under a stereomicroscope, following the morphometric classification approach [51]. In our system, typical first-order roots either comprised ectomycorrhizal tips or displayed unbranched and uncolonized root tips. In all cases, we avoided thicker, longer pioneer roots [52]. Each root-order group was scanned separately and analyzed for morphology (total length, total surface area, average diameter, and total volume). For the measurement of root volume and area, we used the total value rather than the sum of values provided for each diameter class [53] because, in our study system, these two methods of estimation led to similar results (R2 = 0.99 for length and R2 = 0.97 for volume; data not shown). Root orders were then stored in envelopes, dried at 65 °C for 48 h, and weighed. For each root-order group, SRL (m g−1) was calculated as the ratio of root length to root dry mass; SRA (cm2 g−1) as the ratio of root surface area to dry mass; and RTD (mg cm−3) as the ratio of root dry mass to root volume. To determine the C and N concentrations (%; Thermo Scientific Flash 2000 NC analyzer) in each of the three root orders, we randomly selected samples for each of the 15 sites as follows: two soil blocks were randomly selected out of the five originally sampled per site, and two root branches were randomly selected out of the five originally sampled per block, for a total of 180 root samples. The number of first-order roots on each branch was determined with the ImageJ software (National Institutes of Health, USA). Root branching intensity was calculated as the number of first-order roots per length of second-order roots. The dichotomous branching index was calculated from the external path length (Pe), defined as the sum of the number of root segments from each most distal root segment to the most basal root segment (i.e., third-order roots).
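As a small illustration of these per-root-order calculations, the following R sketch mirrors the definitions above; the function and argument names are hypothetical, and the input values are invented.

```r
# Morphological traits per root-order group, following the definitions above.
# Inputs follow WinRHIZO-style outputs; units as given in the text.
root_traits <- function(length_m, area_cm2, volume_cm3, dry_mass_g) {
  data.frame(
    SRL = length_m / dry_mass_g,            # specific root length (m g-1)
    SRA = area_cm2 / dry_mass_g,            # specific root area (cm2 g-1)
    RTD = (dry_mass_g * 1000) / volume_cm3  # root tissue density (mg cm-3)
  )
}

# Branching intensity: first-order roots per length of second-order root (cm-1)
branching_intensity <- function(n_first_order, second_order_length_cm) {
  n_first_order / second_order_length_cm
}

root_traits(length_m = 0.85, area_cm2 = 6.2, volume_cm3 = 0.05, dry_mass_g = 0.012)
```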
Climate and Soil Data
We obtained long-term averages for climatic variables over the period 1981-2010 from ClimateNA (http://www.climatewna.com/; [54]). To obtain soil properties, organic and mineral soil samples were air-dried and sieved to 2 mm. Samples were then sent to the analytical laboratory of the B.C. Ministry of Environment (Victoria, BC, Canada). Total soil C and N concentrations (%) were measured using a combustion elemental analyzer (Thermo Scientific Flash 2000 NC analyzer). For available P (PO4-P; orthophosphate as phosphorus), samples were prepared with the Bray P-1 method (dilute acid fluoride [55]) and analyzed with a UV/visible spectrophotometer (Agilent Cary 60). To estimate the effective cation exchange capacity (CEC), cations were extracted from the soil samples with 0.1 M barium chloride [56] and analyzed with an ICP-OES spectrometer (Teledyne Leeman, Prodigy Dual View).
Data Analyses
Statistical analyses were conducted in R version 3.5.1 [57], and results were considered statistically significant at p < 0.05. To test whether fine-root functional traits related to the environment at the intraspecific level, we fitted linear mixed-effects models (LMMs). For each root order, we separately considered SRL, SRA, RTD, root diameter, and root C:N as response variables. For the third-order root C:N ratio, we used a linear model (multiple linear regression) instead, as the random effects were not significant. Models for branching intensity and DBI were fitted considering the whole absorptive root branch (as opposed to considering each root order separately). Before analyses, all response variables were log10-transformed to meet the assumptions of the LMMs. Data points that were >3 standard deviations from the region median were considered statistical outliers and removed. This represented <2% of the data points for each trait and did not change the outcome of the LMMs. Climate (i.e., MAT and MAP) and soil variables (i.e., C:N ratio, available P, CEC) were added as fixed factors, while region, site, and tree were added as nested random factors. We also considered stand age, Douglas-fir basal area, soil type, and stand composition (mixed vs. pure) as fixed factors; however, to avoid over-parametrization and multicollinearity among predictors, these variables were removed from the LMMs as they all had a variance inflation factor >3 [58]. To fit LMMs, we used the 'lmer' function from the package lme4 [59]. Global LMMs had the form: log10(fine-root trait) ~ MAP + MAT + CEC + soil C:N + soil avail. P + (1 | region/site/tree).
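A minimal R sketch of this global model, using the lme4 package named above; the data frame 'dat' and its column names are hypothetical placeholders for the assembled trait table.

```r
library(lme4)

# Global LMM for one trait (here SRL), with climate and soil fixed effects
# and region/site/tree as nested random intercepts, as described above.
m <- lmer(log10(SRL) ~ MAP + MAT + CEC + soil_CN + avail_P +
            (1 | region/site/tree),
          data = dat)
summary(m)  # fixed-effect estimates; model reduction then follows the text
```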
We used the function 'step' from the lmerTest package to eliminate non-significant effects of LMMs based on the Akaike information criterion [60]. We checked LMMs adequacy with the 'plot_model' function from the sjPlot package [61] and LMMs fit using the conditional R 2 (variance explained by the entire model, including both fixed and random effects) and marginal R 2 (variance explained by the fixed effects) values, estimated following Nakagawa and Schielzeth [62]. We tested LMM significance with the log likelihood ratio and the significance of fixed effects with a type II Wald χ 2 test. Standardized beta coefficients, and their 95% confidence intervals were extracted with the 'beta.coef ' function from the sjstats package. Pairwise trait relationships were assessed using Spearman's rank-order correlation because the assumption of pairwise linear relationships between variables of the Pearson product-moment correlation was violated. Trait coordination was explored using Principal Component Analysis (PCA). We acknowledge the mathematical dependence among root morphological traits and discuss the results accordingly. To quantify fine-root trait variation within sites, among sites, and across regions along the biogeographic gradient, we performed a variance partitioning analysis. For each root trait (i.e., SRL, SRA, RTD, root diameter, root C:N, Branching intensity, and DBI), we fitted linear models (ANOVA) with nested random effects in this order: region, site, tree. To partition variance among these hierarchically structured ecological scales, we used the function 'varcomp' from the package ape [60].
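For the variance partitioning step, ape::varcomp operates on models fitted with nlme; a minimal sketch under that assumption, again with the hypothetical data frame 'dat':

```r
library(nlme)
library(ape)

# Intercept-only model with nested random effects (region/site/tree);
# varcomp() then returns the variance attributable to each nested scale.
vp_model <- lme(log10(SRL) ~ 1, random = ~ 1 | region/site/tree, data = dat)
varcomp(vp_model, scale = TRUE)  # proportions summing to 1 across scales
```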
Supplementary Materials: The following are available online at http://www.mdpi.com/2223-7747/8/7/199/s1. Figure S1: Ordination plot and associated scores of samples across a biogeographic gradient based on principal component analysis of fine root traits of second-order and third-order roots of interior Douglas-fir. Figure S2: Distribution of branching intensity and dichotomous branching index values across a biogeographic gradient and variance partitioning of architectural traits at different hierarchically structured ecological scales. Figure S3: Ordination plot of study regions (five in total) across a biogeographic gradient based on principal component analysis of climatic, edaphic and site variables. Figure S4: Description of morphological attributes of fine roots for three coniferous tree species encountered in this study. Table S1: Means and standard error (SE) of fine-root morphological, chemical and architectural traits of interior Douglas-fir across a biogeographic gradient made of five study locations. Table S2: Spearman's correlation coefficient for pairwise root order (first three root orders) relationships. Table S3: Stand properties of the 15 study sites selected across a biogeographic gradient in Western Canada.
Sonochemical synthesis of aluminium and aluminium hybrids for remediation of toxic metals
Highlights
• Aluminium oxide exhibits unique properties such as porosity and high surface area.
• Aluminium hybrids are synthesized using a sonochemical-assisted sol-gel method.
• The sonochemical method emerges as an environmentally benign route due to lower energy consumption and shorter reaction times.
• The surface area of the aluminium hybrids increases up to 167 m2/g.
• Adsorption batch experiments showed optimum adsorption of lead (63%) and mercury (67%).
Introduction
The excessive release of untreated industrial effluents into water bodies drastically impacts the quality of drinking water. People rely on consuming contaminated water and suffer from water-borne diseases worldwide. According to statistics of the World Wide Fund (2005) and the Pakistan Council of Research in Water Resources (2010), only 25.61% (23.5% rural and 30% urban) of the Pakistani population has access to safe, drinkable water [1]. Furthermore, water-borne diseases are responsible for one-third of all deaths and cause an income loss of 25-58 billion rupees, i.e., 0.6-1.44% of Pakistan's GDP. To address these alarming issues, different organizations such as the Pakistan Council of Research in Water Resources (PCRWR), the Pakistan Standards and Quality Control Authority (PSQCA), the Ministry of Climate Change, the Ministry of Health and the Environmental Protection Agency are working effectively and have set standards for safe drinking water [2]. Existing wastewater treatment methods, including chemical precipitation, coagulation, ion exchange and electrochemical deposition, have high operational costs and high energy consumption, and produce toxic sludge that requires further treatment. Adsorption has emerged as an alternative method, as it is easier to operate, consumes less energy, has a low operational cost, and offers a variety of adsorbents that can be regenerated and reused multiple times [3]. Considering these advantages, scientists have widely studied the adsorption method as a tool for the remediation of noxious metals from industrial effluents. Aluminium oxide has emerged as a fascinating, versatile material [4] with intrinsic properties such as polymorphism, chemical and mechanical stability, low thermal conductivity, porosity, a high surface-to-volume ratio and an amphoteric nature [5]. These diverse properties of aluminium oxide provide enough room for various applications, such as adsorbent, catalyst, filler and electric insulator. Aluminium oxide can be synthesized by diverse methods such as precipitation [6], sol-gel [7], hydrothermal [8], solvothermal [4] and direct thermal treatment [9]. However, limitations of these methods, such as long reaction times, uncontrolled particle size, high-temperature requirements, and the use of expensive and toxic organic solvents and salts [10], place great demand on new synthetic methods that are economical and efficient. The sonochemical method has emerged as a facile, efficient and environmentally benign route due to its low energy consumption and short reaction time, and as an option for tuning the desired shape- and size-controlled properties [11]. The main working principle behind sonication is the generation of active radicals through the formation, growth and implosion of acoustic cavitation bubbles. Suslick [12] reported that the collapse of acoustic cavitation bubbles produces intense local high temperatures (~5000 K) and high pressures (~1000 atm) with enormous heating and cooling rates (>10^9 K/s) and liquid jet streams (~400 km/h). These conditions favor initiating chemical reactions within the respective media. Previously, it has been observed that sonication works efficiently for the synthesis of metal oxides and metal hybrids. Luévano-Hipólito and Torres-Martínez [13] synthesized zinc oxide using a sol-gel method assisted with sonication (26 kHz; 150 W) from 0 to 60 min to test photocatalytic hydrogen production. The result showed 10^7 μmol g−1 h−1 of hydrogen production.
Hassanjani-Roshan et al. [14] synthesized iron oxide from FeCl3·6H2O and NaOH. High-energy sonication waves (20 kHz) were applied for 1 h, which yielded spherical particles with crystallite sizes between ~5 and ~7.5 nm. Cui et al. [15] synthesized graphene oxide-wrapped gold nanoparticles using the sonochemical method for the photocatalysis of rhodamine B. The graphene oxide-wrapped gold nanoparticles were sonicated (200 W) for 1 h; they showed good photocatalytic activity under visible light and decomposed rhodamine B in 2 h. Lee et al. [16] synthesized a copper-doped bismuth vanadate/graphitic carbon nitride nanocomposite using a sonochemical method. The reaction mixture was sonicated (700 W, 20 kHz) for 0.5-1 h, and the synthesized nanocomposite was used as a photocatalyst for the degradation of Bisphenol A. The results showed that the nanocomposite improved electron/hole pair separation, stability and light-harvesting efficiency in comparison to pristine bismuth vanadate or graphitic carbon nitride, and it completely degraded Bisphenol A after 90 min. Similarly, Dezfuli et al. [17] synthesized a ceria-reduced graphene oxide nanocomposite after sonication (400 W, 24 kHz) for 88 min. These results showed that the cerium oxide nanoparticles were anchored on graphene oxide, and the synthesized nanoparticles were applied as an electro-catalyst. Table 1 shows previously reported literature on aluminium oxide and its composites with different sonication treatments.
Hence, the literature review showed that sonication treatment has proved to be a very effective method for the synthesis of different metal oxides and metal composites. Therefore, the present research was designed to synthesize aluminium oxide and aluminium hybrids using sol-gel and post-grafting methods assisted by sonochemistry. To the best of our knowledge, the present study is the first attempt to synthesize aluminium hybrids using indole and its derivatives and to apply them as adsorbents for metal remediation. The indole group is an important class of heterocyclic compounds due to its unique structure. It has π-electrons and a lone pair at the C2, C3 and N1 positions, which provide reactive sites for electrophilic and nucleophilic substitution. It is less toxic, thermally stable, and present in the structure of many natural products such as amino acids and auxins. It also exhibits metal-binding properties. For this purpose, aluminium nitrate nonahydrate and ammonia solution were used as the starting materials for the synthesis of aluminium oxide by employing a sonochemically assisted sol-gel method. Then a 1:2 (w/w) ratio of aluminium oxide and indole group was hybridized using a post-grafting method with the aid of sonication. The effects of the sonication treatment on the internal and external structural properties of the aluminium hybrids were recorded and compared with the non-sonicated aluminium hybrids.
Synthesis of aluminium oxide
Aluminium oxide was synthesized using a sol-gel-assisted sonochemical method [19]. For this purpose, a 0.1 M aluminium nitrate nonahydrate solution was prepared in Milli-Q water, and a 10% ammonia solution was added dropwise until the pH was adjusted to 8. The solution was then ultra-sonicated using a Branson Digital Sonifier S-250D (13 mm tip diameter, 20 kHz, 40% amplitude) for 1 h at room temperature, with the horn tip immersed 1 cm into the solution. After sonication, the solution was aged, centrifuged (6500 rpm for 20 min) to separate the solid product, washed, dried and calcined at 500 °C with a ramp rate of 10 °C/min for 4 h. The power dissipation at 40% amplitude in Milli-Q water was optimized using the Weissler method (see supplementary file). The product obtained was coded as S-Al. Similarly, aluminium oxide was also synthesized using the sol-gel method without sonication. The solution was left undisturbed (24 h) for Ostwald ripening, centrifuged (6500 rpm for 20 min) to separate the solid product, dried and calcined at 500 °C with a ramp rate of 10 °C/min for 4 h. The product obtained was coded as NS-Al. Reactions 1 and 2 illustrate the chemical reactions involved in the formation of aluminium oxide.
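For a sol-gel route from aluminium nitrate and ammonia, reactions 1 and 2 presumably correspond to the standard precipitation and calcination steps, sketched here for orientation; the exact intermediate (hydroxide versus oxyhydroxide) is an assumption:

Al(NO3)3 + 3 NH4OH → Al(OH)3 + 3 NH4NO3 (reaction 1, presumed: precipitation)
2 Al(OH)3 → Al2O3 + 3 H2O (reaction 2, presumed: dehydration on calcination at 500 °C)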
Synthesis of aluminium hybrids
Aluminium hybrids were synthesized using a post-grafting method as reported earlier [26,27]. Briefly, as-synthesized S-Al2O3 was charged with a 10% solution of (3-aminopropyl)triethoxysilane (APTES) in isopropanol, as shown in reaction 3. The solution was sonicated as described for aluminium oxide, then filtered, washed repeatedly with isopropanol and ethanol, centrifuged, dried and stored until further use. APTES was used as a bridging agent for effective chemical integration between aluminium oxide and indole and its derivatives.

Table 1. Aluminium and its different composites reported previously.
Metal composite | Sonication treatment | Application | Reference
Aluminium sphere loaded with palladium | 20 kHz; 100 W | Catalyst | Gaudino et al.
An amount (0.15 g) of the amine-functionalized aluminium oxide was dispersed in 20 mL of dichloromethane in a reagent flask for 15 min, filtered and dried. Then the indole solution (0.30 g in 20 mL acetonitrile) was added to it and sonicated for 1 h (as described earlier). The 40% amplitude in dichloromethane was selected as the setting at which power dissipation to the solution, and hence radical generation, was highest (see supplementary file). The synthesized aluminium hybrid was filtered and dried in air. The same procedure was repeated for two other indole derivatives (indole-2-carboxylic acid and 2-methylindole), as shown in reaction 4. A total of three hybrid products were obtained, coded as S-AlIN, S-AlCI and S-AlMI. In the same way, aluminium hybrids were also synthesized using the post-grafting method without sonication and coded as NS-AlIN, NS-AlCI and NS-AlMI.
Instrumentation
Different analytical instruments (FTIR, SEM, BET, XRD and XPS) were employed to characterize the synthesized products. The standard KBr pellet method was used to record FTIR spectra (averaged over 15 scans) at a resolution of 4 cm−1 from 4000 to 400 cm−1, using a clean cell window for background and air as reference. Scanning electron microscopy (Quanta 200, FEI) was operated at 10 kV and a working distance of 10 mm. Samples were placed on a thin film of carbon tape mounted on the stub and gold-sputtered to prevent charging; an air pulse was applied to remove excess and loose sample before placing the stub inside a vacuum chamber under argon. XRD data were collected from 5° to 80° with a step size of 0.02° using a Bruker D8 X-ray diffractometer. Nitrogen adsorption-desorption measurements were performed at 77 K over a relative pressure range of 0.01-0.995; each sample was degassed at 423 K for 24 h on the vacuum line, and the pore size distribution was calculated from the Barrett-Joyner-Halenda (BJH) model. X-ray photoelectron spectroscopy (Kratos Axis ULTRA; Thermo Scientific), equipped with a 165 mm hemispherical electron energy analyzer and monochromatic Al X-rays (1486 eV at 150 W), was used. The chamber and sample were kept under 1 × 10−9 torr and 1 × 10−8 torr, respectively. The survey scan was taken at 160 eV over 1200-0 eV with 1000 meV steps and a dwell time of 100 ms. Curve fitting and deconvolution were performed using CasaXPS 2.3.15 software.
Batch adsorption experiments
Time-dependent batch experiments were designed for 60 min to test the adsorption of selected toxic metals (Pb and Hg) as a function of pH (5, acidic; 7, neutral; 9, basic) and concentration (30, 40 and 50 mg/L for Pb; 30, 40 and 50 µg/L for Hg). For each batch, a known concentration of synthetic adsorbate solution was pipetted into eight separate vials containing a known amount (30 mg) of the adsorbent. After a contact time of five minutes between adsorbent and adsorbate, the solution was centrifuged and the supernatant was analyzed by flame atomic absorption spectrophotometry (Varian Spectra AA 220). The adsorbed concentration per unit mass of the adsorbent (qe) was calculated using Eq. (1), and the adsorption efficiency (%A) of the sonicated aluminium oxide and its hybrids as adsorbents was calculated using Eq. (2):

qe = (Ci − Ce) × v / w (1)
%A = 100 × (Ci − Cf) / Ci (2)

where Ci, Ce and Cf are the initial, equilibrium and final concentrations of the adsorbate, respectively; w is the weight of the adsorbent (mg) and v is the volume (mL) of the adsorbate.
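A minimal R sketch of these two calculations follows; the numerical values are invented for illustration, and units follow the definitions above.

```r
# Eq. (1): amount adsorbed per unit mass of adsorbent
qe_calc <- function(Ci, Ce, v, w) (Ci - Ce) * v / w

# Eq. (2): adsorption (removal) efficiency in percent
pctA <- function(Ci, Cf) 100 * (Ci - Cf) / Ci

qe_calc(Ci = 40, Ce = 14.8, v = 10, w = 30)  # invented example values
pctA(Ci = 40, Cf = 14.8)                     # 63%, the order of the reported Pb optimum
```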
Adsorption kinetics and isotherms
Linearized equations (see Table 2) of adsorption kinetics, namely pseudo-first-order, pseudo-second-order and intra-particle diffusion, were applied to the adsorption data to determine the adsorption rate involved in the removal of lead (Pb) and mercury (Hg). Furthermore, the Langmuir and Freundlich adsorption isotherms were also applied to understand the adsorption mechanism [28]. The fit of the experimental data was assessed based on the adsorption capacity (qe) and regression coefficient (R2) values.
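As a sketch of this fitting step, the following R code fits the standard linearized form of the pseudo-second-order model and the Freundlich isotherm named above; all numerical values are hypothetical.

```r
# Pseudo-second-order kinetics (standard linear form): t/qt = 1/(k2*qe^2) + t/qe
t_min <- c(5, 10, 20, 30, 45, 60)          # contact time (min), invented
qt    <- c(3.1, 5.0, 6.9, 7.6, 8.1, 8.3)   # uptake at time t, invented

pso    <- lm(I(t_min / qt) ~ t_min)
qe_fit <- 1 / coef(pso)["t_min"]                           # slope = 1/qe
k2_fit <- coef(pso)["t_min"]^2 / coef(pso)["(Intercept)"]  # intercept = 1/(k2*qe^2)

# Freundlich isotherm: log(qe) = log(KF) + (1/n)*log(Ce)
Ce  <- c(5, 10, 18, 25, 32)                # equilibrium concentration, invented
qeq <- c(2.1, 3.4, 4.8, 5.6, 6.3)
fre <- lm(log10(qeq) ~ log10(Ce))          # intercept = log KF, slope = 1/n
summary(fre)$r.squared                     # goodness of fit judged via R^2
```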
Table 2. Linearized equations of adsorption kinetics and isotherms.
Langmuir: 1/qe = 1/qm + 1/(KL·qm·Ce); plotted as 1/qe versus 1/Ce.
Freundlich: log qe = log KF + (1/n) log Ce; plotted as log qe versus log Ce. KF (intercept) is the Freundlich capacity; n (slope) is the Freundlich intensity.

Results and Discussion

The present study is an endeavor towards the synthesis of aluminium oxide and aluminium hybrids using sol-gel and post-grafting methods assisted by sonochemistry (20 kHz, 40% amplitude). Sonication power is utilized for the sonolysis of the aqueous and non-aqueous solvents, which results in radical formation. Sonication of an aqueous solution causes the growth of existing bubbles towards a resonance size range. When these cavitation bubbles implode, they generate extremely high temperatures and pressures in microscopic regions (hot spots), accompanied by the production of primary and secondary radicals (hydrogen atoms and hydroxyl radicals). These radicals can be used to initiate the chemical reaction. In the present study, hydrogen peroxide (formed by the reaction between OH radicals) and the hydroxyl radicals generated by the sonolysis of water aid in the formation of aluminium oxyhydroxide (see reactions 1 and 2). On the other hand, the sonolysis of non-aqueous solvents (dichloromethane) generates radicals, such as methyl and methylene chloride radicals, which help in the abstraction of hydrogen from the amine-functionalized aluminium oxide. The abstraction of hydrogen creates binding sites for the incoming heterocyclic compounds (indole and its derivatives); thus, the aluminium hybrids are synthesized (see reaction 4). For comparison, the aluminium oxide and aluminium hybrids were also synthesized using the sol-gel method without sonication. When we compared the product yield and reaction time of the sonicated and non-sonicated reactions, the sonicated reaction showed a better yield in less reaction time (see Table 3). The sonication energy has high power, which mediates the chemical reaction faster (due to the consumption of the available radicals) and produces a good yield; it thus reduces the reaction time from days to hours. The non-sonicated method is a slow process and took a day for Ostwald ripening. Hence, the sonochemically assisted sol-gel method proved to be an efficient route.
FTIR
FTIR spectra of the sonicated aluminium oxide and aluminium hybrids showed distinctive absorption bands from 4000 to 400 cm−1 (see Fig. 1). The broad absorption band at 3460 cm−1 is assigned to the stretching vibration of the hydroxyl group attached to aluminium (Al-OH) [29], in agreement with [30] and Gomez et al. [31]. It is noticed that the benzene ring (C-C, C=C) of indole was not affected by hybrid synthesis [32,33], and that the synthesis of the hybrids occurred via C2 and C3 of the pyrrole ring of indole due to their high electron density. Furthermore, methyl and
SEM
SEM images of the sonicated aluminium oxide showed spherical-shaped clusters (see Fig. 2(a)) in comparison to the non-sonicated aluminium oxide. It is assumed that the spherical shape of the particles develops from the intense physical stress imposed by the implosion of acoustic cavitation bubbles. These clusters become interconnected with each other after hybrid synthesis and form a porous structure with different particle sizes (see Fig. 2(c, e and g)). This kind of interconnection was also observed by Krishnan et al. [34]. Furthermore, the indole derivatives provide an additional reactive point for aluminium oxide, resulting in the reorientation of particles and the formation of clumps [35]. The reorientation of particles creates voids that facilitate the adsorption process. In contrast, the non-sonicated aluminium hybrids showed fused and aggregated particles (see Fig. 2(b, d, f and h)). The fusion of particles covered the interstitial spaces and blocked the passages, so that only surface attachment was possible. The lower incorporation of indole groups into the non-sonicated aluminium oxide is duly supported by the FTIR results.
BET:
The BET data elucidate the specific surface area, while the physical adsorption-desorption of nitrogen demonstrates the pore size distribution of the particles. From Table 4 it is observed that the sonicated aluminium oxide and aluminium hybrids show a high surface area to volume ratio in comparison to the non-sonicated aluminium oxide. The high surface area reflects the presence of interstitial spaces between the particles, created by the implosion of acoustic cavitation [36,37]. The increase in surface area confirmed the attachment of the 2-methylindole (142 m²/g), indole (115 m²/g) and indole-2-carboxylic acid (167 m²/g) groups. It is assumed that the attached indole groups provide additional surface area that facilitates the adsorption process. A similar trend was also observed by other researchers [38,39]. The nitrogen adsorption-desorption isotherms were of type III for the non-sonicated and type IV for the sonicated aluminium hybrids. The BJH hysteresis loops were of type H3, the signature of mesoporous particles (IUPAC classification, 1985). The type III isotherm indicates a multilayered, mesoporous structure that facilitates only physisorption of lead and mercury, whereas the type IV isotherm indicates a multilayered structure likely to support both physisorption and chemisorption via capillary condensation within and outside the pores (see Fig. 3). From the above FTIR, SEM and BET results, it is confirmed that the aluminium oxide and aluminium hybrids synthesized by the sol-gel assisted sonochemical method show better properties. Therefore, XRD and XPS were performed only for the sonicated aluminium oxide and aluminium hybrids (see supplementary file).
Application of sonicated and non-sonicated aluminium oxide and aluminium hybrids:
The potential applications of the sonicated and non-sonicated aluminium oxide and aluminium hybrids were investigated as adsorbents for the removal of toxic metals (lead and mercury). Time-dependent batch adsorption experiments were conducted using a known amount of adsorbent (30 mg) as a function of pH (5, acidic; 7, neutral; 9, basic) and concentration (30, 40 and 50 mg/L for lead; 30, 40 and 50 µg/L for mercury) until equilibrium was attained.
Influence of initial concentration:
The influence of initial concentration on the adsorption of lead (Pb) and mercury (Hg) was studied and is presented in Figs. 4 and 5. Adsorption of lead and mercury increases with time because initially the adsorbent sites are vacant and the concentration gradient is high [40]. As the adsorbent sites fill, the adsorption process levels off and stops. Increasing the initial concentration from 30 mg/L to 40 mg/L for lead, or from 30 µg/L to 40 µg/L for mercury, increases the metal ion adsorption because space is still available on the adsorbent surface. A further increase in initial concentration (50 mg/L for lead and 50 µg/L for mercury) decreases the adsorption, because the fixed number of adsorbent sites is already saturated (see Fig. 4). Thus, the maximum adsorption is found at 40 mg/L for lead and 40 µg/L for mercury, and these concentrations were selected for further investigation at varying pH. Equilibrium was attained within 60 min. A similar adsorption trend is observed for lead and mercury on the non-sonicated aluminium oxide and aluminium hybrids (see Fig. 5).

Table 6. Parameters of the intra-particle diffusion model on the adsorption data of mercury and lead at 40 mg/L and pH 7.

Table 7. Parameters of the adsorption isotherms fitted to the adsorption data of mercury and lead at 40 mg/L and pH 7.
Influence of pH:
The influence of pH on the adsorption of lead (Pb) and mercury (Hg) was studied and is presented in Figs. 6-13. Adsorption of lead and mercury is generally higher at pH 7 than at pH 5 (acidic) and pH 9 (alkaline). The low adsorption of Pb and Hg at pH 5 is due to hydronium ions (H3O+) competing with the lead or mercury ions for the adsorbent sites. At pH 7 the hydronium ion concentration decreases, offering more adsorbent sites to lead and mercury. The decline in adsorption at pH 9 is associated with precipitation of the metal ions as hydroxides in solution. Thus the pH variation follows the general trend pH 7 > pH 5 > pH 9. Similar results have been reported previously [41]. Different adsorbents show different behavior toward different metals, depending on their surface chemistry. The sonicated aluminium oxide showed 37% and 33% adsorption of mercury and lead, while the non-sonicated aluminium oxide showed 26% and 24% adsorption of mercury and lead (see Fig. 6). The sonicated aluminium oxide thus shows relatively better adsorption of the metals, owing to its higher surface area to volume ratio compared with the non-sonicated aluminium oxide. Fig. 7 shows that the sonicated aluminium-indole hybrid adsorbed 55% of mercury and 43% of lead at pH 7, compared with 49% and 34% of mercury and lead, respectively, for the non-sonicated aluminium-indole hybrid. A similar trend is observed for the other derived hybrids, i.e., aluminium-carboxylic indole and aluminium-methyl indole (see Figs. 8 and 9). The sonicated aluminium-carboxylic indole showed 60% and 57% removal of mercury and lead, while the sonicated aluminium-methyl indole showed 67% and 63% removal of mercury and lead. The non-sonicated aluminium-carboxylic indole showed 57% and 50% removal of mercury and lead, while the non-sonicated aluminium-methyl indole showed 58% and 46% removal of mercury and lead. It is also worth noting that the aluminium hybrids showed better adsorption potential towards mercury and lead than aluminium oxide, due to the synergetic effect of the organic-inorganic moieties [42]. It is very interesting to note that the adsorption of mercury is relatively higher than that of lead, owing to its smaller ionic radius (102 pm) and lower hydration enthalpy (−1829 kJ/mol) compared with lead (119 pm, −1485 kJ/mol), which help it to diffuse in and on the particles.
Adsorption kinetics and Isotherms:
Linear equations of adsorption kinetics (pseudo-first-order, pseudo-second-order and intra-particle diffusion) and adsorption isotherms (Freundlich and Langmuir) were applied to the experimental data. The applied kinetics and isotherms helped determine the adsorption rate and the adsorption mechanism involved in the removal of Pb and Hg using the sonicated and non-sonicated aluminium oxide and aluminium hybrids [28]. The fit of the experimental data was estimated on the basis of the adsorption capacity (qe) and the regression coefficient (R²). Table 2 shows the kinetic parameters of the pseudo-first- and pseudo-second-order models for the adsorption data. Both kinetic models fitted the experimental data well, with R² close to 1 for Hg and Pb. The sonicated and non-sonicated aluminium oxide and aluminium hybrids showed greater affinity and adsorption capacity towards mercury than towards lead. Owing to its small size, mercury initially undergoes physisorption followed by chemisorption, whereas lead shows more inclination towards physisorption. As shown in Table 5, the small deviation and close agreement between the experimental and calculated adsorption capacities (qe) indicate a good correlation. The findings of the present research are further strengthened by other studies [42]. Application of the intra-particle diffusion model [42] shows that adsorption occurs through diffusion in two steps, followed by equilibrium, i.e., saturation of the adsorbent surface (see Table 6). The rapid diffusion of the adsorbate in the first step indicates that it is governed by physicochemical forces, whereas adsorption in the second step is rate-limiting. The calculated diffusion coefficient (Kid) values indicate deviation from a linear relationship between qt and time; a higher Kid suggests more than one rate-controlling step.
The minimum and maximum values of the intercept (C) calculated for Hg and Pb suggest that the boundary layer thickness is greater for lead adsorption, which restricts its movement in comparison to mercury.
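As a concrete illustration of how such linearized kinetic fits are obtained, the sketch below regresses hypothetical uptake data against the standard pseudo-first-order form, ln(qe − qt) = ln qe − k1·t, and the pseudo-second-order form, t/qt = 1/(k2·qe²) + t/qe. The data values are illustrative placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical batch data: contact time t (min) and uptake qt (mg/g)
t = np.array([5.0, 10.0, 20.0, 30.0, 45.0, 60.0])
qt = np.array([8.1, 12.4, 16.0, 17.6, 18.3, 18.5])
qe_exp = 18.6  # assumed experimental equilibrium capacity (mg/g)

# Pseudo-first-order (Lagergren): ln(qe - qt) = ln(qe) - k1 * t
slope1, icpt1 = np.polyfit(t, np.log(qe_exp - qt), 1)
k1, qe_pfo = -slope1, np.exp(icpt1)

# Pseudo-second-order: t/qt = 1/(k2 * qe**2) + t/qe
slope2, icpt2 = np.polyfit(t, t / qt, 1)
qe_pso = 1.0 / slope2
k2 = 1.0 / (icpt2 * qe_pso**2)

print(f"PFO: k1 = {k1:.3f} 1/min, qe = {qe_pfo:.2f} mg/g")
print(f"PSO: k2 = {k2:.4f} g/(mg*min), qe = {qe_pso:.2f} mg/g")
```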
The Langmuir and Freundlich models [43] fitted to the adsorption data are summarized in Table 7. The Langmuir parameters (qm, KL) describe the distribution of the adsorbed molecules between the liquid and solid phases at equilibrium, while the Freundlich parameters (n and KF) describe the adsorption intensity and capacity, which are readily calculated from the slope and intercept.
Further probing revealed that the qm of mercury (17-47 mg/g) is higher than that of lead (8-38 mg/g), representing monolayer coverage. The uptake rate KL of mercury (1.5 × 10−2 to 2 × 10−2 per minute) was also higher than that of lead, as shown in Table 7. Regarding the Freundlich model, higher values of n (1.2 × 10−1 to 8 × 10−2) were found for Hg, indicating that Hg uptake on the hybrids is greater than that of Pb (n = 6 × 10−2 to 9 × 10−2). Similarly, KF (min−1) for mercury is higher than for lead. The correlation coefficients (R² close to 1) show that both adsorption isotherms (Langmuir and Freundlich) fit the adsorption data well. The same trend is observed for the non-sonicated aluminium oxide and aluminium hybrids. Therefore, it can be concluded that the adsorption of lead and mercury on the sonicated and non-sonicated aluminium hybrids is a combination of monolayer adsorption leading to multilayer adsorption.
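To show how such tabulated isotherm parameters are extracted in practice, the sketch below fits the Langmuir model from a 1/qe versus 1/Ce plot and the Freundlich model from a log qe versus log Ce plot; the equilibrium data are hypothetical placeholders, not the study's measurements.

```python
import numpy as np

# Hypothetical equilibrium data: Ce (mg/L) and qe (mg/g)
Ce = np.array([2.0, 5.0, 10.0, 18.0, 30.0])
qe = np.array([9.5, 17.0, 24.0, 30.0, 34.0])

# Langmuir linear form: 1/qe = [1/(qm*KL)] * (1/Ce) + 1/qm
slope_L, icpt_L = np.polyfit(1.0 / Ce, 1.0 / qe, 1)
qm = 1.0 / icpt_L
KL = icpt_L / slope_L          # since slope = 1/(qm*KL) and intercept = 1/qm

# Freundlich linear form: log(qe) = log(KF) + (1/n) * log(Ce)
slope_F, icpt_F = np.polyfit(np.log10(Ce), np.log10(qe), 1)
KF, n = 10.0**icpt_F, 1.0 / slope_F

print(f"Langmuir:   qm = {qm:.1f} mg/g, KL = {KL:.3f} L/mg")
print(f"Freundlich: KF = {KF:.2f}, n = {n:.2f}")
```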
The separation factor "R L " (values ranging from 0 to 1) also signifies whether the adsorption process is favorable or unfavorable and Upon considering the R L values (table 8), it can be assessed that adsorption is a favorable process under optimum conditions because all the values are between 0 and 1. The R L value equal to 1 is assigned to the linear and reversible process [44]. If R L value is equal to 0 or more than 1, then the adsorption process become irreversible and unfavorable.
Conclusions
Aluminium oxide and aluminium hybrids were successfully synthesized using the sol-gel assisted sonochemical method and used as adsorbents for the adsorption of lead and mercury. The optimum adsorption of lead and mercury was attained at 40 mg/L and pH 7 within an equilibrium contact time of 1 h. The sonicated aluminium hybrids showed better adsorption of mercury and lead, up to 67% and 63%, respectively, compared with the non-sonicated aluminium hybrids (58% for mercury and 50% for lead). The adsorption process is well described by pseudo-first-order and pseudo-second-order kinetics and by the Langmuir and Freundlich isotherms, with regression coefficients (R²) greater than 0.99.
Declaration of Competing Interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
A signaling visualization toolkit to support rational design of combination therapies and biomarker discovery: SiViT
Targeted cancer therapy aims to disrupt aberrant cellular signalling pathways. Biomarkers are surrogates of pathway state, but there is limited success in translating candidate biomarkers to clinical practice due to the intrinsic complexity of pathway networks. Systems biology approaches afford better understanding of complex, dynamical interactions in signalling pathways targeted by anticancer drugs. However, adoption of dynamical modelling by clinicians and biologists is impeded by model inaccessibility. Drawing on computer games technology, we present a novel visualization toolkit, SiViT, that converts systems biology models of cancer cell signalling into interactive simulations that can be used without specialist computational expertise. SiViT allows clinicians and biologists to directly introduce, for example, loss-of-function mutations and specific inhibitors. SiViT animates the effects of these introductions on pathway dynamics, suggesting further experiments and assessing candidate biomarker effectiveness. In a systems biology model of Her2 signalling we experimentally validated predictions using SiViT, revealing the dynamics of biomarkers of drug resistance and highlighting the role of pathway crosstalk. No model is ever complete: the iteration of real data and simulation facilitates continued evolution of more accurate, useful models. SiViT will make accessible libraries of models to support preclinical research, combinatorial strategy design and biomarker discovery.
INTRODUCTION
Targeted cancer therapy aims to disrupt aberrant cellular signaling pathways. Drug targets are identified within those pathways that should be functionally linked to disease progression and have a disease specific biomarker to predict or assess therapeutic response [1]. Such biomarkers are thus surrogates of pathway state, but there has been limited success in translating candidate biomarkers to clinical practice [2]. Indeed only a tiny fraction of identified potential biomarkers have been adopted into clinical practice [3]. A key limitation to clinical translation of biomarkers is rooted in a drug design process typically framed in a single-target-single-drug paradigm [4] in the face of three major complexities: (1) the intrinsic complexity of pathway networks, (2) unforeseen feedback effects, and (3) dynamical adaptive changes in pathways when challenged by a drug.
Each of these complexities attracts different challenges. Topological complexity is an essential regulatory characteristic of cellular signaling pathways, with signaling networks exhibiting such features as pathway cross-inhibition, cross-activation, redundancy and convergence [5]. Targeted therapies can impact these regulatory processes, resulting in system-scale changes to behavior reaching beyond the targeted region in ways that are difficult to predict [6]. Feedback in signaling networks likewise has a regulatory role, exerting either positive or negative effects on cascade components [7]. Feedback loops provide plasticity in signaling pathway behavior that can enable cells to adapt to therapeutic insult [8]. Further, there is a growing body of research to suggest that the strengths of these regulatory mechanisms are not static: they are dynamic over time in response to drug action [9], and so it is likely that dynamic features of network signaling might form the basis of drug targets rather than the network components themselves [10]. These complexities and dynamical adaptive changes may confer resistance to drug therapy [11]; increasing evidence from clinical studies suggests that combination therapies offer a possible route to address drug resistance [12,13], and cell line studies point towards the use of combination therapy to sensitize cells to anti-cancer therapy [14].
In the face of this complexity, our representations of signaling pathways and drug combinations have become increasingly sophisticated, and there is a growing opportunity for systems biology modeling to contribute to experimental design and to unravel the mechanisms and complexities of network functioning and combination therapy design [15]. For example, in a recent theoretical study of mono- and combination therapy to overcome drug resistance to kinase inhibitors [16] based on thermodynamic factors, the developed model is able to demonstrate both resistance to single-drug treatment for two inhibitors when applied individually to the same kinase target and the overcoming of that resistance when those inhibitors are applied in combination. Further, through systematic investigation into model dynamics a suggested mechanism of action is identified, whereby the binding of one promoter to an inhibitor introduces conformational change in another promoter and this change provides a target for a second inhibitor to act in combination.
The value of this and other systems biology models depends on sophisticated model analysis and interpretation of model complexities, including for example the determination of system-scale control parameters such as the transition from drug resistance to sensitivity [17] and state space search optimization methods for model parameter estimation [18]. Analytical methods such as these are typically the working arena of theoreticians. Importantly, biologists and clinicians with the pertinent domain expertise are then dependent on such theoreticians to explore model dynamics, and this is a relative barrier to effective implementation of systems biology models into preclinical and clinical stages of drug development.
To overcome this barrier, we present a new, interactive visualization and animation technology, SiViT (Signaling Visualization Tool), to enable biologists and clinicians to work directly with the model. SiViT allows biologists and clinicians to directly introduce and visualize the effects of changes in pathway dynamics in silico (for example by introducing mutations or inhibitors), thereby identifying unforeseen interactions, suggesting further experiments where the model is incomplete, and identifying and assessing the possible effectiveness of candidate biomarkers as readouts of pathway status after dynamical adaptation. SiViT is generalizable and accessible, thus supporting preclinical research, combinatorial strategy design and biomarker discovery.
SiViT provides a single framework within which models may be imported (in Systems Biology Markup Language (SBML) format [19]) and their dynamics animated. Model structure is automatically projected onto a graph, with graph nodes representing entities in the network, such as molecular species and drugs, and edges representing node interconnections, the pathways. SiViT allows interactive animation of both species concentrations and signaling activity over time. These core features enable life scientists to animate and probe the dynamics of a cellular signaling model. Most importantly, SiViT allows domain experts to interact with the model, be it by introducing species mutations and/or by adding specified (combinations of) concentrations of drugs at specific times.
Beyond these features, SiViT facilitates comparison of model dynamics in two different experimental regimes, for example with and without drug intervention and/or under species mutation, through an easy-to-use menu system. Comparisons between experimental regimes are depicted using intuitive, color-coded animations. The result is an interactive in silico exploration and discovery platform to enable the life scientist to explore and exploit existing SBML-format models of cellular signaling and drug action. We illustrate these model features using as an exemplar the cell signaling model and experimental regimes described in detail in [20] and [21]. SiViT and supporting documentation are available from the authors on request, together with the exemplar signaling model. SiViT requires a full installation of MATLAB 2011b (www.mathworks.com) but will automatically install all other required software.
Visualizing signaling networks with SiViT
The visualization created by SiViT is encapsulated in a User Interface (Figure 1). SiViT automatically arranges the model as a network in which nodes are species and edges are reactions, laid out according to a force-directed graph algorithm [22] that optimizes the layout. A play icon animates the model dynamics and is linked to a slider bar that lets the user move the visualization forwards and backwards in time.
The main window (Figure 1) depicts the signaling network in response to a particular in silico experimental regime (drug interventions, mutational status). The color scheme depends on the configuration of SiViT. For a single experiment SiViT shows the network in white. Each species is shown as a node, and species concentration is depicted by the radius of a translucent sphere around that node (see Figure 1, Inset 2). Importantly, this radius will increase and/or decrease over the course of the simulation according to the concentrations calculated within the signaling model. Reaction velocities among nodes are visualized in a similar manner: edge thickness is a function of reaction velocity and so increases and/or decreases in line with model dynamics.
Where two different experimental regimes are set up for comparison, the visualization is tri-colored (white, red, blue). Regimes are defined via the intervention panel, and one regime is designated "Control", the other "Experiment". For every time point the values of each node and each edge in the Control and Experiment are compared. If there is no difference between these values the node/edge is white; if the Experiment value is higher or lower than the Control value the node/edge is colored red or blue respectively, with intensity proportional to this difference. The intervention panel (Figure 1, Inset 1) is a pop-up dialog box that allows the introduction of known drugs and/or mutations. Drugs may be selected from a drop-down list, and both the dosage and the timing of application of that drug can be specified. For protein expression and catalytic activity changes indicative of mutations, any species in the model may be selected from a drop-down list, and the protein concentration level or kinetic constant, together with the time of change in that level or constant, may be specified. In this way, complex regimes with multiple drugs and multiple mutations may be specified.
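The tri-color comparison can be pictured as a simple mapping from the control/experiment difference to a node or edge color. The function below is a minimal sketch of that idea, not SiViT's actual implementation (SiViT is written in Java); the names and the clamping scale are illustrative.

```python
def diff_color(control: float, experiment: float, scale: float = 1.0):
    """Map a control/experiment value pair to an RGB tuple.

    White = no difference; toward red = higher in the experiment;
    toward blue = lower; intensity proportional to the difference.
    """
    d = max(-1.0, min(1.0, (experiment - control) * scale))
    if d >= 0:   # experiment higher -> shade toward red
        return (1.0, 1.0 - d, 1.0 - d)
    return (1.0 + d, 1.0 + d, 1.0)  # experiment lower -> shade toward blue

print(diff_color(0.8, 0.8))  # (1.0, 1.0, 1.0): white, no difference
print(diff_color(0.8, 0.5))  # (0.7, 0.7, 1.0): bluish, lower in experiment
```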
Additionally, and to illustrate the visualization of other models with SiViT, we loaded the SBML model of ERK signaling from [23]. In an exploration of the link between cell fate and signaling dynamics, model results show that an increase in one ERK isoform results in a decrease in the other isoform. To demonstrate model functioning we reproduced some key findings (see Supplementary Figure S1).
Interactive animation of signaling responses to combination therapies
Using the model of the PI3K/PTEN/AKT and RAF/MEK/ERK signaling pathways developed in [20], [21] and [17], we use SiViT to reveal the dynamic signaling response to: (1) application of a growth inhibitor drug; (2) introduction of a mutation in the network that is known to confer drug resistance; (3) addition of a second drug to restore network sensitivity, i.e. a combination therapy to overcome drug resistance. The computational model describes the signaling response kinetics to heregulin, a growth factor that binds with receptors in the Erbb family to induce HER3/HER2 receptor dimerization, pHER23 (see model schematic in Figure 2). This stimulates tyrosine phosphorylation, which in turn drives intra-cellular signaling activity. This signaling activity can be inhibited by receptor tyrosine kinase (RTK) inhibitors, and we include one such inhibitor in our model: pertuzumab (2C4 antibody). Pertuzumab is designed to target HER2 to inhibit HER2 dimerization with other Erbb family members, and especially formation of the oncogenic HER2-HER3 dimer. This HER2-HER3 dimer can activate the PI3K/PTEN/AKT signaling pathway that governs cell survival and so proliferation and tumor growth. Pertuzumab thus acts to suppress activation of the PI3K/PTEN/AKT cell survival pathway. Supplementary Videos S2 to S4 and the still images in Figure 3A-3D show the dynamical response of this model visualized through SiViT, with the PI3K/PTEN/AKT pathway shown vertically downwards.
Supplementary Video S2 shows the addition of 30 nM pertuzumab at the beginning of the simulation, and animates the network dynamics in response to drug action compared to network functioning without drug application. The drug is added through a drop-down menu of available therapies, with a standard dosage provided that can be edited, and the drug is added at the current time point in the simulation by default (also editable to allow for sequential application). The first minute of simulation dynamics shows the early network response to the addition of the drug (top left): regions depicted with higher activity are the node in the network representing the drug concentration and the HER2 node to which it binds. Additionally, increases in HER3 and HER3 bound with heregulin are observed (in red) since HER2 inhibition limits HER2/HER3 dimerization and so more HER3 is free.
SiViT highlights the impact of pertuzumab on network signaling following drug action, and shows this impact propagating along both the PI3K/PTEN/AKT and RAF/MEK/ERK pathways over time in Supplementary Video S2. Figure 3A depicts the response of the signaling network to 30 nM pertuzumab after 1 minute compared with normal functioning in the absence of pertuzumab. This reduction in signaling activity was constant throughout the simulation: Figure 3B shows the simulation after 10 minutes. The key output from this pathway, AKT, is shown in the graph inset in Figure 3A: SiViT analysis revealed an increase in AKT, and so a decrease in active, phosphorylated AKT over the 10-minute simulation. Note that the single node to the far left of this pathway is the input node for a second drug.
Within this pathway, the PTEN-pPTEN cycle response was time-variant, as shown in Supplementary Video S2: the PTEN concentration decreased over time in normal functioning since activated PTEN, i.e. pPTEN, increased. Following inhibition by pertuzumab, those species that would normally bind with PTEN were inhibited (blue), and so PTEN levels decreased at a much slower rate. Figure 3A shows no notable change in PTEN level after 1 minute; Figure 3B reveals the relative increase in PTEN level following inhibition, and the graph inset in Figure 3B shows the PTEN level over the whole simulation.
A similar time-variant response is observed in the RAF/MEK/ERK signaling pathway. Figure 3A-3D shows the RAF/MEK/ERK pathway across the top of the signaling network. As a consequence of the reduction in the input signal pHER23 following HER2 inhibition by pertuzumab, we observed a reduction in signaling activity in the whole pathway at 1 minute (Figure 3A). At 10 minutes (Figure 3B) we observed increases in signaling activity. Importantly, and in contrast to PI3K/PTEN/AKT signaling, this represents a time lag in signaling activity: the reduced input pHER23 slowed the rate but not the level of signaling in this pathway, so the levels measured at the 10-minute time point are higher, but this is an artefact of differential phasing. These dynamics can be observed in Supplementary Video S2.
Next we introduced a mutation into the network, PTEN loss, which is associated with resistance to anti-cancer drug therapy. PTEN loss was represented in the model by a 50% reduction in the original PTEN level. Again we focused on AKT signaling and the impact of PTEN loss on the effectiveness of pertuzumab in inhibiting AKT signaling. Note that both our model and experimental systems confirmed that PTEN loss alone does not influence AKT activity. Supplementary Video S3 shows the introduction of that mutation through the Adjust Model section of the dialog box, and the resulting network dynamics in response to pertuzumab compared with the network response without this mutation. SiViT reveals an increase in AKT signaling, shown in the edges connected to the AKT node and visible from one and a half minutes onwards. Supplementary Video S3 also shows progressive decreases in the region surrounding PTEN (lowest part of the network in Figure 3C), manifest in the various PTEN complexes (see schematic in Figure 2), with some nodes that would otherwise interact with PTEN showing an increase due to lower levels of complex formation. Figure 3C shows the effect of PTEN loss on the efficacy of pertuzumab compared with pertuzumab action in a network with no mutation: a marked decrease in the amount of AKT (see graph inset) over the 10-minute period, reflecting the increase in AKT signaling. Finally, the decrease in MEK, pMEK and ppMEK is due to cross-talk between the PI3K/PTEN/AKT and RAF/MEK/ERK signaling pathways, e.g. lower levels of activating, phosphorylated RAF.
We then restored network sensitivity to pertuzumab following PTEN loss with the addition of a second drug, the PI3K inhibitor LY294002. The combination therapy of pertuzumab and LY294002 was identified via an in silico perturbation analysis and subsequently confirmed via in vitro experiments [17], which derived a control parameter that governs the signaling response of the PI3K/PTEN/AKT pathway to pertuzumab. This control parameter encapsulates the ratio of PTEN to the product of active PI3K and AKT, so PTEN loss can be compensated for by PI3K inhibition (or AKT inhibition). Supplementary Video S4 and Figure 3D show the response of the mutated network to this combination therapy compared to the normal network response to pertuzumab. Note that in [17] we reported on 5000 nM of LY294002; here we used SiViT to explore the parameter space and identified a very close match to AKT signaling in the normal response to pertuzumab (see inset graph) with only 100 nM of LY294002. The down-regulated region in Figure 3D at 10 minutes of simulation time occurs because of the reduction in concentrations of PTEN (through mutation) and PI3K (through drug action). Supplementary Video S4 reveals the dynamics of this down-regulation, and shows differential, increased down-regulation of the PI3K/PTEN/AKT pathway after 1 minute compared to the original sensitive network (Supplementary Video S2). This difference is short-term: after approximately 7 minutes there is no substantial difference between the original sensitive network and the network in which sensitivity has been restored through combination therapy.
Dynamics of biomarkers of drug resistance
In [24] we used a slightly modified version of the model presented in Figure 2 and developed a global sensitivity analysis (GSA) approach to support exploration of the effect of adjusting multiple model parameters on signaling pathway activity, with a particular focus on phosphorylated AKT. Biomarkers of AKT signaling dysregulation were determined based on those parameters that had the highest sensitivity for pAKT signaling levels. Figures 4A-4D show the integration of the GSA and SiViT analyses for the identification and interpretation of the biomarkers PDK1 and PI3K. Figure 4A shows experimental results and model predictions for phosphorylated AKT signaling dynamics in the OVCAR4 cell line in response to heregulin stimulation and drugs targeting either the HER2 growth receptor or the identified biomarkers PDK1 and PI3K. Figures 4B-4D show SiViT visualizations of the OVCAR4 cell line for each experimental condition in Figure 4A.
Supplementary Video S5 and Figure 4B show the response of the signaling network to heregulin and 30 nM of pertuzumab after 60 minutes compared with normal functioning. The effect of pertuzumab is similar to that in the PE04 network of Figure 3A: SiViT analysis revealed an increase in AKT, and so a decrease in active, phosphorylated AKT over the simulation. Following inhibition by pertuzumab, those species that would normally bind with PTEN were likewise inhibited (blue). Of note is the effect of pertuzumab on signaling downstream of AKT (far right of Figure 4B). Downstream AKT complexes were down-regulated; in contrast, SiViT revealed upregulation in the MEK signaling cascade that binds with PP2A. PP2A provides cross-talk between the AKT and MEK pathways, and it is this cross-talk that drives upregulation of MEK-PP2A complexes. Downregulation of AKT causes an increase in the availability of PP2A, the levels of which remain largely constant following pertuzumab, resulting in increased MEK-PP2A complexes. The effect of this cross-talk becomes increasingly pronounced over time. The inset in Figure 4B shows integrated pAKT signaling in response to heregulin in the presence (blue line) and absence (black line) of pertuzumab, in good agreement with the blue (HRG+2C4) and red (HRG) lines in Figure 4A, respectively. Figures 4C and 4D show the network response to heregulin and either PDK1 inhibition with UCN-01 (Figure 4C) or PI3K inhibition with LY294002 (Figure 4D), compared to the network response to heregulin alone. Figure 4A shows that PDK1 inhibition is less effective at reducing AKT signaling than pertuzumab. In addition to this single measure, Figure 4C shows the effect of PDK1 inhibition on the entire network after 60 minutes, revealing markedly less inhibition upstream of AKT compared with inhibition by pertuzumab. Down-regulation and upregulation of the AKT and MEK signaling cascades, respectively, are comparable with the signaling activity following pertuzumab and are driven by the same PP2A cross-talk. Figure 4D shows the effect of PI3K inhibition on the whole network. The network response to PI3K inhibition is broadly similar to pertuzumab inhibition across the network, although the down-regulation of the PI3K/AKT pathway is more pronounced following PI3K inhibition. Insets in Figures 4C and 4D show the time-course dynamics of integrated pAKT signaling in response to heregulin in the presence (blue line) and absence (black line) of PDK1 and PI3K inhibition, respectively, in good agreement with the purple (HRG+UCN-01) and green (LY294002) lines in Figure 4A.
DISCUSSION
We have developed an interactive animation tool that can import any suitably formatted dynamical model written in SBML. SiViT is compatible with a wide range of curated models stored on the open access EMBL-EBI BioModels database (https://www.ebi.ac.uk). We provide scripting that converts those models to a form executable by the SimBiology toolbox. Models may be uploaded to the BioModels database for curation and then used with SiViT. Additionally, our software is open source, and so other researchers are able to provide bespoke conversion scripts, in either Java or MATLAB, guided by our scripts or otherwise.

Figure 4: Figure 4A shows phosphorylated AKT over one hour in response to heregulin stimulation (red line) combined with pertuzumab (blue line), the PI3K inhibitor LY294002 (green line) and the PDK1 inhibitor UCN-01 (purple line); Figure 4A is reproduced from [24]. Figures 4B-4D show SiViT visualizations after one hour; insets show integrated AKT signaling in the control condition (black line; heregulin only) and in response to pertuzumab (Figure 4B), LY294002 (Figure 4C) and UCN-01 (Figure 4D).
SiViT allows a non-computational specialist user to interrogate the possible effects of a drug, a combination of drugs and the response of a biomarker to adaptive change in the tumor cell. Biomarker elucidation in the context of combination therapies attracts the challenge of searching through a large state space of possible drug targets, pathway status readouts, drug dosages and timings of application in a problem domain characterized by non-linearities, topological complexity and dynamic rewiring. Systems biology models can capture some of that complexity as exemplified in our consideration of signaling responses to combination therapies. Computational search frameworks can explore that state space in a focused, directed manner as exemplified in our elicitation of biomarker dynamics. SiViT affords observation of those complexities in a given network and allows easy exploration of model dynamics and sensitivities that can inform the search criteria crucial to success of any computational search framework. SiViT can make a direct contribution to complement existing efforts in this arena of study.
For example, [25] describe a model for predicting the impact of combination therapies on the RAS/PI3K signaling network for a panel of cell lines with different mutational status. They show how model analysis can support the identification of combination treatments and subsequently confirm the predictions from the model in a xenograft system. A central issue raised is that only particular combination treatments work for particular cells and modeling can guide this challenging discovery process. Complementary to such model analysis, our approach allows biologists and clinicians to design in silico combination treatments by adding drugs to different cell lines with specified doses and mutations through a menu interface and observing the impact on the signaling network.
Moreover, given a growing awareness that it is not simply the mix of drugs that constitute a combination therapy but also the scheduling of their application, SiViT can support sequential application studies. For example, [9] undertook a sophisticated combined experimental and theoretical study of the sequential application of anticancer drugs. They noted that complexities in signaling networks such as feedback and cross-talk make predicting cellular responses to drug action difficult and especially so in cancer cells since functioning is aberrant. This difficulty is compounded for combination therapies. [9] targeted triple negative breast cancers and showed that EGFR inhibition prior to DNA damaging chemotherapy (doxorubicin) sensitizes some cell lines to that damaging agent. Analysis of gene expression profiles of cell lines that were both sensitive and insensitive to time-staggered EGFR inhibition followed by doxorubicin revealed marked differences in genes including those linked to key survival and inflammation pathways. Further proteomic analyses revealed differences in pathways, including those linked to survival, in cells sensitive to sequential combination therapy. This response is explained in terms of a rewiring of the signaling pathways to sensitize cells to doxorubicin as a result of pre-treatment with EGFR inhibitor; co-treatment or post-treatment did not sensitize cells.
The notion of pre-treating cells to promote sensitivity to a second treatment seems intuitive, yet other work highlights further complexities in sequential combination therapy [26]. In this study a time-dependent effect of PI3K/mTOR inhibition on doxorubicin-induced apoptosis in neuroblastoma cells was observed. Post-treatment with the PI3K/mTOR inhibitor sensitized the cells to doxorubicin treatment most strongly; the sensitizing effect was less pronounced in co-treatment and pre-treatment. This observation reveals that the order of application of combination therapy depends on context. SiViT provides a platform that supports such contextual investigation. Drugs can be added in any order, with each added at an individually specified time and dose. Comparison between different regimes, which could vary in timing and/or dose, allows in silico optimization of time-staggered combination therapies.
Clearly, the identification of biomarkers and the design of effective combination therapies are challenging and require systematic experimental study informed by systems biology modeling. We propose that SiViT provides a valuable bridge between the fields of cell biology and computational modeling, enabling cell biologists and clinicians to explore available models of signaling pathways and drug actions in an environment that does not require expert computational modeling expertise, simply an awareness of the process of modeling. Through our generalizable technology we seek to promote the uptake of modeling by the biological and clinical communities in support of preclinical research, combinatorial strategy design and biomarker discovery.
SiViT framework
SiViT comprises three major components: a controller and interfaces to both the user and to MATLAB for model (re)calculation. It is implemented as a suite of Java program files. Figure 5 depicts the overall structure of SiViT. SiViT requires as external files the cellular signalling model itself, implemented in MATLAB with the model structure represented within an SBML scheme, a list of therapeutic interventions (drug name and typical concentration) and an optional set of 3D graphical object files (not shown).
Central to this architecture is the SiViT controller, which has two major roles: to import both a specific signaling model and a predetermined list of model interventions; and to translate user interactions with the visualized model into updated parameter sets for MATLAB, and model results back into the visualized model. Optionally the SiViT controller can import 3D graphical objects illustrating each node (species) to enhance visual aesthetics (not shown in Figure 5; see User Interface below).
Importing the model requires the reading in of the SBML scheme that defines the model. Implemented using the matlabcontrol (code.google.com/p/matlabcontrol/) Java API to MATLAB, the SiViT controller establishes a communication protocol to MATLAB in terms of parameters and forms a software link in order to invoke the MATLAB model, managed by the MATLAB interface. The list of pre-defined therapeutic interventions is a file of [drug name, dosage] pairs; note dosage (in nM) and time of application can be modified through the User interface. Note interventions not on this list can be introduced easily. The algorithm describing SiViT operation is provided in Supplementary Text S6.
MATLAB interface
Interlinking the MATLAB model and the User interface is more complex. The User interface drives the dynamics of this interlink. In summary, the loading of a new model and changes to a model through the User interface (see below) generate interface events that are passed to the SiViT controller and converted into changes to the parameter set for the MATLAB interface. The MATLAB model is then (re)calculated and results passed back to the SiViT controller via a data structure. This data structure is then processed and passed to the User Interface for an updated visualization. Note the User interface is able to specify and then compare two different model regimes, and in this case the SiViT controller manages two concurrent data structures: one for each set of model results.
In more detail, the MATLAB interface is provided with a data structure capturing the form of the parameter set of the signaling model in terms of the names of both species and reactions together with protein concentrations and reaction velocities over time. Through the matlabcontrol API the selected signaling model is executed and the time series of results (concentrations, velocities) updated. Any interventions added to the model through the User interface are captured as user-generated events and added to the parameter set for the signaling model. When the SiViT controller detects a parameter set change, the MATLAB interface triggers recalculation of the model. The computed model may then be queried through this MATLAB interface for all species and reaction names, and all protein concentrations and velocities over time. This combination of intervention addition, model recalculation and state variable query provides all the data to feed the User interface.

Figure 5: SiViT architecture. Interoperation of major components (rounded rectangles), external resources (rectangles) and key interactions (arrows). Through the User Interface, users may select an SBML file containing the SBML scheme for the model, and load that model into SiViT with an accompanying set of therapeutic interventions (a file of [drug name, dosage] pairs). SiViT then constructs a model as defined by the SBML, passed as parameters to MATLAB, and captures all time series data computed by MATLAB, i.e. the results, pertaining to all biomolecular species. The visualization is based on both the model structure, i.e. species and reactions as prescribed in the SBML, and the results of model execution, i.e. species concentrations and reaction velocities over time. A user may make changes to that model via the interface, e.g. by adding a drug at a prescribed time, and this generates an event that is translated into a change in model parameters. This in turn results in a model recalculation, the results of which are passed back for re-visualization.

The network itself is arranged using a force-directed graph layout algorithm [22]. This algorithm seeks to optimize the network structure such that edges are of equal length and do not intersect (in 2D or 3D space as specified). In our implementation, and to provide additional flexibility in layout choices, it is also possible to constrain the layout algorithm such that the network is arranged onto the surface of a sphere.
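The event-driven flow just described (interface event, updated parameter set, model recalculation, re-visualization) can be summarized in pseudocode. The Python sketch below uses hypothetical names and is not SiViT's actual Java/MATLAB code.

```python
class SivitControllerSketch:
    """Illustrative controller loop: interventions mutate the parameter
    set, which triggers recalculation and a view update."""

    def __init__(self, model_runner, view):
        self.params = {}                  # drug doses, mutations, timings
        self.model_runner = model_runner  # wraps the SBML/MATLAB model
        self.view = view                  # the visualization front end

    def on_intervention(self, name, value, time):
        # e.g. pertuzumab at 30 nM from t = 0, or PTEN at 50% level
        self.params[name] = (value, time)
        self.recalculate()

    def recalculate(self):
        # Time series of species concentrations and reaction velocities
        results = self.model_runner.simulate(self.params)
        self.view.update(results)
```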
User interface
The time slider bar allows the user to move forwards and backwards in time through simulation dynamics simply by moving the slider between 0 and maxTime, the maximum duration of the simulation. By pressing the play icon to the left of the slider bar, the slider icon on this bar will move from left to right automatically during the simulation. The play function can also be paused, and the slider moved by the user (forwards and backwards in time).
Finally, a small dialogue box to the right of the slider bar allows modification of the speed of model animation.
The central viewing frame holds the signaling network itself, and the detail of the visualization depends on whether SiViT is being used to explore the dynamics of a single experimental regime or to compare two different regimes. For a single experimental regime the visualization is monochrome (white). Nodes in the network depict species concentration: the radius of the sphere is proportional to the species concentration and so varies in line with model dynamics. If the optional 3D graphical objects are imported, the center of the node is a bespoke volume image for that species; if not, it is a yellow volume image. Reactions are shown as inter-node connections, i.e. graph edges, and edge thickness is proportional to reaction velocity and so varies in accordance with model dynamics over the course of the simulation.
CONFLICTS OF INTEREST
There are no potential conflicts of interest to disclose.
Design and Optimization of a Solar PV System: A Case Study of KIOT Administration Offices
Design and optimization of solar PV systems is a leading trend in modern energy management of distribution systems. Today, customers draw energy from different sources such as sunlight, wind, diesel, biomass and even batteries, as well as from the main electric power grid, which facilitates not only its conversion into electric energy but also demand management, storage and the coordination of generation with the system's output. Recent distributed generation (microgrid) implementations combine loads with sources, allow for intentional islanding and try to use the available waste heat. These solutions rely on complex communication and control, depend on key components and require extensive site engineering. This paper focuses on the design, optimization and simulation of a 48-V rated stand-alone solar PV system using HOMER software, supplied primarily by photovoltaic (PV) panels, using battery and diesel for comparison, but also having the capability to tie into a main electrical grid. A system of this size should be able to supply power for the KIOT administration office buildings. The most important objectives of this paper are the selection of an appropriate PV array, the selection or design of a charge controller, and the design of the system's renewable energy converter.
Introduction
One of the best ways to provide power to remote, off-grid locations, and reliable electric power in grid-connected systems, whether in developed or developing countries, is through the Solar Home System (SHS). The system includes a solar PV array, battery, solar charge controller and inverter. In most cases consumers use solar energy in the evening hours, so the solar energy must be stored in batteries [1]. A solar charge controller is similar to a voltage regulator: it regulates the voltage and current coming from the solar panels and going to the battery. Most batteries are fully charged at 14 to 14.5 V. On the other hand, a battery's lifetime is drastically reduced by discharging beyond the level of 70%-80%; at this discharge level the battery voltage normally drops to 11.5 V. Each battery has a certain capacity limit, and battery lifetime is drastically reduced by overcharging and deep discharging. As the battery is a very expensive component of a Solar Home System, it is necessary to protect the batteries from being overcharged or deeply discharged, and here the charge controller plays a vital role. A series charge controller disables further current flow into the batteries when they are full, while a shunt charge controller diverts excess electricity to an auxiliary or "shunt" load, such as an electric water heater, when the batteries are full. Since the industrial revolution, human activities have constantly changed the natural composition of Earth's atmosphere. Concentrations of trace atmospheric gases, nowadays termed "greenhouse gases", are increasing at an alarming rate [2]. The consumption of fossil fuels, the conversion of forests to agricultural land and the emission of industrial chemicals are principal contributors to air pollution. Under normal atmospheric conditions, energy from the sun controls Earth's weather and climate patterns; heating of Earth's surface by the sun radiates energy back to space, and the trapping of this energy results in the greenhouse effect, or global warming. Global and national scenarios of primary and, in particular, electrical energy consumption for the coming decades basically all predict a strong increase in the technical utilization of renewable energy (RE). A significant increase in the use of RE satisfies the requirements of climate protection and allows suitable growth in energy consumption for newly developed and developing countries. At the same time, it aims to decrease the share of fossil primary energies in the medium and long term and has the potential to reduce the share of nuclear energy to zero. However, to achieve this goal all RE technologies (hydro power, wind power, biomass, geothermal heat and solar irradiation) have to be mobilized in a balanced way and in a coordinated time sequence, according to their economic market relevance and their technical potential. Among RE sources, solar irradiation will in the long term have to become the main contributor to a global renewable energy supply because of its unlimited potential. Photovoltaics (PV) is favored for its flexibility with respect to size and fields of application, its long lifetime and its low maintenance requirements. A PV system is a device mainly used to convert solar energy (the radiant energy from the sun) into electricity [3]. Solar power is often used interchangeably with solar energy but refers more specifically to the conversion of sunlight into electricity.
PV technology, used to convert solar energy into electric energy, is acknowledged and used in many countries around the world nowadays.
Photovoltaic (PV) is the technology used to convert energy directly from sunlight into electricity using solar cells. Photons from the sunlight knock electrons into higher energy states, producing electricity. A typical photovoltaic cell produces less than 3 W at approximately 0.5 V DC, so cells must be connected in series-parallel configurations to produce enough power for high-power applications. The electricity produced is direct current, which can be used to power equipment or to recharge a battery. The first application of photovoltaics was to power orbiting satellites and the space shuttle. Nowadays, photovoltaics is important in grid-connected generation, which requires an inverter to convert DC current to AC current [4].
Advantage of using PV | Disadvantage of using PV
Warranty of 20 years or more; the system is very durable and not easily damaged | The production of electricity is uneven throughout the day
Solar energy production is quiet | The system does not produce electricity at night or when it is overcast
Pay-off point (after which there is no monthly bill to pay) | Even though a PV system has many advantages, the initial cost of installation is very high
This research involves designing software to make the calculations for choosing the solar panel and inverter easier. The software must be based on the case study of implementing solar panels at the KIOT administration offices.
To choose a suitable solar panel and inverter that can be implemented at the KIOT administration offices, the power consumption data must first be obtained before the software can be designed to calculate, find and choose the suitable inverter and solar panel.
Solar Radiation
Based on different studies, 51% of the total solar energy reaches the ground, 6% is reflected by the atmosphere, 10% is reflected by the clouds, 4% is reflected from the earth's surface, 16% is absorbed by the atmosphere and the remaining 3% is absorbed by the clouds, as shown in the figure below [5][6]. Solar radiation is an electromagnetic wave emitted by the sun's surface that originates in the bulk of the sun, where fusion reactions convert hydrogen atoms into helium. Every second, 3.89 × 10^26 J of nuclear energy is released by the sun's core. This nuclear energy flux is rapidly converted into thermal energy and transported toward the surface of the star, where it is released in the form of electromagnetic radiation. The power density emitted by the sun is of the order of 64 MW/m², of which approximately 1370 W/m² reaches the top of the Earth's atmosphere with no significant absorption in space [8]. The latter quantity is called the solar constant. The spectral range of solar radiation is very large, encompassing nanometric wavelengths of gamma- and x-rays through metric wavelengths of radio waves.
Cell, Module & Array
Since an individual cell produces only about 0.5 V to 0.8 V, it is a rare application for which a single cell is of any use. Instead, the basic building block for PV applications is a module consisting of a number of pre-wired cells in series, all encased in a tough weather-resistant package. A typical module has 36 cells in series and is often designated a "12-V module", even though it is capable of delivering much higher voltages than that. Some 12-V modules have only 33 cells, which, as will be seen later, may be desirable in certain very simple battery-charging systems. Large 72-cell modules are now quite common; some have all of the cells wired in series, in which case they are referred to as 24-V modules, while others can be field-wired to act either as 24-V modules with all 72 cells in series or as 12-V modules with two parallel strings of 36 series cells each.
Figure 2. Formation of cell, module (combination of cells) & array (combination of modules) of PV.
Multiple modules, in turn, can be wired in series to increase voltage and in parallel to increase current, the product of which is power. An important element in PV system design is deciding how many modules should be connected in series and how many in parallel to deliver whatever energy is needed. Such combinations of modules are referred to as an array.
From cells to modules: when photovoltaic cells are wired in series, they all carry the same current, and at any given current their voltages add. From modules to arrays: modules can be wired in series to increase voltage, and in parallel to increase current. Arrays are made up of some combination of series and parallel modules to increase power.
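To make the series/parallel arithmetic concrete, the short sketch below computes array voltage, current and power from assumed module ratings and string counts; all numbers are illustrative, not ratings from this design.

```python
# Hypothetical module ratings, for illustration only.
v_module, i_module = 24.0, 8.0   # volts and amps at the operating point

n_series, n_parallel = 2, 10     # modules per string, strings in parallel

array_voltage = n_series * v_module          # series adds voltage
array_current = n_parallel * i_module        # parallel adds current
array_power = array_voltage * array_current  # their product is power

print(f"{array_voltage:.0f} V, {array_current:.0f} A, "
      f"{array_power / 1000:.2f} kW")        # -> 48 V, 80 A, 3.84 kW
```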
Working Principle of PV System
Solar, or photovoltaic (PV), cells are electronic devices that essentially convert the solar energy of sunlight into electric energy, or electricity. The physics of solar cells is based on the same semiconductor principles as diodes and transistors. Solar cells convert energy as long as there is sunlight; in the evenings and during cloudy conditions the conversion process diminishes, stopping completely at dusk and resuming at dawn. Solar cells do not store electricity, but batteries can be used to store the energy [9,10]. The most basic power conversion unit of a photovoltaic (PV) system is the solar cell. As shown in Figure 3, sunlight strikes a PV cell and a direct current (DC) is generated. An inverter converts the DC to alternating current (AC), and by connecting the electric load to the output terminals the current can be utilized [11,12].
Data Collection
The different loads and their ratings at the KIOT administration building were collected as the basis for the design.
Estimate the Total Loads of KIOT Administration Building
First, a load chart is developed based on the loads and their energy consumption. The different appliances, their wattages, the number of each appliance and the duration of usage are required to develop the load chart. The load chart is prepared by multiplying the number of appliances by the wattage of each appliance to obtain the maximum watts. This is multiplied by the number of hours of usage to obtain watt-hours. For the different appliances, the maximum watts, average watts and total watt-hours are aggregated individually for calculation purposes (a worked sketch of this calculation is given after the list below).
1. Hours per day used: the number of hours each appliance is used per day, listed in hours; the actual time of load operation must be considered here.
2. Energy per day: the amount of energy each appliance requires per day, determined by multiplying each appliance's wattage by the number of hours used per day.
3. Total energy demand per day: the sum of the quantities in the last column gives the total energy demand required by the appliances per day.
Peak sun hours at optimum tilt is obtained from solar radiation data for the design location and the array tilt for an average day. The annual average peak sun hours at latitude 10.986 degrees north for Kombolcha is 5.89 hours.
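The following Python sketch shows how such a load chart is tallied; the appliance list is hypothetical and stands in for the actual KIOT load data.

```python
# Hypothetical appliance list: (name, wattage in W, quantity, hours/day).
appliances = [
    ("fluorescent lamp", 40, 20, 8),
    ("desktop computer", 150, 10, 6),
    ("printer", 60, 2, 2),
]

total_wh_per_day = 0
for name, watts, qty, hours in appliances:
    max_watts = watts * qty        # maximum watts for this appliance type
    wh = max_watts * hours         # watt-hours consumed per day
    total_wh_per_day += wh
    print(f"{name}: {max_watts} W, {wh} Wh/day")

print(f"Total energy demand: {total_wh_per_day} Wh/day")
```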
Solar PV System Design Calculation for KIOT Administration Building
In this part, we determine the number of inverters, charge controllers, batteries and solar PV modules, the series and parallel connections of the batteries, and the rating of each piece of equipment based on the collected data (electrical loads); a hedged sizing sketch is given below. The figure below shows the block diagram of the connection procedure for the KIOT administration offices.
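A minimal sizing sketch follows, using the Kombolcha peak sun hours from the data above; the daily load, derating factor, module wattage, system voltage, depth of discharge and days of autonomy are assumed example values, not the paper's design figures.

```python
# Assumed example inputs; only peak_sun_hours comes from the data above.
daily_load_wh  = 20000      # Wh/day from the load chart (assumed)
peak_sun_hours = 5.89       # annual average for Kombolcha
derating       = 0.75       # wiring/temperature/dust losses (assumed)
module_watts   = 250        # nominal module rating (assumed)

array_watts = daily_load_wh / (peak_sun_hours * derating)
n_modules = -(-array_watts // module_watts)     # ceiling division

# Battery bank sized for one day of autonomy at 50% depth of discharge.
system_voltage = 48         # V (assumed)
dod, autonomy_days = 0.5, 1
battery_ah = daily_load_wh * autonomy_days / (dod * system_voltage)

print(f"Array: {array_watts:.0f} W -> {n_modules:.0f} modules of {module_watts} W")
print(f"Battery bank: {battery_ah:.0f} Ah at {system_voltage} V")
```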
Mounting Structure
The PV module should be mounted in such a way that it can withstand rain, hail, wind and other adverse conditions. The tilt angle strongly affects the efficiency of the solar PV module, so the mounting structure also serves as a tilting structure that tilts the PV arrays at an angle determined by the latitude of the site location, to maximize the solar insolation falling on the panels. The optimum tilt angle required to maximize the solar insolation changes as the position of the sun varies every month. Similarly, shading has a significant effect on PV generation; partial shading can reduce the system production by up to 90%. Thus, it is essential that the PV arrays be installed at a suitable location free of shading.
The angle formed between the plane of the equator and a line drawn from the center of the sun to the center of the earth is called the solar declination, δ. It varies between the extremes of ±23.45°, and a simple sinusoidal relationship that assumes a 365-day year and puts the spring equinox on day n = 81 provides a very good approximation:

δ = 23.45° · sin[(360/365)(n − 81)]

The tilt angle that would make the sun's rays perpendicular to the module at noon is the complement of the solar altitude angle at noon (β_N = 90° − latitude + δ); for the design day used here,

Tilt angle = 90° − β_N = 90° − 64.114° = 25.886°
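These two relations can be evaluated directly, as in the short sketch below; the choice of design day (here the winter solstice, n ≈ 355) is for illustration only, and the paper's own design day yields the 25.886° value quoted above.

```python
import math

def declination(n):
    """Solar declination in degrees for day-of-year n (equinox at n = 81)."""
    return 23.45 * math.sin(math.radians(360.0 / 365.0 * (n - 81)))

def noon_tilt(latitude_deg, n):
    """Tilt that points the module at the noon sun: 90 - beta_N = latitude - delta."""
    beta_n = 90.0 - latitude_deg + declination(n)   # solar altitude at noon
    return 90.0 - beta_n

# Kombolcha, latitude 10.986 deg N, at the winter solstice (n ~ 355).
print(f"{noon_tilt(10.986, 355):.1f} deg")          # ~34.4 deg
```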
Cost Estimation of KIOT Administration Building
PV panels are the most expensive part of a solar electric system; as such, they are sometimes targets for theft.
Vandalism can also be a problem. For larger loads, their high capital cost can render them a less preferable option if grid extensions or fuel for generators are readily available. Regular maintenance on batteries is essential; they should be checked every month, with the electrolyte level replenished as needed. Properly maintained, batteries should last several years before needing replacement.
Electrical Loads
The average hourly electrical loads for each month of the year for the KIOT administration building are inputs to the HOMER software. The system immediately graphs the load and calculates the load parameters, as shown in Figure 6 below.
Simulation Analysis
The simulation process models the system configuration. It serves two purposes. First, it determines whether the system is feasible; HOMER considers the system feasible if it can adequately serve the electric loads and satisfy any other constraints imposed by the user. Second, it estimates the life-cycle cost of the system, which is the total cost of installing and operating the system over its lifetime. The quantity used to represent the life-cycle cost of the system is the total net present cost (NPC). This single value includes all costs and revenues that occur within the project lifetime, with future cash flows discounted to the present; a minimal NPC sketch is given below. The total net present cost includes the initial capital cost of the system components, the cost of any component replacements that occur within the project lifetime, and the cost of maintenance and fuel. Table 5 below is generated based on the set of input values of the system configuration. The costs, capacities, quantities and lifespans of the system components are taken from different websites. The diesel price is 0.75 $/L, which is the current price of diesel in the country. Figure 7 shows the complete configuration of the system, composed of the PV panel, generating unit, batteries, converters, electrical loads and the AC and DC bus bars. The optimization process determines the best possible system configuration. In HOMER, the best possible, or optimal, system configuration is the one that satisfies the user-specified constraints at the lowest total net present cost. Finding the optimal system configuration may involve deciding on the mix of components that the system should contain, the size or quantity of each component, and the dispatch strategy the system should use. In the optimization process, HOMER simulates many different system configurations, discards the infeasible ones, ranks the feasible ones according to total net present cost, and presents the feasible one with the lowest total net present cost as the optimal system configuration.
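As a rough illustration of the NPC metric by which HOMER ranks systems, the sketch below discounts a constant annual cost over the project lifetime; the capital cost, annual cost and discount rate are assumed example values, not the paper's inputs.

```python
# Net present cost: capital plus discounted annual O&M/fuel costs
# (values below are assumed for illustration).
def npc(capital, annual_cost, lifetime_years, discount_rate):
    total = capital
    for year in range(1, lifetime_years + 1):
        total += annual_cost / (1 + discount_rate) ** year
    return total

# Example: $15,000 capital, $400/year recurring, 25-year project, 6% rate.
print(f"NPC = ${npc(15000, 400, 25, 0.06):,.0f}")   # ~$20,113
```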
Hybrid PV-Diesel System Optimizations Analysis
The optimization results are generated in either of two forms: an overall form, in which the top-ranked system configurations are listed according to their net present cost (NPC), and a categorized form, in which only the least-cost configuration is considered for each system type.
Standalone PV System
The results show the cost breakdown when the PV system is considered as a standalone system; the graphs relate the different costs across the whole system. Currently, PV modules have a high initial cost, which increases the total cost of the system, but the PV option has lower O&M costs, a lower COE and a relatively low operating cost.
Figure 10. HOMER output of hybrid PV and diesel system cost comparison.
Hybrid PV-diesel System
The figure above shows the cost profile of the hybrid system; each cost category increases compared with the standalone case, but the system is more reliable. Nowadays the price of diesel varies, and this affects customers.
PV, Diesel and Inverter System (Without Batteries)
The inverter is used to regulate the voltage, current and frequency of the electrical power system and to convert DC to AC, since the PV system generates DC. The total cost in this case is higher than in both previous cases (cases 1 and 2) because of the cost of the inverter, the diesel generator and fuel. Even though the diesel generator needs no inverter, it is still very costly in operation and maintenance (O&M), so its NPC is the highest among the results. Overall, the results show that the most cost-effective configuration, i.e., the system with the lowest net present cost, is the standalone PV-battery-converter setup.
For this setup, the total net present cost (NPC) is $21,673, the cost of energy (COE) is $0.067/kWh, and the contribution from renewable resources is 100%. This setup could be the best solution and choice for implementation.
Conclusion
A standalone system comprising PV arrays and a diesel generator with battery banks and power-conditioning units has been discussed in this study to achieve a cost-effective system configuration to supply the KIOT administration building. The purpose is to have a continuous supply of electricity even during brown-outs. Before the design of the standalone PV system was started, the solar energy resources of the area under study were assessed. Then, based on these resources, a standalone PV electric power supply system was designed.
The proponents therefore conclude that the design in this study is feasible to supply the entire load while at the same time having the most economical cost from construction through the end of its lifetime. The standalone PV system is thus more feasible than the other system configurations, while the hybrid system (PV, diesel, battery and inverters) is more reliable.
Morphology and cardiac physiology are differentially affected by temperature in developing larvae of the marine fish mahi-mahi (Coryphaena hippurus)
ABSTRACT Cardiovascular performance is altered by temperature in larval fishes, but how acute versus chronic temperature exposures independently affect cardiac morphology and physiology in the growing larva is poorly understood. Consequently, we investigated the influence of water temperature on cardiac plasticity in developing mahi-mahi. Morphological (e.g. standard length, heart angle) and physiological cardiac variables (e.g. heart rate fH, stroke volume, cardiac output) were recorded by imaging under two conditions: (i) acute temperature exposure, in which embryos were reared at 25°C up to 128 h post-fertilization (hpf) and then acutely exposed to 25 (rearing temperature), 27 and 30°C; and (ii) two rearing (chronic) temperatures of 26 and 30°C, with measurements performed at 32 and 56 hpf. Chronic elevated temperature accelerated development in mahi-mahi. Heart rates were 1.2–1.4-fold higher under acute exposure to elevated temperatures across development (Q10≥2.0). The Q10 for heart rate under acute exposure was 1.8-fold higher than under chronic exposure at 56 hpf. At the same stage, stroke volume was temperature independent (Q10∼1.0). However, larvae displayed higher stroke volumes at later stages. Cardiac output in developing mahi-mahi is mainly dictated by chronotropic rather than inotropic modulation, is differentially affected by temperature during development and is not linked to metabolic changes.
INTRODUCTION
Recruitment of fish populations is highly dependent upon environmental conditions encountered during early life-history stages (Houde, 1989; Rijnsdorp, 2009; Wexler et al., 2007). The sensitivity of embryos and larvae may be explained by their small size, incomplete morphological and physiological development, highly dynamic metabolic rate, low energy reserves, small migration capacities (little or no swimming activity) and their heavy dependence on ambient environmental conditions (dissolved oxygen, temperature, pH, etc.) (Finn and Kapoor, 2008). This makes larval fishes more vulnerable to mortality during periods of adverse environmental conditions (food shortage, pollution, predation pressure, etc.) than the adult phenotype of the species (Rijnsdorp, 2009; Tyus, 2011). Ecosystem changes in response to climate change are mainly driven by the global warming trend (Perry et al., 2005; Rosenzweig et al., 2008; Walther et al., 2002). Developing fish, in particular, have a narrower thermal window than the later stages of their life history, making them particularly vulnerable to temperature changes (Pelster, 1999; Pörtner and Farrell, 2008; Pörtner et al., 2001; Rijnsdorp, 2009).
Temperature influences and constraints on amphibian and fish development primarily involve changes in size and in the duration of the period when larvae are susceptible to modification of the normal ontogenetic dynamic (Atkinson, 1995; Elliott and Elliott, 2010; Green and Fisher, 2004; Pelster, 1999; Pörtner et al., 2001; Rijnsdorp, 2009). The cardiovascular system is the first organ system to function in vertebrate embryos, so slight variations in cardiovascular function at these early stages could significantly affect subsequent development and/or species survival. For instance, temperature is a major modulator of intrinsic heart rate, typically having a Q10 value of ∼2.0 (Barrionuevo and Burggren, 1999; Farrell, 2009; Kopp et al., 2005; Schönweger et al., 2000). A rising temperature typically increases the oxygen demand and therefore requires some adjustment in cardiac activity (at least in adult stages) because of the respiratory and circulatory convection needed to deliver sufficient oxygen to the body tissues (Pelster, 1999). Consequently, the cardiac output (the amount of blood pumped by the heart per minute) must be finely regulated.
Heart rate increases significantly with elevated rearing temperature in larvae of freshwater fish such as rainbow trout (Oncorhynchus mykiss) (Mirkovic and Rombough, 1998), zebrafish (Danio rerio) (Barrionuevo and Burggren, 1999) and the common minnow (Phoxinus phoxinus) (Schönweger et al., 2000). In the common minnow, these heart rate changes resulting from temperature variations also increased during development. However, heart rate does not reveal all the physiological changes, because the chronotropic response differed from that of ventricular performance: ventricular end-diastolic volume and stroke volume were higher at lower temperatures (15 and 17.5°C) and during initial cardiac activity in early development. Yet overall cardiac output increased only at higher incubation temperatures and during later larval stages (e.g. the swim bladder inflation stage) (Schönweger et al., 2000). A positive correlation exists between cardiac output and increasing incubation temperature, as well as with tissue mass, in advanced larval stages of rainbow trout (Mirkovic and Rombough, 1998).
Adult fish typically exhibit changes in cardiac rate and contractility in response to environmental challenges like temperature change. However, changes that occur in cardiac performance are much smaller than concurrent changes in metabolism in larval fish (Pelster, 1999; Schönweger et al., 2000). Evidence clearly points to a link between metabolism and cardiac performance in adult vertebrates, and this link represents the main driver for adaptations and adjustments of cardiac activity in response to changing environmental conditions. However, it is unclear whether this relationship exists in embryonic and early larval stages of ectothermic vertebrates, in which the circulatory system does not initially play a primary role in oxygen delivery (Burggren, 2005, 2013; Cano-Martínez et al., 2007; Mirkovic and Rombough, 1998; Pelster and Burggren, 1996). In the natural environment, these physiological and morphological changes are influenced by temperature changes, which may cause significant fluctuations in the productivity and distribution of fish populations, therefore leading to important ecological and evolutionary consequences (Pörtner, 2001; Pörtner et al., 2001).
Small temperature variations may have a greater impact on the development of tropical fish than of temperate fish, the latter being subjected to larger environmental temperature variation (Green and Fisher, 2004). Consequently, we elected to study thermal influences on cardiac physiology and morphology in the mahi-mahi (also known as the common dolphinfish), Coryphaena hippurus. Mahi-mahi is a migratory epipelagic fish species inhabiting tropical and subtropical waters (Beardsley, 1967; Gibbs and Collette, 1959; Palko et al., 1982). Mahi-mahi support important commercial and sport fisheries in the Gulf of Mexico and other areas where they are commonly found (Oxenford, 1999). Beyond the ecological and economic importance of this species, mahi-mahi provide some relevant benefits as a fish model for physiological and environmental studies. Most important is the very short embryonic development compared to the zebrafish model [hatching between 36-45 h post-fertilization (hpf) at water temperatures of 25-28°C], and the embryos remain relatively transparent until 128 hpf.
We hypothesized that mahi-mahi, as a very rapidly growing, high-performance fish, would be particularly vulnerable in early life stages to temperature fluctuations within their normal range, which could increase their exposure to risk in the pelagic environment. Thermally related physiological responses in larval mahi-mahi were measured under two rearing temperatures (26 and 30°C, defined in this study as chronic temperature exposure) and under acute exposure to three temperatures (25, 27 and 30°C). The worldwide temperature distribution of mahi-mahi spans 25 to 31°C, so measurements were performed within this thermal range (Gibbs and Collette, 1959; Oxenford, 1999; Palko et al., 1982).
Temperature influence on embryo-larval morphology
Standard body length increased with development (Fig. 1A; ANOVA, P=0.005). No differences in body length at any developmental stage occurred in hatched larvae raised at 25°C and acutely exposed to 25, 27 and 30°C; thus, the data have been averaged (Fig. 1A; two-way ANOVA, P>0.05). However, mahi-mahi chronically raised at 30°C (4.72±0.04 mm) were longer than those chronically raised at 26°C (4.56±0.04 mm) (Fig. 1B, P=0.02). Despite these body length changes in 56 hpf larvae, Q10 values of ∼1.0 suggest no impact of the duration of temperature exposure (Fig. 1C).
Temperature and cardiac function
Heart rates in embryos and larvae reared at 25°C and then acutely exposed to 27 and 30°C were considerably higher across development (Fig. 2A). For example, heart rates in 56 hpf larvae reared at 25°C and then acutely exposed to 27 and 30°C were 1.2- and 1.4-fold higher, respectively, than heart rates measured in larvae at 25°C. The variation pattern in heart rate across development was similar at 27 and 30°C, with an increase in heart rate from 32 to 56-80 hpf, after which it became constant at later stages. Stroke volume (Fig. 2B) and cardiac output (Fig. 2C) were 1.6- and 2.1-fold higher, respectively, at 104 hpf in larvae reared at 25°C and acutely exposed to 30°C (P<0.05). Both the 27 and 30°C acute temperatures significantly increased stroke volume (1.6- to 1.8-fold) and cardiac output (2.4-fold) at 128 hpf (P<0.05).
Q10 values from 25 to 30°C for cardiac variables as a function of development are presented in Fig. 3. Q10 for heart rate increased from 1.6 to 2.0 after hatching (Fig. 3A), remained constant until 104 hpf and then increased to 2.4 at 128 hpf. Q10 values for stroke volume (Fig. 3B) decreased from 1.9 to 1.0 between 32 and 56 hpf and then greatly increased until a plateau was reached at 104 and 128 hpf, with a Q10 of 2.6. Over a similar temperature variation, the Q10 pattern for cardiac output (Fig. 3C) remained above 2.0, with a decrease from 3.1 to 2.0 between 32 and 56 hpf; Q10 then rose greatly thereafter, reaching 5.9 at 128 hpf.
A linear relationship between heart rate and increasing temperature was observed during the acute and chronic assays, with relatively high correlation coefficients at 32 hpf (R^2=0.912, P=0.031) and 56 hpf (R^2=0.908, P=0.033) (Fig. 4).
Specifically, at 56 hpf, heart rates of mahi-mahi larvae were elevated 1.4- and 1.1-fold with increasing temperature during acute and chronic temperature exposures, respectively (Fig. 5A, P<0.05). Stroke volume and cardiac output were not significantly affected by increasing temperature at 56 hpf, irrespective of the applied thermal challenge (Fig. 5B, P>0.05). While no statistical difference occurred in cardiac output during acute temperature exposure, values tended to increase with elevated temperature (Fig. 5C).
Q10 values for heart rate in 56 hpf hatched larvae illustrated temperature dependence in acute exposure, with a Q10 value of 2.4 (Fig. 6A); the Q10 for heart rate was 1.8-fold lower in chronic temperature exposure. For stroke volume, Q10 remained at ∼1.0 at 56 hpf (Fig. 6B). Interestingly, despite no statistical change in cardiac output under either the acute or chronic conditions, Q10 for this variable increased to around 2.0 when measured acutely over the temperature range of 25-30°C (Fig. 6C), indicating a trend to increase. However, Q10 was 1.1 when comparing cardiac output over the range of the two chronic rearing temperatures, 26-30°C.
DISCUSSION
The development of cardiac functional and morphological parameters has been explored in numerous freshwater and saltwater fish species (Bagatto, 2005; Bagatto and Burggren, 2006; Burggren and Bagatto, 2008; Burggren and Blank, 2009; Farrell et al., 2009; Incardona et al., 2014; Jacob et al., 2002; Pelster, 1999). The rationale driving this research has been thoroughly outlined, with prominent arguments based on the importance of the heart and the impact the developmental environment can have on morphological and functional phenotypes. Therefore, to further document this critical process, we explored the influence of temperature on the development of cardiac form and function in embryonic and larval stages of mahi-mahi. To our knowledge, the literature exploring the physiological capabilities of mahi-mahi in early development under environmental changes (such as temperature) is very limited. The present study is the first to report acute and chronic temperature effects on the development and cardiac filling changes (stroke volume and cardiac output) in the embryonic stages of mahi-mahi.
Challenges to cardiovascular measurements in mahi-mahi
Although morphological changes during development can be readily monitored, assessment of cardiac function during early heart development can be challenging. First, cardiogenesis progresses rapidly (hours to a day) in many warm-water fish species. In mahi-mahi at 26°C, the primitive heart (precursors) appears by 18-19 hpf, with the onset of heart beat by 22 hpf. By 50-56 hpf, the sinus venosus and bulbus cordis start to be differentiated and the heart starts to loop to the lateral side, as it begins to assume its adult configuration over 80 hpf with formation of both valves (P.P., M.G., W.W.B., unpublished data).
Fig. 1. (A) Standard length as a function of development in mahi-mahi raised at 25°C and then acutely exposed for 20 s to 25, 27 and 30°C. (B) Standard body lengths and (C) related Q10 in hatched larvae (56 hpf) acutely and chronically temperature exposed. (D) Pericardio-vitelline areas and (E) the atrio-ventricular angle in 56 hpf larvae chronically temperature exposed. Data are presented as mean±s.e.m. N=30; N=42 and 56 in the chronic assay for 26 and 30°C, respectively. Lowercase and capital letters denote significant differences for acute and chronic temperature assays, respectively (ANOVA and Student's t-test, P<0.05). Asterisks in D and E indicate differences between the two temperature conditions (26 and 30°C). No significant differences were found in pericardium and yolk sac fluid areas between the two rearing temperatures (Student's t-test, P>0.05).
A second challenge to cardiovascular measurements is imposed by the small size of larval mahi-mahi. The use of digital imaging methods in embryos and larvae, while feasible, is labor-intensive, which can limit sample size. However, digital imaging does allow observation of the dynamic behavior of cardiovascular parameters in vivo. Furthermore, the accuracy and reliability of these techniques has been well established in larvae of freshwater fish and amphibians (Bagatto and Burggren, 2006; Hou and Burggren, 1995; Kopp et al., 2014; Mirkovic and Rombough, 1998; Schönweger et al., 2000).
Finally, a third challenge is modeling the shape of the ventricle in a manner that most accurately reflects this complex structure (P.P., M.G., W.W.B., unpublished data). During the early developmental phase, the heart of mahi-mahi is somewhat irregularly shaped. As noted in the Materials and Methods, the ventricle is relatively elongate at 50-56 h, but quickly assumes a shape more like a prolate spheroid with further development (P.P., M.G., W.W.B., unpublished data). In this study, we have modeled all stages on the commonly used prolate spheroid model, allowing us to focus on relative changes between stages and temperatures for better comparison with existing data in literature. However, future studies will gain even greater accuracy by using stage-specific modeling of ventricular shape.
Standard cardiac development at rearing temperature of 25°C
Heart rate in developing mahi-mahi initially increases from 32 to 80 hpf and becomes stable thereafter. These results are qualitatively similar to other fish species, such as zebrafish (Bagatto and Burggren, 2006; Barrionuevo and Burggren, 1999; Denvir et al., 2008; Jacob et al., 2002), common minnow (Schönweger et al., 2000), rainbow trout (Mirkovic and Rombough, 1998), several species of tuna (Pacific bluefin tuna Thunnus orientalis, Atlantic bluefin tuna Thunnus thynnus, and yellowfin tuna Thunnus albacares) (Clark et al., 2013; Incardona et al., 2014), and the greater amberjack (Seriola dumerili). At a standard rearing temperature of 25°C, stroke volume and cardiac output tend to increase across development. Until 80 hpf they are relatively constant, and then, during the mouth-opening and yolk sac resorption stage (104 hpf), stroke volume and cardiac output increased threefold [V_H(104 hpf)=0.40±0.07 nl and Q(104 hpf)=81.7±18.0 nl min−1] compared to 80 hpf. These values were then constant at later stages.
Fig. 2. Acute temperature influence on cardiac function of mahi-mahi during early development. (A) Heart rate, (B) stroke volume and (C) cardiac output measured in mahi-mahi raised at 25°C and acutely exposed to 25, 27 and 30°C. Data are mean±s.e.m. N=10-11 for each plotted developmental stage per temperature. Boxes surround statistically identical values at the same measurement time. Letters denote significant differences between different developmental times at the same temperature (two-way ANOVA, P<0.05).
Heart rate is considered a highly accurate proxy in the literature for modification of cardiac performance in fishes (Clark et al., 2013; Edmunds et al., 2015; Incardona et al., 2009, 2014; Mager et al., 2014; Perrichon et al., 2016). This variable provides insight into, but not a comprehensive picture of, the dynamics of blood pumping. Stroke volume and cardiac output should be included as additional functional variables when evaluating cardiac function in fish early life stages exposed to environmental challenges. These variables could provide an explicit physiological expression of the environmental impact while acting as an estimate and predictor of impairment and risk arising from challenging natural or anthropogenic conditions.
Little is known about cardiac performance in developing marine fish and how it is impacted by environmental factors such as temperature. To our knowledge, this is the first study reporting quantitative data on stroke volume and cardiac output in a developing pelagic marine fish. The red drum (Sciaenops ocellatus), a developing coastal Gulf of Mexico species (Khursigara et al., 2017), at a similar stage (56 hpf) and temperature (25°C), has stroke volumes and cardiac outputs that are 2.8- and 3.4-fold lower, respectively, than those of mahi-mahi. Previous studies using similar imaging methods on developing zebrafish, common minnow or rainbow trout have reported accurate measurements of stroke volume and cardiac output (Bagatto and Burggren, 2006; Kopp et al., 2005; Mirkovic and Rombough, 1998; P.P., M.G., W.W.B., unpublished data; Schönweger et al., 2000). Cardiac output measurements are mass-specific, so a direct comparison of data between species is not possible for organisms weighing less than 1 mg (Mirkovic and Rombough, 1998; Schönweger et al., 2000). However, even given existing differences in life cycle and species, our data compare favorably to the limited literature values.
Enhancement of early larval development by chronic warming temperature
Not surprisingly, an acute increase in temperature of short duration above the standard rearing temperature of 25°C did not affect the body length of larvae (Q10≤1.0). However, larvae chronically exposed to 30°C from hatching displayed greater body length than those raised at 26°C. These changes in size were accompanied by a reduction of the yolk sac volume, which likely indicates a faster absorption of the vitelline reserve associated with the presumably elevated metabolic rate and more rapid advancement through successive larval stages (Pasparakis et al., 2016). Furthermore, chronic elevated temperature accelerated the lateral repositioning of cardiac chambers in hatched larvae of mahi-mahi, as quantified by an increase in the atrio-ventricular angle (Fig. 1). These changes in cardiac angle may be associated with some adjustments in preload, afterload and contractility, all factors modulating stroke volume and therefore cardiac output (Edmunds et al., 2015; Kalogirou et al., 2014). Indeed, even if no evidence of direct functional disruption is shown, and the change might simply be due to advanced development, an increase of this angle can result in depressed cardiac function, as reported in aquaculture in relation to domestication or after crude oil exposure, leading to changes in general heart-tube patterning (Edmunds et al., 2015; Hicken et al., 2011). Thus, our findings regarding temperature-induced body length changes for mahi-mahi are similar to those previously documented (Green and Fisher, 2004; Wexler et al., 2011).
[Figure caption fragment] …and 56 in the chronic assay for 26 and 30°C, respectively. Lowercase and capital letters denote significant differences in acute and chronic assays, respectively (ANOVA and Student's t-test, P<0.05). No temperature effect was shown on stroke volume or cardiac output irrespective of exposure protocol (P>0.05).
Embryonic development is highly energetically costly (Rombough, 2006, 2011). Increasing temperature exacerbates this cost by increasing the energetic extraction from the finite yolk reserves of the developing fish. Tropical fishes develop faster, hatch earlier and grow more rapidly than more temperate fishes (Val et al., 2005). As a consequence, the vitelline reserve is rapidly exhausted, which necessitates a quick transition to reliance upon exogenous food sources (Kim et al., 2015; Yúfera and Darias, 2007). Rapid depletion of the yolk sac with increasing temperature has been documented in at least four marine species: porgy (Acanthopagrus schlegeli), Japanese anchovy (Engraulis japonica), red sea bream (Pagrus major) and Japanese flounder (Paralichthys olivaceus) (Fukuhara, 1990). In these species, earlier yolk depletion is associated with faster development and accelerated development of swimming ability, which is temperature-dependent (Fukuhara, 1990). In fish, exposure temperatures near the lower and upper tolerance limits may delay hatching and induce morphological and cardiovascular impairments (Burggren and Bagatto, 2008; Clark et al., 2013; Pörtner and Farrell, 2008; Schirone and Gross, 1968). In the present study, increasing temperature within the range of optimal water temperature tolerance accelerated the development of mahi-mahi and thereby increased larval size. As previously suggested, faster development may help to quickly develop swimming ability, which might be beneficial for migration, seeking food or avoiding predators (Blaxter, 1991; Houde, 1989). In contrast, acceleration of development at the time of yolk sac absorption might be harmful and limit the optimal growth of the larvae. The yolk sac is the sole source of nutrients in the embryo, so an excessively rapid depletion might reduce the nutritional state of the organism and favor developmental abnormalities. This could reduce first-feeding success, thereby increasing the chance of mortality in the pelagic environment.
Temperature as a positive chronotrope
Warmer temperatures strongly affected heart rate during early development of mahi-mahi. Analysis of Q10 patterns revealed a great sensitivity of heart rate to acute increases in temperature between 56 and 128 hpf, matching Q10 values ≥2.0 (1.9 to 2.4) over this developmental range. This suggests that temperature influences cardiac pacemaker tissue function. In chronic exposure, the Q10 value of 1.3 indicates a slight influence of elevated temperature in hatched larvae compared to those acutely exposed. Temperature triggers changes in cardiac physiological performance in larval fish, with published Q10 values for heart rate of ∼2.0 in both isolated hearts and intact fish (Burggren et al., 2016; Farrell, 2009; Gamperl and Farrell, 2004; Graham and Farrell, 1989). Patterns of heart rate due to increasing temperature similar to those of mahi-mahi occur in freshwater fish, such as the common minnow (Schönweger et al., 2000) and zebrafish (Barrionuevo and Burggren, 1999), with Q10 values of 1.8 and of 1.2 to 2.5 over the temperature ranges of 15-25°C and 25-31°C, respectively. In cold-water species, Q10 values of 2.2-2.4 and 2.6 were also reported in embryos of the speckled trout (Salvelinus fontinalis) and Atlantic salmon (Salmo salar), respectively (Fisher, 1942; Klinkhardt et al., 1987; Pelster, 1999).
Temperature independence of stroke volume and cardiac output
Stroke volume and cardiac output exhibited similar patterns during the earliest development (up to 128 hpf) of mahi-mahi. At all three acute temperatures (25, 27 and 30°C), increased heart rate was not directly associated with any change in stroke volume or cardiac output from 32 to 80 hpf of development. However, from 104 hpf of development, hatched larvae showed a significant elevation in both stroke volume and cardiac output, whereas heart rate remained constant. At the acute temperature of 27°C, embryos and newly hatched larvae (56 hpf) both displayed significantly higher heart rates. At the same temperature, cardiac output elevation was related to increased heart rate from 80 hpf and to greater stroke volume from 104 hpf.
Cardiac output increased linearly during development, directly linked with a similar increase in stroke volume, whereas heart rate remained constant. Increased stroke volume and cardiac output are likely related to greater development and increased body size. Cardiac volumes are even more important at the warmest exposure temperatures at 104 and 128 hpf (stroke volume and cardiac output are 1.6- and 2.1- to 2.4-fold higher, respectively, between 25 and 30°C). Q10 values corroborate this influence, with Q10 for stroke volume and cardiac output being 2.6 and 4.5-5.9, respectively, in 104 hpf mahi-mahi larvae. Considering larvae at 56 hpf, both acute and chronic increases in temperature created a similar range of variation in cardiac performance. Yet cardiac output was not significantly affected by temperature at this developmental stage, although it tended to increase. This slight trend reflects the product of heart rate and stroke volume, with heart rate playing the greater role in cardiac output regulation. Collectively, this trend and the calculated Q10 values lead us to conclude that cardiac output is influenced to a greater extent later in larval development. Our results also highlight that cardiac output in embryos and newly hatched larvae might be mainly dictated by chronotropic rather than inotropic modulation. Similarly, in the larval common minnow, incubation temperature did not create any change in stroke volume or cardiac output (Q10=1.40) (Schönweger et al., 2000). In the adult isolated trout heart, no major changes in stroke volume result from increasing chronic temperature (Q10=1.3-1.4), while heart rate and cardiac output were significantly higher (Graham and Farrell, 1989). The equal stroke volumes observed in embryonic and newly hatched larval mahi-mahi (32 hpf to 80 hpf) under acute and chronic temperature conditions might result from extrinsic compensatory mechanisms.
Dissociation of cardiac performance and oxygen consumption
A marked increase in oxygen consumption occurs as a consequence of increasing chronic rearing temperature from 26 to 30°C in mahi-mahi embryos and newly hatched larvae (Pasparakis et al., 2016). A clear increase in energy demand was particularly evident when embryos approached hatching, and especially so at higher temperatures. A chronic Q10 for oxygen consumption calculated from published data for mahi-mahi (Pasparakis et al., 2016) yielded a value of 3.6 in newly hatched larvae (56 hpf). Our study has clearly demonstrated a temperature influence on early rhythmicity of cardiac function in developing mahi-mahi under acute temperature change and chronic rearing conditions. Increasing heart rate in developing mahi-mahi coincides with an increase in oxygen consumption, but cardiac output appears to be independent of oxygen consumption at this developmental stage. Judging by the associated Q10 values, the influence of temperature appears greater on energy metabolism (oxygen consumption) than on heart rate at similar developmental stages. These differences in Q10 and the absence of a correlation between cardiac output and metabolic rate are not especially surprising. Indeed, while the relation between the metabolic demand of tissues and cardiac activity is well established in adult fish, no apparent link exists between these two variables in early life stages. Diffusion of oxygen through the skin and tissues likely suffices to supply the tissues, making oxygen delivery largely independent of the circulatory system in larval fish (Burggren, 2005, 2013; Farrell et al., 2009; Mirkovic and Rombough, 1998; Pelster and Burggren, 1996; Schönweger et al., 2000).
Conclusions and further directions
From an eco-physiological point of view, this study highlights the importance of measuring a range of physiologically relevant traits to characterize larval condition during early development under environmental change, and then understanding how these variations will influence later life. While heart rate has been used as an indicator of larval condition and health in the past, our data indicate that heart rate alone does not tell the full story of cardiovascular performance. We hypothesized that the rapidly growing, high-performing mahi-mahi would be particularly vulnerable as larvae to temperature fluctuations within their normal range. Yet this hypothesis was only partially supported, as important cardiac variables showed small to no temperature dependency. Further studies should be undertaken to assess the cardiorespiratory capacity of developing fish and define the period during which cardiac performance and metabolic supply become coupled. Furthermore, while considerable information is available about general aspects of development, the underlying mechanisms of cardiac physiological responses to temperature are poorly understood in early life stages and warrant further investigation.
Maintenance and egg production of mahi-mahi
Mahi-mahi broodstock were captured off the coast of Miami, FL, USA using hook and line angling techniques. The fish were subsequently transferred to the University of Miami Experimental Hatchery (UMEH), where they were acclimated in 80 m 3 fiberglass maturation tanks equipped with re-circulated and temperature controlled water (∼26°C). All embryos used in the experiments described herein were collected within 8 h following a volitional (non-induced) spawn using standard UMEH methods (Stieglitz et al., 2012). A prophylactic formalin treatment (37% formaldehyde solution at 100 μl l −1 for 1 h) was administered to the embryos, followed by 30 min of flushing with a minimum of 300% water volume in the treatment vessel using filtered, UV-sterilized seawater. A small sample of embryos was then collected from each spawn to microscopically assess fertilization rate and embryo quality. Spawns demonstrating low fertilization rate (<85%) or frequent developmental abnormalities (>5% of individuals) were not used.
All handling and use of animals in the present study were in compliance with the Institutional Animal Care and Use Committee (IACUC) of the University of Miami.
Experimental protocols
Morphological and physiological data were acquired under two conditions: acute and chronic temperature exposure. During acute exposures, morphological and functional variables were measured over a range of developmental stages in mahi-mahi that were initially reared at 25°C through 128 hpf and then acutely exposed (20 s) to 25 (rearing temperature), 27 or 30°C. For chronic rearing exposures, measurements were made at either 26 or 30°C, at only two developmental points, 32 and 56 hpf. Table 1 summarizes the variables measured during both the acute and chronic temperature assays. Embryo and larval developmental stages are expressed in hpf. The rearing temperatures of 25 and 26°C were chosen for the acute and chronic experiments, respectively, to match the temperatures at which the eggs were collected from the broodstock tanks.
Data acquisition
Image and video capturing
Unanaesthetized embryos and larvae were individually immobilized in UV-sterilized seawater containing 2% methylcellulose (to increase viscosity) in a Petri dish. Individuals were positioned on a thermally regulated microscope stage (Brook Industries, Lake Villa, IL), oriented for ventral and left lateral views for embryos and larvae, respectively. Video images of the positioned individuals were captured using a Nikon SMZ800 stereomicroscope (objective lens 8× and 9.8×) connected to a Fire-i400 or Fire-i530c digital camera (Unibrain, San Ramon, CA). For morphological and functional assessment, 20-s-long live videos were digitized at 30 frames s −1 using PhotoBooth software (dslrBooth Lumasoft, East Brunswick, NJ). Images were calibrated using a stage micrometer.
Assessment of cardiac function
Morphometric measurements were made using ImageJ software (Schneider et al., 2012; http://imagej.nih.gov/ij/) from images acquired as described above. The atrio-ventricular (AV) angle was measured with the ImageJ freehand tool, as two lines diverging from the center point of the maximally relaxed atrium and the maximally contracted (end-systolic) ventricle (Fig. 7). The AV angle was used to estimate the looping of the cardiac chambers (Edmunds et al., 2015). Angle measurements were the average of three measurements from video frames.
Heart rate (fH) and stroke volume (VH) were determined from video sequences of the ventricle in embryos and larvae and used for the cardiac output (Q) calculation. Heart rate (heartbeats min−1) was visually determined from slow-speed videos. End-diastolic and end-systolic volumes of the ventricle were determined by outlining the ventricular perimeter (circumference) using the best-fitting ellipse drawn with ImageJ tools (P.P., M.G., W.W.B., unpublished data). The major and minor axes were then determined, extracted and exported into a Microsoft Excel worksheet.
An important potential limitation of this technique is that at approximately 50-56 h of development in mahi-mahi (the time of our first measurements) the anteriorly located ventricle is relatively elongate compared to the larger and nearly spherical, posteriorly located atrium (Edmunds et al., 2015; P.P., M.G., W.W.B., unpublished data). Within hours, however, the ventricle quickly assumes a more prolate spheroid shape as it grows rapidly to meet and then exceed the size of the atrium. Thus, it is easy for the observer to confuse atrium, ventricle and bulbus in early developmental stages, especially during the early stages of coordinated heart chamber contraction. This makes the quality of lighting and careful observation of moving erythrocytes, and the timing of their movement between chambers, of critical importance. While recognizing the above limitations, as has been more fully explored (P.P., M.G., W.W.B., unpublished data), we were interested in cardiac output changes under thermal influence. Thus, stroke volume for all stages was calculated using the same formula, namely the formula for a prolate spheroid commonly used in previous studies on larval fishes and amphibians and embryonic birds (Bagatto and Burggren, 2006; Burggren and Fritsche, 1995; Burggren et al., 2004; Faber et al., 1974; Hou and Burggren, 1995; Keller et al., 1990; Kopp et al., 2005; Schönweger et al., 2000). This heart-shape model has been previously demonstrated to give the most accurate measurement of ventricular volumes (P.P., M.G., W.W.B., unpublished data):

V = (4/3)πab²

where a represents the major (longitudinal) semi-axis and b the minor (width) semi-axis. Three systolic and diastolic events for each larva were captured, analyzed and then averaged to minimize measurement error. Mean stroke volume (nl) was calculated as the difference between diastolic and systolic ventricular volumes. Ejection fraction (%) was calculated as stroke volume/diastolic volume. Finally, cardiac output (nl min−1) was calculated by multiplying fH by VH.
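To make the pipeline concrete, the sketch below computes ventricular volume from the ellipse semi-axes with the prolate-spheroid formula and derives stroke volume, ejection fraction and cardiac output; the semi-axis values and heart rate are hypothetical, not measured data.

```python
import math

def ventricle_volume_nl(a_um, b_um):
    """Prolate-spheroid volume V = (4/3)*pi*a*b**2; semi-axes in
    micrometres, result in nanolitres (1 nl = 1e6 um^3)."""
    return (4.0 / 3.0) * math.pi * a_um * b_um ** 2 / 1e6

# Hypothetical end-diastolic and end-systolic semi-axes (um).
v_dia = ventricle_volume_nl(70, 45)   # end-diastolic volume
v_sys = ventricle_volume_nl(55, 32)   # end-systolic volume

stroke_volume = v_dia - v_sys                      # nl
ejection_fraction = 100 * stroke_volume / v_dia    # %
f_h = 180                                          # heartbeats per min (assumed)
cardiac_output = f_h * stroke_volume               # nl per min

print(f"SV = {stroke_volume:.2f} nl, EF = {ejection_fraction:.0f}%, "
      f"Q = {cardiac_output:.1f} nl/min")
```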
In mahi-mahi, hatching occurs between 41 and 45 hpf at 26°C. Standard length measurements were made in hatched larvae. The areas of the yolk sac, internal yolk sac fluid and pericardium were also determined with ImageJ.
Calculation of temperature sensitivities
The Q10 temperature coefficient represents a standardized measure of the change in rate (R) of a biological system when the temperature (T) is increased by 10°C. Q10 values were determined by the following equation:

Q10 = (R2/R1)^(10/(T2−T1))

where R1 and R2 are the measured reaction rates at temperatures T1 and T2, respectively (T1<T2). A Q10 value of 2.0 is typical of the normal rate of change of routine metabolism with temperature (Drost et al., 2014; Farrell et al., 2009; Fry, 1971; Randall et al., 2002).
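The coefficient is a one-liner to compute, as shown below; the heart-rate values are made up to illustrate a typical Q10 near 2.0.

```python
def q10(r1, r2, t1, t2):
    """Q10 = (R2 / R1) ** (10 / (T2 - T1)), with T1 < T2."""
    return (r2 / r1) ** (10.0 / (t2 - t1))

# Example: heart rate rising from 150 to 210 beats/min between 25 and 30 C.
print(f"Q10 = {q10(150, 210, 25, 30):.1f}")   # ~2.0
```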
Statistical analysis
Statistical analyses were performed using the Statistica 12 software package (Statsoft, Tulsa, OK, USA). We statistically evaluated morphological and functional variables with one- and two-way ANOVA, followed by Tukey post hoc tests, for the acute temperature assays. In the chronic rearing temperature assays, we used Student's t-tests to compare the two exposure temperatures.
Results are expressed as mean±s.e.m. Data from the chronic temperature assays involved an average of six experiments over time. A significance level of 5% was used for all analyses.
An evaluation of time series summary statistics as features for clinical prediction tasks
Background Clinical prediction tasks such as patient mortality, length of hospital stay, and disease diagnosis are highly important in critical care research. Existing studies on clinical prediction have mainly used simple summary statistics to summarize information from physiological time series. However, this limited set of statistics leads to a loss of information. In addition, using only the maximum and minimum statistics to indicate patient features fails to provide an adequate explanation. Few studies have evaluated which summary statistics best represent physiological time series. Methods In this paper, we summarize 14 statistics describing the characteristics of physiological time series, covering the central tendency, dispersion tendency, and distribution shape. We then evaluate the use of summary statistics of physiological time series as features for three clinical prediction tasks. To find the combinations of statistics that yield the best performance under different tasks, we use a cross-validation-based genetic algorithm to approximate the optimal statistical combination. Results In experiments using the EHRs of 6,927 patients, we obtained prediction results based on both single statistics and commonly used combinations of statistics under three clinical prediction tasks. Based on the results of an embedded cross-validation genetic algorithm, we obtained 25 optimal sets of statistical combinations and then tested their prediction results. By comparing the performance of prediction with single statistics and commonly used combinations of statistics with quantitative analyses of the optimal statistical combinations, we found that some statistics play central roles in patient representation and that different prediction tasks have certain commonalities. Conclusion Through an in-depth analysis of the results, we found many practical reference points that can provide guidance for subsequent related research. Statistics that indicate dispersion tendency, such as min, max, and range, are more suitable for length-of-stay prediction tasks, and they also provide information for short-term mortality prediction. The mean and quantiles, which reflect the central tendency of physiological time series, are more suitable for mortality and disease prediction. Skewness and kurtosis perform poorly when used separately for prediction but can be used as supplementary statistics to improve the overall prediction effect.
Background
Clinical prediction tasks such as patient mortality and disease prediction are highly important for early disease prevention and timely intervention [1,2]. Patient mortality prediction in intensive care units (ICUs) is a key application for large-scale health data and plays an important role in selecting interventions, planning care, and allocating resources. Accurate assessment of mortality risk and early identification of high-risk populations with poor prognoses, followed by timely intervention, are key to improving patient outcomes. A preliminary disease diagnosis assists doctors in making decisions. With the goal of accurately predicting clinical outcomes, studies have proposed methods that include scoring systems and machine learning models [3,4]. The scoring systems for mortality prediction in wide clinical use include the Sepsis-related Organ Failure Assessment (SOFA) [3], the New Simplified Acute Physiology Score (SAPSII) [5], and the Multiple Organ Dysfunction Syndrome (MODS) score [6]. However, most scoring systems, being based on simple logistic regression, have limited performance for patient mortality prediction. With the development of machine learning and deep learning models, studies have applied trained models to clinical prediction tasks and achieved better performance than earlier approaches [4,7].
Feature extraction and patient representation are the underlying premise for constructing prediction models; consequently, these factors are important and affect the prediction performance. An increasing number of monitoring devices and laboratory tests in modern ICUs collect multivariate time series data of varying lengths from patients. Variable-length multivariate time series means that more than one physiological measurement will be collected from a patient after admission to the ICU and that the sampling frequency of each predictor differs within a given time window. Overall, patient data consisting of physiological measurements have typical characteristics, such as high resolution, varying lengths, noisy values, and systematic bias, making the extraction of the temporal features of time series challenging. Most existing models select specific summary values for each predictor over a given time period and concatenate them to form patient vectors. Statistics are one form of summary values, and studies have shown that summary statistics can reflect the characteristics of time series. Moreover, they have advantages such as simple extraction, high robustness and strong representativeness [8][9][10]. The features of time series can be divided into three aspects: central tendency, dispersion tendency and distribution shape. The distribution and trends of time series can be reflected by combining multiple summary statistics, thus approximating the original data distribution and reducing the impact of noise on the prediction results.
Existing studies based on machine learning models have mainly used simple summary statistics, such as the maximum and minimum observations, as features of physiological time series. However, this lack of more comprehensive summary statistics leads to a loss of information from physiological time series. In addition, using only the maximum and minimum statistics to indicate patient features fails to provide adequate explanations. Despite the likelihood that more comprehensive features would have clinical implications, few existing studies have experimentally evaluated which summary statistics can best represent physiological time series. In this paper, we report an exhaustive set of results based on different combinations of summary statistics used as features of physiological time series for three clinical prediction tasks. The contributions of this study are twofold: on the one hand, we summarize and use 14 statistics as options for physiological time series representation, compared with previous studies that used only a few statistics. On the other hand, we experimentally evaluate the performance of different summary statistics as features of physiological time series for different prediction tasks and obtain many conclusions that have practical implications and can provide guidance for subsequent related research.
The remainder of this paper is arranged as follows. First, we outline the related works. Second, we describe our method and its details and then present the experiments and results. Next, we discuss the results of the previous section. Finally, conclusions and future prospects are provided in the last section.
Methods for representing physiological time series
The most common method for representing physiological time series is to summarize the changing features of the data contained in the predictors using summary features and concatenate them as a representation of a patient. Such statistics are simple and easy to calculate and have wide applications. Some studies also adopt the first measurement of each predictor as the characteristic value of the time series. The statistics used in some of the existing studies are listed in Table 1. As Table 1 shows, these include the maximum, minimum and mean values, which are widely used. One reason for their wide use is that these statistics are easy to acquire. Another is that experts tend to believe that the maximum and minimum observations reflect the normality or abnormality of a patient index, while the mean value reflects the average fluctuation range of the index over a period of time. A few studies have attempted to characterize time series features using statistics such as the standard deviation, median and skewness. In addition to the above studies, many studies have attempted to fully capture the temporal trends hidden in multivariate time series data. Hug et al. considered a comprehensive set of physiologic measurements and manually defined a set of trend patterns [26]. McMillan et al. used temporal pattern mining to discover time series feature patterns [27]. Cohen et al. identified clinically relevant patient physiological states from physiologic measurements based on hierarchical clustering [28]. Yuan et al. applied nonnegative matrix factorization to group trends in a way that approximates patient pathophysiologic states [29]. Compared with these methods, patient representation based on summary statistics is a simple concept that is easy to calculate and can improve the interpretability of the results. However, the above studies based on summary statistics do not provide a clear reason why only those statistics were selected. It can be surmised that these choices were subjective and lacked theoretical and experimental support. In addition, relevant research to determine which summary statistics achieve the best performance for physiological time series is lacking. Therefore, the goals of this paper are to discover statistics with important summary performance, thus providing support for such studies, and to improve model prediction performance based on representations built from these summary statistics.
Feature selection methods
Datasets containing massive numbers of features can reduce classification accuracy, raise the computational cost and increase the risk of overfitting [30,31]. Varying-length multivariate time series can be characterized by multiple summary statistics; however, some statistics may contain useless or redundant information, and some features may be coupled. If representative features are not selected, algorithm resources will be consumed without obtaining accurate classification results. Thus, it is beneficial to use feature selection mechanisms not only to identify the most representative features but also to reduce the number of features. To select a suitable combination of important summary statistics, feature selection is critical [32]. Previous works have grouped feature selection methods into three categories: filter methods, wrapper methods and embedded methods. Genetic algorithms are classically used for feature selection and have wide applicability because they overcome the high time complexity of exhaustive methods. Additionally, the genetic algorithm is a combinatorial-optimization approach to feature selection that can fully consider the relationships between features and find the most suitable feature combinations. Many previous works have selected features based on genetic algorithms and achieved satisfactory results. Leardi et al. first proposed that the genetic algorithm can be a valuable tool for solving feature selection problems [33]. Mahdi Mohammadi et al. used a genetic algorithm to identify the most significant features of EEG signals and assess their diagnostic value for depression [34]. Dino et al. combined a genetic algorithm with gene expression data to classify gene expression data in two steps [35]. Lei et al. proposed a new electrocardiograph pattern recognition method by combining a genetic algorithm with a support vector machine [36].
Method
Clinical prediction tasks include mortality, length of hospital stay, and disease prediction. The distribution characteristics of physiological time series are the manifestations of physiological states, including dispersion tendency, central tendency, and distribution shape, and these correspond to multiple statistics. By comparing the effects of different statistical combinations on different prediction tasks, the commonalities and differences of the optimal statistical combinations can be found, which can guide subsequent prediction tasks. The premise for finding the best combination of statistics is global search; however, global search is laborious and difficult in practice. This paper considers a feature selection method based on combinatorial optimization, that is, using the genetic algorithm to find the best combinations of statistics.
Identification of the distribution features of physiological time series
To characterize the time series distribution features of different predictors, it is critical to explore many different aspects of the data distribution. Based on statistical theory and existing research, this paper approximates the original data distribution by analysing the central tendency, the dispersion tendency and the distribution shape of each predictor. The central tendency reflects the representative or central value of the general level of the data and includes statistics such as the mean, median, mode and quantiles. The dispersion tendency reflects how far the data lie from the central value and includes statistics such as the maximum, minimum, standard deviation, coefficient of variation, range and interquartile range. The distribution shape reflects whether the distribution is symmetrical, its degree of skewness and its flatness, and includes statistics such as skewness and kurtosis. Figure 1 shows the temperature fluctuation of a patient within 24 hours of admission to the ICU. The minimum and maximum values reflect the range of the patient's temperature change, that is, how far the data deviate from the central value. The mean value reflects the patient's average temperature over the 24 hours and the degree to which the data aggregate around the central value. The mode reflects the temperature value that appears most frequently within the 24 hours, the median reflects the middle value, and the quantiles reflect values at specific positions. The range and interquartile range reflect the degree of spread across the whole distribution. The variance and standard deviation reflect the dispersion of the temperature distribution and the stability of the temperature data: a larger variance indicates that the patient's temperature fluctuates widely, which may indicate more severe disease. The coefficient of variation also reflects the degree of discreteness of the data. However, the central tendency and the dispersion tendency cannot reflect the ordering of the temperature measurements; therefore, the shape of the distribution should also be considered, as it can reflect the evolution of the disease. Skewness reflects the symmetry of the data distribution, which can generally be understood as the stability of the temperature change; both left and right skewness reflect changes in temperature. Kurtosis reflects the sharpness of the peak of the data distribution and reveals the fluctuation trend and the patient's physiologic state. The summary statistics used in this study include the 13 statistics mentioned above, namely, the minimum (min), maximum (max), mean, standard deviation (std), median, lower quartile (Q1), upper quartile (Q3), mode, range, interquartile range (IQR), coefficient of variation (CV), skewness (skew) and kurtosis (kurt). Following previous works, the first measurement (first) is also added.
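As an illustration (our sketch, not the authors' code), the following Python function maps one variable-length series to these 14 statistics using NumPy and SciPy; the `keepdims` argument of `scipy.stats.mode` assumes SciPy ≥ 1.9.

```python
import numpy as np
from scipy import stats

def summarize(series):
    """Map one variable-length series to the 14 fixed summary statistics."""
    x = np.asarray(series, dtype=float)
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    mode = stats.mode(x, keepdims=False).mode      # most frequent value
    std = x.std(ddof=1)                            # sample standard deviation
    return {
        "min": x.min(), "max": x.max(), "mean": x.mean(), "std": std,
        "median": med, "Q1": q1, "Q3": q3, "mode": mode,
        "range": x.max() - x.min(), "IQR": q3 - q1,
        "CV": std / x.mean(),                      # coefficient of variation
        "skew": stats.skew(x), "kurt": stats.kurtosis(x),
        "first": x[0],                             # first measurement
    }

# Example: six temperature readings from the first 24 hours in the ICU
print(summarize([36.8, 37.4, 38.1, 37.9, 37.2, 36.9]))
```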
Selection of best statistical combination based on the genetic algorithm
To explore the impact of different combinations of statistics on prediction performance and to find the optimal combination, we formalize the problem. Let $V = \{V_1, V_2, \cdots, V_P\}$ represent a collection of $P$ multivariate time series. Series $V_i$ consists of a multidimensional time series of $m$ variables, and the time series of each variable $j$ has $n_j$ observations; for a variable-length time series, $n_j$ may differ for each variable $j$. $V_i$ can be written as follows:

$$V_i = (V_{i1}, V_{i2}, \cdots, V_{im}), \qquad V_{ij} = (V_{ij1}, V_{ij2}, \cdots, V_{ijn_j}) \tag{1}$$

where $V_{ij}$ is the $j$-th component of the $i$-th time series. The univariate time series $V_{ij}$ of different variables have different dimensions (numbers of observations), but every time series can be represented by, and transformed into, $L$ summary statistics extracted from it. In this paper, given the 14 statistics mentioned above, we set $L = 14$.
Multiple clinical predictors with different sampling frequencies are collected from multiple patients in the ICU; thus, $V$ is a set of variable-length multivariate time series. Specifically, in Formula (1), $P$ represents the number of patients; $m$ corresponds to the predictor dimensions, such as heart rate, blood pressure, temperature and other vital signs and laboratory predictors; and $t$ indexes the measurement time points, whose number differs across predictors because of their different sampling frequencies. Thus, $V_{ijt}$ denotes the $t$-th measurement of the $j$-th predictor in the $i$-th patient. Because of the different sampling frequencies of different predictors in different patients, the total lengths of the vectors obtained by concatenating the raw series differ. We can instead summarize the measurements of each variable with a fixed number of statistics and concatenate them to obtain vectors of the same length for all patients. The representation of patient $i$ after extracting the time series features using the $L$ summary statistics can be expressed as follows:

$$R_i = \big(s_1(V_{i1}), \cdots, s_L(V_{i1}), \cdots, s_1(V_{im}), \cdots, s_L(V_{im})\big) \tag{2}$$

where $s_k(\cdot)$ denotes the $k$-th summary statistic. Note that different statistics have specific statistical meanings, and problems such as information overlap may exist among them. Not all statistics may perform well for prediction; thus, using all of them directly to represent a patient increases the modelling complexity and can lead to overfitting. Let the binary variable $x_k$ denote whether statistic $k$ is selected in the best combination, that is,

$$x_k = \begin{cases} 1, & \text{if statistic } k \text{ is selected} \\ 0, & \text{otherwise} \end{cases} \tag{3}$$

Then, the selection vector $X$ of the best combination of statistics can be expressed as

$$X = (x_1, x_2, \cdots, x_L) \tag{4}$$

and thus the representation of patient $i$ after statistical selection can finally be expressed as

$$R_i(X) = \big(s_k(V_{ij}) \mid x_k = 1,\ j = 1, \cdots, m\big) \tag{5}$$

To select the combination of statistics that best reflects the physiological time series, we regard the selection vector $X$ as an unknown parameter and construct an objective function to solve the optimization problem. The optimal objective function can be written as follows:

$$X^{*} = \arg\max_{X}\ E\big(f(R(X)),\ y\big) \tag{6}$$

where $E$ is an evaluation function used to measure the prediction performance; in this study, the area under the receiver operating characteristic curve (AUROC) is chosen. Here, $y_i$ is the true label of patient $i$ in the different prediction tasks, and $f$ is the prediction model, which is the random forest algorithm in this study. Because the objective function in Formula (6) cannot be written in an explicit analytical form, the simplest and most direct way to find the optimal $X$ is a global search strategy, that is, evaluating the prediction performance of all statistical combinations and then selecting the optimal one. However, the time complexity of this method is $O(2^L - 1)$, which has practical limitations. The purpose of this paper is to evaluate which statistical combination is most effective for time series representation; the final result of feature selection is a combination of statistics (such as [minimum, maximum, mean]). The optimal combination can be sought via chromosome coding in a genetic algorithm. The genetic algorithm is a combinatorial optimization algorithm that approximates a global search; it can fully consider the relationships between features and find the most suitable feature combination.
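A minimal sketch of Formulas (2)–(5) follows (illustrative names, not the authors' code): the full statistic matrix is masked by a binary selection vector. The predictor count of 19 is an arbitrary placeholder, not the actual number of predictors in Table 3.

```python
import numpy as np

def select_statistics(R, X, m):
    """Apply the binary selection vector X (Formulas (3)-(5)) to the
    (patients x m*L) statistic matrix R; columns are grouped per predictor."""
    mask = np.tile(np.asarray(X, dtype=bool), m)   # repeat the L-length mask m times
    return R[:, mask]

R = np.random.rand(6927, 19 * 14)                  # placeholder: 19 predictors x L = 14 statistics
X = [1, 1, 1, 1] + [0] * 10                        # keep min, max, mean, std only
R_sel = select_statistics(R, X, m=19)              # shape: (6927, 19 * 4)
```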
The parameter settings in the genetic algorithm are as follows. (1) Coding and decoding: Because the selection vector of summary statistics is binary, we use binary coding, and no decoding process is needed. (2) Population: We set the population size to 20, and the initial population is generated randomly. (3) Fitness function: We select the AUROC as the fitness function to favour feature subsets with better classification performance; the fitness function corresponds to E in Formula (6). (4) Genetic operators: We use the roulette wheel scheme as the selection strategy, single-point crossover with a probability of 0.6 as the crossover strategy and uniform mutation with a probability of 0.1 as the mutation strategy. (5) Termination condition: To detect convergence adaptively during the iteration process, the termination condition combines a maximum number of generations with a stationary fitness value: when the fluctuation range of the fitness value stays below a specified threshold, or the number of generations exceeds the specified maximum, the algorithm terminates.
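For concreteness, these settings translate roughly into the following sketch. This is a hand-rolled binary genetic algorithm under our own naming, not the authors' implementation, and the plateau-based stopping rule is abbreviated to a fixed number of generations; the fitness is a cross-validated AUROC of a random forest on the masked feature columns.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(chrom, R, y, m):
    """Cross-validated AUROC of a random forest on the selected columns (E in Formula (6))."""
    if not chrom.any():                                      # an empty combination is invalid
        return 0.0
    cols = np.tile(chrom.astype(bool), m)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    return cross_val_score(clf, R[:, cols], y, cv=3, scoring="roc_auc").mean()

def ga_select(R, y, m, L=14, pop=20, p_cross=0.6, p_mut=0.1, gens=50):
    P = rng.integers(0, 2, size=(pop, L))                    # random initial population
    for _ in range(gens):
        f = np.array([fitness(c, R, y, m) for c in P])
        parents = P[rng.choice(pop, size=pop, p=f / f.sum())]  # roulette wheel selection
        for i in range(0, pop - 1, 2):                       # single-point crossover
            if rng.random() < p_cross:
                cut = rng.integers(1, L)
                parents[i, cut:], parents[i + 1, cut:] = (
                    parents[i + 1, cut:].copy(), parents[i, cut:].copy())
        mut = rng.random(parents.shape) < p_mut              # uniform bit-flip mutation
        P = np.where(mut, 1 - parents, parents)
    f = np.array([fitness(c, R, y, m) for c in P])
    return P[f.argmax()]                                     # best chromosome found
```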
To avoid optimistically biased performance estimates from conducting feature selection on the full dataset, we follow previous work by Ozcift and Gulten, who embedded a genetic algorithm for feature selection into Bayesian network classifier training using a nested cross-validation approach [37]. The general flow of feature selection with the genetic algorithm is given in Table 2. The genetic-algorithm-based feature selection is embedded in a 5-fold cross-validation. For each fold of test data, a set of summary statistics is obtained by the genetic algorithm; thus, five groups of summary statistics are obtained under 5-fold cross-validation. Then, based on each group of summary statistics, the random forest model is used for prediction, and the mean and standard error of the metrics are taken as the experimental result.
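A compact sketch of this nested procedure (illustrative, reusing `ga_select` from the sketch above): the genetic algorithm sees only the training folds, and each fold's selected combination is scored once on its held-out data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold

def nested_evaluation(R, y, m, L=14):
    """Outer 5-fold CV; GA selection (ga_select above) sees only the training folds."""
    scores, combos = [], []
    for tr, te in StratifiedKFold(5, shuffle=True, random_state=0).split(R, y):
        chrom = ga_select(R[tr], y[tr], m, L)                # selection on training data only
        cols = np.tile(chrom.astype(bool), m)
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(R[tr][:, cols], y[tr])
        prob = clf.predict_proba(R[te][:, cols])[:, 1]
        scores.append(roc_auc_score(y[te], prob))            # held-out AUROC per fold
        combos.append(chrom)
    return np.mean(scores), np.std(scores), combos           # mean, spread, 5 combinations
```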
Experiments and results
We explored the performances of different statistical combinations for different clinical prediction tasks, including patient mortality, length of hospital stay and disease prediction, and obtained the optimal statistical combination based on a genetic algorithm. Then, we analysed the results to find the commonalities and differences of the optimal combinations under different tasks.
Dataset and preprocessing
We used the MIMIC-III dataset, collected from a variety of ICUs between 2001 and 2012 [38]. MIMIC-III is a large, freely available critical care database developed by the Laboratory for Computational Physiology of the Massachusetts Institute of Technology (MIT). The database integrates deidentified, comprehensive, health-related data from 58,976 ICU admissions at the Beth Israel Deaconess Medical Center (BIDMC) in Boston, Massachusetts.
To reflect the universality of the results, we did not restrict the cohort to patients with a specific disease but included all patients. After removing duplicates, we obtained a total of 42,145 admission records; patients less than 15 years of age were excluded. To prevent possible information leakage and to ensure experimental settings similar to related works, we used only the first ICU admission of each patient [39]. In the MIMIC-III database, bedside monitoring data, laboratory test data, input events and output events all consist of time series with time tags. The data for the predictors selected in this paper came from three tables: chartevents, labevents and outputevents. Following related research, we chose the predictors used in SAPS II, as shown in Table 3 [10,17,21]. For each predictor, we used raw data instead of calculated data; for example, we treated GCSVerbal, GCSMotor and GCSEyes from the Glasgow Coma Scale (GCS) as separate features. All the extracted predictors shown in the table came from the first 24 hours after the patient was admitted to the ICU.
Data preprocessing mainly included processing missing, noisy and duplicate values. Missing-value processing was divided into three aspects: patients, predictors and statistics. We eliminated patients missing more than 30% of their data and predictors missing more than 40%. Because the sampling frequency of each predictor differs, and statistics such as std, kurt and skew place requirements on the sampling frequency, some indicators with very low sampling frequencies made those statistics impossible to calculate; we eliminated statistics whose missing-data rate was greater than 20% under these indicators. We then used mean interpolation for the remaining missing values. Abnormal values were processed for each predictor: outliers were identified and handled using box plots combined with the clinical normal range of each predictor. For example, to protect information about surviving patients older than 90 years, the age of these patients is recorded as 300 years in the database; here, we replaced it with the median value. In addition, duplicate records were deleted, and inconsistent units were converted. For interval values, we chose the median value to represent the predictor value at that time point. Ultimately, 6,927 admission records remained after preprocessing. Figure 2 shows the patient cohort selection inclusion criteria and the data extraction process, and Table 4 shows the baseline characteristics and outcome measures of our dataset. The median age of the adult patients was 65 years, and 58.8% of patients were male. In-hospital mortality was approximately 19.5%, and the median length of stay in the ICU was 4.7 days. We did not process non-time series predictors such as age and sex. For the time series predictors, we calculated the 14 statistics, including min, max, mean, std, median, Q1, Q3, mode, range, IQR, CV, skew, kurt and the first measurement, for each predictor from the first 24 hours after admission to the ICU.
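The thresholds described above could be applied as in the following simplified pandas sketch (ours, not the authors' code); it flattens the patient/predictor/statistic rules into a single feature table and intersects the box-plot rule with a clinical normal range for outliers.

```python
import pandas as pd

def clean_features(X: pd.DataFrame) -> pd.DataFrame:
    """X: patients x features table of summary statistics, possibly with NaNs."""
    X = X.loc[X.isna().mean(axis=1) <= 0.30]          # drop patients with >30% missing
    X = X.loc[:, X.isna().mean(axis=0) <= 0.40]       # drop features with >40% missing
    return X.fillna(X.mean())                         # mean imputation for the rest

def clip_outliers(s: pd.Series, lo: float, hi: float) -> pd.Series:
    """Box-plot rule intersected with the clinical normal range [lo, hi]."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    bad = (s < max(q1 - 1.5 * iqr, lo)) | (s > min(q3 + 1.5 * iqr, hi))
    return s.mask(bad, s.median())                    # e.g. age recorded as 300 -> median
```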
Clinical prediction tasks
The clinical prediction tasks selected in the experiment included patient mortality, length of hospital stay, and disease prediction. Mortality is a primary patient outcome and includes short-term, in-hospital and long-term mortality. In the experiment, whether the patient died within 72 hours after entering the ICU was selected as the short-term mortality label, and the 30-day and 1-year mortality rates were used as the long-term mortality labels. The length of hospital stay of an admission is defined as the time interval between admission and discharge; we calculated it in hours for each admission. When a patient is discharged, there will be multiple diagnoses, represented by ICD-9 (International Classification of Diseases, Ninth Revision) codes. Following [10], we divided all the ICD-9 codes into 20 diagnostic groups, each containing similar diseases (e.g., respiratory system diagnoses). Thus, the disease prediction task is transformed into the task of predicting the ICD-9 code groups.
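Turning the grouped ICD-9 codes into a multilabel target might look like the following sketch; `ICD9_TO_GROUP` is a hypothetical stand-in for the 20-group mapping of [10], which we do not reproduce here.

```python
from sklearn.preprocessing import MultiLabelBinarizer

# ICD9_TO_GROUP is a hypothetical stand-in for the 20-group mapping of [10].
ICD9_TO_GROUP = {"4280": 7, "5849": 10, "486": 8}      # e.g. code -> diagnostic group id

admission_codes = [["4280", "5849"], ["486"]]          # discharge diagnoses per admission
labels = [{ICD9_TO_GROUP[c] for c in codes} for codes in admission_codes]
Y = MultiLabelBinarizer(classes=range(20)).fit_transform(labels)   # (n, 20) binary matrix
```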
Experimental design
For the three prediction outcomes, we approximated a global search using a genetic algorithm to obtain the best combination of statistics. To improve the generalizability of the statistical combinations obtained by the genetic algorithm, we embedded the genetic algorithm in a cross-validation procedure, as shown in Table 2. For each data fold, we obtain a set of optimal statistical combinations (i.e., fivefold cross-validation yields 5 sets of statistical combinations). To reduce the effect of randomly partitioning the data during cross-validation, we repeated the entire process five times, selecting different random seeds for dividing the data each time. For the 25 sets of statistical combinations obtained under each prediction task, we compared their prediction performance with the combinations of statistics commonly used in previous studies on the one hand, and conducted an in-depth analysis of these combinations on the other. Then, we constructed two indexes to quantify the importance of different statistics for prediction (see the Discussion section). The most important statistics were found by comparing the commonalities and differences of the optimal combinations of statistics under different prediction tasks. As performance measures, we chose the AUROC and the area under the precision-recall curve (AUPRC) for the classification tasks and the mean squared error (MSE) for the regression task. AUROC and AUPRC evaluate the discrimination ability of the model, namely, the ability to assign higher severity scores to patients who died in the hospital than to those who did not; the higher the AUROC and AUPRC are, the better the model. We calculated the mean and standard error of the AUROC, AUPRC and MSE scores over cross-validation as the final result.
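These metrics are available off the shelf; a minimal sketch with toy values (scikit-learn's `average_precision_score` is used here as an AUPRC estimator):

```python
from sklearn.metrics import roc_auc_score, average_precision_score, mean_squared_error

y_true, y_score = [0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8]   # toy labels and model scores
auroc = roc_auc_score(y_true, y_score)                  # discrimination (ROC)
auprc = average_precision_score(y_true, y_score)        # AUPRC estimate
mse = mean_squared_error([72.0, 110.0], [80.0, 95.0])   # length of stay, in hours
```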
All the experiments in this paper were programmed in Python, using Spyder 3.6 on a PC equipped with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz processor. The iterations of the genetic algorithm were terminated when the fluctuation in the fitness value remained below δ = 10^-3 for 50 consecutive iterations or when the total number of iterations exceeded 200. The crossover probability was set to 0.6, the mutation probability to 0.1, and the population size to 20.
Results
We report the results under different prediction tasks separately. For each prediction task, we list the prediction results based on a single statistic, commonly used combinations of statistics, and the optimal combinations of statistics obtained by the genetic algorithm.
Results of mortality prediction
Patient mortality prediction tasks are divided into short-term, in-hospital, and long-term mortality prediction. Table 5 shows the AUROC and AUPRC of the 14 selected statistics applied separately to the four mortality prediction tasks. When a single statistic is used for mortality prediction, mean, median and Q3 achieve the best results under the different prediction tasks. In other words, statistics that reflect the central tendency of the physiological time series achieve the best and near-best prediction results on the mortality prediction task, whether short- or long-term. In addition, for short-term mortality prediction, the effect of the max statistic, which reflects the dispersion tendency, is also notably strong. This is not difficult to understand: when short-term mortality is predicted from the data of the first 24 hours after ICU admission, the values most related to the predictive label are the degrees of fluctuation of the patient's predictors. If the predictors are relatively stable, the patient's state can also be considered relatively stable; in contrast, large fluctuations indicate an unstable condition, and such patients have a higher mortality rate. For long-term prediction, the average levels of the predictors over a certain stage are closely related to the prediction results over extended periods: if a predictor remains at a consistently abnormal level, the mortality rate over longer time spans is higher.
Table 6 shows the performances of commonly used combinations of statistics for the mortality prediction tasks; compared with single statistics, a combination of dispersion and central tendency statistics is better.
This further demonstrates that for short-term prediction, statistics that reflect the dispersion tendency have a better representation effect and can reveal fluctuations in the patient's physiological state. For longer-term mortality prediction tasks (such as 30-day and 1-year), adding the std statistic enriches the information about physiological time series fluctuations: even knowing the min, max and mean of the physiological time series, it is difficult to capture violent fluctuations in the patient's physiological state. Long-term prediction weakens the time dependence of the prediction; thus, more information must be added to achieve good results. Tables 7, 8, 9 and 10 present the optimal ten combinations of statistics obtained by the genetic algorithm and their performances for short-term, in-hospital and long-term mortality prediction. As shown, the prediction effect of the optimal combinations obtained by the genetic algorithm is rarely weaker than that of the commonly used combinations. As the prediction interval is extended, the prediction performance decreases, which indicates that predicting long-term mortality based only on data collected within 24 hours after the patient enters the ICU is not ideal. For the short-term mortality prediction task, Q1 and Q3 appear more frequently, and statistics that reflect the dispersion tendency, such as min and max, also appear frequently. Skew and kurt, two statistics that describe the shape of the time series distribution and are often ignored, appear quite frequently, reflecting their role in supplementing the other available information. Under the longer-term mortality prediction tasks, mean, Q1 and Q3, which reflect the central tendency, also achieve better results. Combining statistics such as min, max and mean can better characterize the distribution of a physiological time series. In addition, commonly used combinations such as [min, max] and [min, max, mean, std] also achieve good prediction results on both the in-hospital and long-term mortality prediction tasks. In other words, this paper demonstrates experimentally why existing studies chose these particular statistical combinations to represent physiological time series. Table 11 shows the performance of single statistics for length of hospital stay prediction. A certain level of correlation exists between length of hospital stay and mortality: generally, patients with higher mortality risk have more severe symptoms, and consequently their hospital stays are relatively long. Consistent with mortality prediction, range works best among single statistics, while std, CV and IQR, which reflect the dispersion tendency, also perform well. Besides indicating the dispersion tendency, the better-performing statistics also constitute cross features, as in range = max − min; the importance of cross features is therefore self-evident. Table 12 shows the performances of commonly used combinations of statistics for predicting length of hospital stay; [min, max, mean, std] yields the smallest MSE and the best prediction performance. Table 13 shows the optimal ten combinations of statistics obtained by the genetic algorithm and their prediction performances for length of hospital stay prediction.
The effect of the combinations of statistics obtained by the genetic algorithm is superior to that of the common combinations. Range appears in each group, illustrating the validity of this statistic for predicting the length of hospital stay. A larger range indicates an unstable condition, and patients with unstable conditions are naturally hospitalized longer. In contrast, statistics such as the mean, which reflects the central tendency, appear less frequently. When predicting the length of hospital stay, the stability of the patient's condition is the most important factor; thus, statistics that reflect the dispersion tendencies of the time series work better.
Results of disease prediction
We treat disease prediction as a multilabel classification task and calculate the AUROC and AUPRC. Table 14 shows the performances of single statistics for disease prediction. On this task, a comparison of the results shows that the mean, median, Q1, Q3 and other statistics that reflect the central tendency have the best effect, whereas statistics that reflect the dispersion tendency perform less well. The performances of skew and kurt, which reflect the shape of the time series distribution, are the worst. This result shows that if only one statistic is used for patient disease prediction, the shape of the distribution is unimportant; the level of the values matters more.
The corresponding prediction performances of combinations of multiple statistics are shown in Table 15. Among the five commonly used combinations, it is surprising that the single mean statistic works best, even better than combinations of multiple statistics. From the optimal ten combinations obtained by the genetic algorithm shown in Table 16, we can see that the mean statistic appears in almost all the combinations, indicating its core role in disease prediction. Furthermore, min, max and range are evenly distributed among the multiple combinations. We speculate that these statistics provide good auxiliary information for disease prediction, although using them alone does not yield good predictions.
In summary, through the analysis of the prediction performances of different prediction tasks based on single statistics, commonly used combinations of statistics, and approximately optimal combinations of statistics obtained by the genetic algorithm, we discovered many interesting and clinically significant phenomena. We have indirectly demonstrated the rationality of using various combinations of statistics that were applied in previous research. Additionally, we found the statistics that are extremely important in clinical prediction tasks, which can provide guidance for future research.
Discussion
In the experiments, we used a genetic algorithm to obtain combinations with approximately optimal prediction results for the different prediction tasks. Taking 72-hour mortality prediction as an example, the 5-fold cross-validated genetic algorithm was repeated 5 times to obtain 25 groups of combinations. Each group comprises multiple statistics, and the prediction performance varies among the different combinations. Which statistics appear most frequently, and which achieve better prediction results, are meaningful research questions. The previous section gave a rough analysis; here, we quantitatively analyse the frequency of each statistic in the optimal combinations and the mean values of the performance indexes under the different tasks. Since we chose random forest as the classifier in the experiments, it is also necessary to verify the performance of other classifiers based on the obtained statistics, so we discuss this issue as well. Tables 17, 18 and 19 show the results of each statistic for patient mortality, length of hospital stay and disease prediction, respectively. Frequency represents the number of occurrences of a statistic in the 25 combinations, and Mean_AUROC and Mean_AUPRC represent the average AUROC and AUPRC over all the combinations in which the statistic appears.
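The two indexes can be computed directly from the 25 selection vectors and their scores; a small sketch with placeholder data (names are ours):

```python
import numpy as np

combos = np.random.randint(0, 2, size=(25, 14))   # placeholder: 25 binary selection vectors
aurocs = np.random.uniform(0.7, 0.9, size=25)     # placeholder: AUROC of each combination

frequency = combos.sum(axis=0)                    # Frequency: occurrences of each statistic
mean_auroc = [aurocs[combos[:, k] == 1].mean() if frequency[k] else np.nan
              for k in range(14)]                 # Mean_AUROC per statistic
```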
In the mortality prediction task, the statistics with the highest frequency for 72-hour short-term mortality prediction are min, max, Q1 and Q3. The Mean_AUROC and Mean_AUPRC values corresponding to median and Q1 are high, while those for first are low. Statistics that embody the dispersion tendency, such as min and max, play a central role in short-term mortality prediction, while statistics such as first carry less of the patient's physiological status information. For the in-hospital mortality prediction task, min and std occurred most frequently, and min and max achieved the highest Mean_AUROC and Mean_AUPRC, respectively. For the long-term mortality prediction task, min, std and kurt performed best. Kurtosis and skewness have rarely been used in previous studies to measure the shapes of physiological time series distributions; however, the experiments in this paper show that these two statistics provide supplementary information and should not be discarded. Beyond that, we can clearly see that the statistics widely used in previous studies have indeed played a useful role in predicting mortality. When predicting the length of hospital stay, range appears most often, and its effect is the best. In the disease prediction task, std occurs most frequently, but the measures that perform the best are statistics that reflect the central tendency.
To verify whether the combinations of statistics obtained in this paper can also yield good prediction results with other classifiers, we selected logistic regression, SVM and decision tree. We compared the prediction performance of the optimal combinations of statistics and the commonly used combinations under the different prediction tasks across the multiple classifiers. Tables 20, 21, 22 and 23 show the results of the 72-hour, in-hospital, 30-day and 1-year mortality prediction, respectively. Tables 24 and 25 show the results of the length of hospital stay and disease group prediction.
In the mortality prediction task, regardless of short-term, in-hospital or long-term prediction, comparing across classifiers shows that the decision tree has a poor prediction effect. The performance of SVM is similar to that of random forest, but its time complexity is high. Logistic regression usually achieves a higher AUPRC. The time complexity of random forest is low, and it obtains the best prediction effect in most cases compared with the other classifiers; this is why we chose random forest as the classifier for calculating the fitness value in the genetic algorithm. Comparing across representations, patient representation based on the best combination of statistics achieved the best prediction results in most cases compared with the commonly used combinations of statistics.
Single statistics such as mean and first are less effective than combinations of multiple statistics. In the cases where the optimal combination of statistics does not achieve the best effect, the combination [min, max, mean, std] often does. On the one hand, this shows that the statistical combinations obtained with random forest, and the analysis of effective statistics, are also applicable to other classifiers; on the other hand, it reflects the soundness of commonly used combinations such as [min, max, mean, std].
In the length of stay prediction task, the MSE of random forest is much smaller than that of logistic regression. The MSE corresponding to the optimal combination is smaller than that of the commonly used combinations, and much smaller than the MSE corresponding to any single statistic. In the disease prediction task, the optimal combination of statistics performs best only when random forest is used as the classifier; when logistic regression or decision tree is used, the representation based on the single statistic mean performs best. Although the optimal combination does not always achieve the best prediction effect, the random forest results show that the effects of mean and of the optimal combination differ little, which is consistent with the conclusion that the mean statistic plays an important role in disease prediction. In general, the effective statistical combinations obtained with random forest in this paper also achieve good prediction results with other classifiers, showing that the discussion of effective statistics under different prediction tasks generalizes well.
Conclusion
In this paper, we summarized 14 statistics that describe the characteristics of physiological time series, covering three aspects: the central tendency, the dispersion tendency and the distribution shape. We then evaluated the performances of these summary statistics as features for clinical prediction tasks, including patient mortality, length of hospital stay and disease prediction. We performed experiments on patient representations based on both single statistics and commonly used combinations of statistics. To find the combinations of statistics with the best prediction performances under different tasks (given the high time complexity of a global search), we used a genetic algorithm embedded in cross-validation to obtain combinations with approximately optimal performances. A quantitative analysis was performed on each statistic in the optimal combinations. Through in-depth analysis of the experimental results, we reached the following conclusions: (1) As the prediction horizon becomes longer, the prediction performance becomes increasingly worse.
Using data acquired only within 24 hours after the patient entered the ICU is insufficient to make reasonable long-term mortality predictions.
(2) Statistics that reflect centralized trends, such as mean and median, play an important role in almost all mortality prediction tasks.
(3) For short-term mortality prediction, statistics that reflect the dispersion tendency, such as min and max, are also representative; cross features such as range may contain more information. (4) For the length of hospital stay prediction task, statistics that reflect the dispersion tendency perform better. The length of hospital stay is closely related to the stability of the patient's physiological state: unstable patients have a higher probability of staying longer. (5) For the disease prediction task, statistics that reflect the central tendency, such as the mean, contribute more to the prediction result. The mean represents the average level of the different predictors, which is significantly correlated with judgements concerning whether the patient's condition is due to a specific disease. (6) Commonly used combinations of statistics such as [min, max, mean] and [min, max, mean, std] achieve good prediction results in most cases; thus, these experiments help to verify the rationality of previous research. (7) Skew and kurt, which reflect the shape of a distribution, perform poorly when used individually as features for prediction, but they appear frequently in the optimal combinations, indicating that they can play a role as supplementary information.
Although we evaluated the effects of statistics of physiological time series under different prediction tasks, some limitations remain. This paper considers the central tendency, dispersion tendency and distribution shape when choosing statistical features but does not fully consider latent characteristics such as periodicity. Moreover, due to the limited sampling frequencies of some clinical predictors, the analysis of kurt and skew, which describe the shape of a distribution, was insufficient. Furthermore, the experiments were applied only to patient mortality, length of hospital stay and disease prediction; research on other clinical tasks remains to be performed. In future work, we plan to address these deficiencies and design more suitable patient representation methods and models to improve clinical prediction.
Editorial: Psychosocial Interventions for Suicide Prevention
Jorge Lopez-Castroman, Raffaella Calati
Psychosocial approaches to the understanding and prevention of suicidal behavior have been called for since the pioneering work of Emile Durkheim (nineteenth century), who studied suicide to demonstrate how social factors play a key role in the genesis of human behavior. Many years passed before his ideas took hold, but in the last few decades initiatives to reduce the rates of suicidal behavior have arisen throughout the world. This Research Topic was conceived to provide a picture of current psychosocial approaches to preventing suicidal behavior. Our aim was attained thanks to a variety of remarkable contributions (11 articles), ranging from suicide survivors to social media coverage and medical training.
To start off, Cramer and Kapusta point out the necessity of articulating a multi-level approach to suicide prevention. The Social-Ecological Suicide Prevention Model contemplates risk and protective factors at four different levels: individual, interpersonal, community, and societal. The authors posit that such a model would help to build an integrative theory of suicidal behavior and rationalize the application of suicide prevention policies. Their conclusions are linked to the perspective paper by O'Connor and Portzky, which collected expert opinions to identify the key challenges and developments in suicide research and prevention. Far from being complacent, it discusses numerous major challenges. One of them concerns prevention through the web and social media, which is treated very practically by Notredame et al. in another paper. Building on recent evidence, that paper outlines a "digitally augmented prevention policy" and describes three main functions of preventive actions through the web and social media: gatekeeping, communication outreach, and intervention outreach. Of course, applying such a model will imply a multidisciplinary effort and changes in our usual practice.
Some papers focus on populations surrounding suicidal persons that deserve attention. This is especially the case for suicide survivors, who have rarely been described in detail despite their exposure both to a suicide model and to unusual levels of complicated grief. Bellini et al. provide a rare and detailed account of the mental health state of several suicide survivors participating in psychological autopsies. Because they might become suicide survivors, caregivers also need attention as important collaborators who provide support in many different formats and are able to detect the transition from suicidal ideation to behavioral enaction. As pointed out by Le Moal et al., few studies have focused on caregivers to improve suicide prevention. Medical students constitute another important population given their role in the community as future providers of care and access to care. Recent reports from different countries confirm that medical studies are commonly associated with elevated levels of stress, depressive symptoms and suicidal ideation. Gramaglia and Zeppegno discuss how medical training can enhance self-recognition of mental distress, reduce barriers to treatment-seeking and increase favorable attitudes toward suicidal behavior through personal experience. Finally, Brown et al. studied the effect of a gatekeeper training workshop on the attitudes, skill confidence and knowledge of a large sample of school staff. Although participation in this day-and-a-half workshop induced a change in negative attitudes toward suicidal behaviors, the change did not last long. Participants, especially teachers, nonetheless felt more able to deal with suicidality even 6 months after the workshop.
Méndez-Bustos et al. undertook a systematic review of observational papers investigating psychotherapeutic interventions to reduce suicide risk. These papers were not considered in prior meta-analytic studies, and they provide interesting insights given the problematic adherence of suicidal patients to randomized controlled trials. The most relevant conclusions involve the need to increase the quality and exhaustiveness of reports in this domain and the interest of group and web therapies. The second systematic review and meta-analysis found that adolescents with self-harm were more engaged with treatment when they received any psychological therapy compared with treatment as usual (Yuan et al.). Treatment engagement was defined as attending four or more sessions, and treatment as usual involved typical follow-up appointments with no structured therapy. This finding updates a previous meta-analysis that found no difference in engagement between the same groups and supports the interest of structured psychological treatments in this population, since non-adherence is rather the rule than the exception.
Interesting aspects of psychosocial interventions for suicide prevention are treated in mini-reviews. Prada et al. didactically describe the components of dialectical behavioral therapy (DBT) and how each of them has a precise effect in reducing self-harm and suicide attempts among patients with borderline personality disorder. The study of each module of a multicomponent therapy such as DBT is the key to optimizing future interventions. Since a one-size-fits-all treatment does not seem sufficiently effective in suicide prevention, we might need to design personalized treatments selecting the most suitable components. The second mini-review (Zeppegno et al.) contemplates psychosocial interventions for older adults. Not only is the population of older adults increasing very fast in developed countries, but older adults are also more exposed than any other age group to the risk of suicide and more vulnerable to pharmacological treatments. However, only a few studies have specifically addressed psychosocial interventions for this age group.
Following the lines traced by the contributions to the Research Topic, we can conclude that psychosocial interventions are effective, but important challenges lie ahead. First, personalization of treatment approaches depending on age, gender, or psychopathology is needed. Focusing future studies on the components of psychosocial interventions would help determine which treatment program works best depending on the set of symptoms or the characteristics of a person. Finally, the advancement of the field requires innovation, notably through new computational methods (e.g., machine learning to predict treatment response) and new technologies (e.g., web-based or mobile phone-based tools), to encompass different levels of risk factors extending from society to the individual in danger. In that quest, the role of the social environment that surrounds a suicidal person is not to be forgotten.
AUTHOR CONTRIBUTIONS
JL-C wrote the manuscript. JL-C and RC edited and revised the manuscript.
Clinical Consensus on Diagnosis and Treatment of Patients with Chronic Exertional Compartment Syndrome of the Leg: A Delphi Analysis
Aim Defining universally accepted guidelines for the diagnosis and treatment of chronic exertional compartment syndrome (CECS) is hampered by the absence of high-quality scientific research. The aim of this Delphi study was to establish consensus on practical issues guiding the diagnosis and treatment of CECS of the leg in civilian and military patient populations. Methods An international expert group was queried using the Delphi technique with a traditional three-round electronic consultation. Results of previous rounds were anonymously disclosed in the questionnaires of rounds 2 and 3, where relevant. Consensus was defined as > 70% positive or negative agreement for a question or statement. Results The panel consisted of 27 civilian and military healthcare providers. Consensus was reached on five essential key characteristics of lower leg CECS. The panel achieved partial agreement regarding standardization of the diagnostic protocol, including muscle tissue pressure measurements. Consensus was reached on conservative and surgical treatment regimens. However, the experts did not attain consensus on their approach to postoperative rehabilitation or their preferred treatment approach for recurrent or residual disease. A summary of best clinical practice for the diagnosis and management of CECS was formulated by experts working in civilian and military healthcare facilities. Conclusion The Delphi panel reached consensus on key criteria for signs and symptoms of CECS and several aspects of conservative and surgical treatment. The panel did not agree on the role of ICP values in the diagnostic process, a postoperative rehabilitation protocol, or the preferred treatment approach for recurrent or residual disease. These aspects serve as a first attempt to initiate simple guidelines for clinical practice.
Introduction
The chronic exertional compartment syndrome (CECS) is considered a disabling overuse injury that may occur in myofascial compartments of lower or upper extremities of active individuals, athletes, or military service members. This syndrome manifests itself upon the performance of repetitive movements and is usually reported as an exercise-related muscular pain, causing a sensation of pressure or tightness that lessens after cessation of provocative activities. These symptoms are thought to result from a reversible elevation in intracompartmental pressures (ICPs), secondary to a mismatch between expansion of muscular tissue within a relatively noncompliant fascia [1]. However, there is increasing evidence that CECS is a multifactorial problem and more than just a temporary elevation of ICP [2]. CECS is mostly encountered in the anterior compartment of the leg, but reliable incidence or prevalence rates are not available [3,4].

Key Points

Defining universally accepted guidelines for the diagnosis and treatment of chronic exertional compartment syndrome (CECS) of the leg is hampered by the absence of high-quality scientific research.

A comprehensive overview of current state-of-the-art opinions of a large group of professionals who are deeply involved in the daily care and treatment of CECS patients is provided.

The outcome of this Delphi analysis on CECS may serve as a platform to initiate simple guidelines for clinical practice.

The diagnostic pathway of CECS starts with a suspicion that is raised by a suggestive patient's history in combination with painful palpation of affected muscles, ideally immediately after symptom provocation. In clinical practice, an invasive needle or catheter manometry may be used to confirm the presence of CECS [3]. These ICP measurements yield absolute pressure values before, during, and/or after exercise, but diagnostic cut-off criteria differ substantially [2, 5-8]. Moreover, the execution of an ICP measurement suffers from interobserver variability in the absence of a standardized test protocol [9,10]. In addition, the invasive nature of these measurements carries a risk of incorrect needle placement, hematoma formation, and nerve damage [11,12]. Alternative diagnostic tests are currently not widely used in the diagnostic work-up, because the evidence for their use is of low quality or their practicality for clinical use is low [13]. Conventional radiographs and MRI scans may be used to exclude alternative diagnoses [3].

The natural course of CECS was shown to cause persistent symptoms over time [14]. Traditionally, management of CECS starts with conservative measures. Gait retraining [15-17] and botulinum injections [18] may have positive outcomes. If conservative interventions fail or if a patient experiences severe symptoms, surgical treatment is considered. Fasciotomy is the described surgical intervention, opening the fascia enveloping the affected muscle with an open, a minimally invasive, or an endoscopic technique [4]. Moreover, partial removal of fascia (fasciectomy) is used in some military populations [19] and is advised in cases of residual or recurrent disease [20]. However, a clear treatment algorithm and clinical guideline are not available, whereas presentation of treatment outcomes in the scientific literature is far from standardized [21].

Defining universally accepted guidelines for the diagnosis and treatment of CECS of the leg appears to be hampered by the absence of robust empirical evidence. Randomized controlled trials are currently not available, and the level of evidence is often limited to level 3 or 4 [21]. The aim of this Delphi study was to establish consensus on practical issues guiding the diagnosis and treatment of CECS in civilian and military patient populations.

Study Design

Opinions of an international expert group were collected using the Delphi technique as initially developed by Dalkey and Helmer in 1963 [22-26]. The consensus process was conducted as a traditional three-round electronic consultation (Linstone 1978; a literature search supported statement development) [27] between May 2020 and July 2021. Statements were attained regarding the diagnosis and management of CECS of the leg in both (recreational) athletes and military service members. An online survey platform was used (SurveyMonkey®, Momentive Inc., San Mateo, CA, USA), and individualized links were sent to all participants. Anonymity was warranted for all rounds. Consensus was defined prior to the investigation and fixed at a response rate of > 70% of the panel and a positive or negative agreement of > 70% for a question or a statement [26,27].

Panel Selection

Panel participants were considered experts on CECS of the leg by the authors on the basis of their scientific track record. Sports medicine physicians, surgeons, clinical investigators, and physiotherapists actively treating patients with exercise-related leg pain in civilian and/or military patient populations were considered eligible members for an international study group. A portion of the panel members was suggested by the co-authors based on these criteria. Additionally, all invited panelists were asked to suggest potential panel members as well. Upon invitation, a clear explanation of the objectives of the study and specific instructions for member participation were provided, and consent was obtained. All panel members were asked to confirm their expertise with respect to the diagnosis and management of CECS, as well as to estimate their annual case load of ICP measurements and/or surgeries.
Search Strategy, Data Extraction, and Statement Development
The literature search was conducted in PubMed, EMBASE, Web of Science, Cochrane, CENTRAL, and Emcare using the keywords "chronic exertional compartment syndrome," "anterior compartment," "posterior compartment," "peroneal compartment," "exertional leg pain," "medial tibial pain," "overuse injuries," "diagnosis," "therapy," "surgical treatment," and "conservative treatment." All related MeSH terms, synonyms and plurals were entered. Studies published between 1 January 1970 and 1 May 2020 were eligible. In addition, relevant publications identified outside this strategy were added manually, based upon recommendations by co-authors. A core group with the study's facilitator (SV) and primary researchers was formed. The facilitator screened all titles and abstracts for relevance. Articles were included if they defined, described, or recommended appropriate clinical information related to CECS of the leg, including (1) history-taking questions that aided in the differential diagnosis of exercise-related leg pain; (2) physical examination and special tests; (3) indications for and types of diagnostic investigations; and (4) treatment interventions. All members of the core group independently performed a second screening to verify the completeness of the initial list. Additional relevant publications were provided by the core group. Full texts of studies were retrieved and reviewed for eligibility.
After extensive review of full-text publications, a structured questionnaire for the first Delphi round was developed by the study's facilitator and a member of the expert panel. Relevant questions were formulated from included studies and covered best practices on how to diagnose and manage patients presenting with possible CECS of the leg. The list of questions was evaluated by the core group independently. Each member grouped questions according to themes to develop the final list of questions. Differences were resolved by discussion. All items and questionnaires were tested by all co-authors to identify ambiguities and to improve on feasibility of administration [23].
Round 1
The first questionnaire consisted of 28 multiple choice questions. Questions covering a specific statement were answered using an ordinal 5-point Likert scale (strongly agree, agree, neither agree nor disagree, disagree, strongly disagree). Questions with nominal answer categories were provided with the opportunity for comments and suggestions in an open text box. Response frequencies for each item were calculated and entered anonymously into a database by the study's facilitator. Ordinal or nominal answer categories with > 70% agreement from the panel were accepted and omitted from the development of questions or statements for the subsequent rounds. Statements not meeting a 70% agreement were modified according to feedback provided by the expert panel and redistributed to the panel members for round 2.
Round 2
A second questionnaire containing 22 multiple choice questions was created with the purpose to further specify the answers of the panel members. Experts were now asked to judge statements with "agree" or "disagree," or to answer questions with fewer possible nominal answer categories. Throughout this round, all panel members were informed on the summarized scores and comments that were obtained in round 1. Thus, panel members could reflect upon the group results and adjust their opinion, while preserving the anonymity of their responses. Questions and statements not reaching consensus were retained for discussion in round 3.
Round 3
This final questionnaire consisted of 11 questions. As only a small increase in degree of consensus was expected during this stage of the survey, four open questions were implemented to clarify dispersion in previous answers. The panel members were asked to revise previous answers or to specify reasons for remaining outside of the consensus. Non-responders in rounds 1 and 2 were excluded from this final round.
Statistical Analysis
Statistical analysis was executed using SPSS statistics (v26, IBM Corporation, Armonk, NY, USA). All the panel members answers were registered in an electronic data file provided by the online survey platform. Descriptive statistics were used to present the data by frequencies (percentage). A sub-analysis comparing non-surgical with surgical panel members was performed for the questions regarding recurrent or residual disease, using Fisher's exact test. For this analysis, p values (two-sided) ≤ 0.05 were considered significant.
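Although the study used SPSS, the same sub-analysis can in principle be reproduced with SciPy; a sketch with invented counts for illustration:

```python
from scipy.stats import fisher_exact

# Hypothetical 2x2 table: rows = non-surgical vs surgical panel members,
# columns = agree vs disagree on a recurrent/residual-disease statement.
table = [[9, 3],
         [4, 8]]
odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
significant = p_value <= 0.05        # significance threshold used in the study
```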
Delphi Panel Members
A total of 40 experts were identified and invited by email to join as panel members. Seven potential panel members did not participate due to technical reasons (18%; digital invitations bounced n = 3, remained unopened n = 4). Another six candidates did not consider themselves experts. Therefore, the international Delphi consensus process was performed with a panel of 27 civilian and military care providers (sports medicine physician n = 12, surgeon n = 12, physiotherapist n = 2, and clinical investigator n = 1; Table 1). The first round was completed by all panel members, whereas rounds 2 and 3 were completed by 24 members (89%). The majority of the panel members were involved with civilian patient populations (89%) and were located in Europe (67%; Table 1).
Patient's History
The panel members agreed that both pain (25/27; 93%) and tightness (24/27; 89%) during exercise are essential clues in the history of patients suspected of CECS. Additionally, the specific location of symptoms (27/27; 100%), activity modification (22/27; 81%), and the type of provocative activity (20/27; 74%) were considered conditional aspects of a patient's history. The provocative activity was further specified in the second round as "involvement in sports or activities that require repetitive activation of the same muscles," to which all panel members agreed as conditional (24/24; 100%). Cramping, weakness, and paresthesia during exercise were not considered essential symptoms (21/24; 88%). In addition, duration of symptoms in the leg was not considered conditional for the diagnosis (21/24; 88%).
Physical Examination
Initially, no signs upon physical examination were considered essential for diagnosis, although "pain induced by treadmill provocation" was close to consensus (18/27; 67%). Following suggestions from an open text box, this statement was changed to "symptoms induced by provocative activities" for the second round. This change in wording was considered essential by 82% (22/24) of the panel. Signs such as tenderness upon palpation of the symptomatic compartment (10/27; 37%) or the presence of a muscle herniation (6/27; 22%) were not considered essential.
The panel did not reach consensus on any system used for the scoring of symptoms during physical examination or symptom provocation. Approximately half of the panel preferred a Visual Analogue Scale (14/24; 52%) or a Numeric Rating Scale (13/24; 48%). Moreover, the proposal to use either a Visual Analogue Scale or a Numeric Rating Scale in the absence of a standardized scoring system for CECS symptoms did not reach consensus either (18/24; 67%).
Intracompartmental Pressure Measurements
The panel agreed that an ICP measurement is conditional for the diagnosis. It was agreed that the ICP value obtained 1 min after a provocative exercise is most meaningful and therefore preferred (20/24; 80%). The panel also thought that ICP measurements ought to be performed in the symptomatic compartments of both legs (17/24; 71%). When the deep posterior compartment is measured, a medial approach is preferred (21/24; 87%), and correct tip placement using ultrasound is required (17/24; 71%).
The panel did not achieve consensus with respect to leg and patient positioning. In the first round, supine positioning was most popular (19/27; 71%), although panel members also preferred a standing (5/27; 19%) or a sitting position (7/27; 26%). In the second round this question was rephrased as "are ICP measurements best performed supine compared to sitting or standing." This statement was agreed upon by 16 of the 24 panel members (67%). Reasons to perform an ICP measurement in a standing position included: (1) a supine position would falsely decrease the ICP by counteracting the effect of gravity, (2) a standing position is similar to the provocative situation, or (3) local diagnostic criteria were developed using a standing position.
Another aspect without consensus was the ICP cut-off value. The majority of panel members either used the Pedowitz criteria [8] or part of these (13/24; 54%). Nine panel members (38%) indicated they were using locally established cut-off values, or ICP values different from the Pedowitz criteria [2, 5, 7].
Alternative Diagnostic Tests
Aside from ICP measurements, the panel considered "symptom provocation by use of a treadmill test and repeat physical examination" a conditional test for the diagnosis (21/27; 78%). The majority of the panel did not use additional diagnostic examinations (e.g., conventional radiography, computerized tomography scans, or magnetic resonance imaging) to confirm the presence of CECS (23/24; 96%).
Diagnosis of Chronic Exertional Compartment Syndrome
Panel members reached consensus that signs and symptoms are the essential aspects of the diagnostic work-up (23/26; 88%). Based on the questions covering the diagnostic work-up in rounds 1 and 2 of the Delphi procedure, five key criteria were formulated (Textbox 2). All panel members considered these criteria key to the clinical diagnosis of CECS (24/24; 100%). In addition, six members (25%) suggested adding the reversible or transient character of CECS symptoms to this list. More specifically, they claimed that symptoms resolved within minutes of rest, rather than hours or days. After introduction of the key criteria in round 3, ICP measurements were still regarded essential by 38% of the panel (9/24). Three of these panel members indicated that if more evidence was available regarding the diagnostic accuracy (i.e., sensitivity and specificity) of these criteria, the measurement of ICPs might become non-essential.
Fig. 1 Consensus was attained regarding a minimally invasive fasciotomy for CECS of the lower leg anterior compartment, including a single 2-cm longitudinal skin incision for introduction of the fasciotome (see also Textbox 3)
TEXTBOX 2
The following items were agreed on and considered key criteria for the diagnosis of chronic exertional compartment syndrome:
1. The patient is involved in sports or activities that require repetitive activation of the same muscles.
2. The patient reports experiencing pain during exercise.
3. The patient reports experiencing tightness during exercise.
4. The patient indicates prematurely ceasing or avoiding specific sports or activities due to pain.
5. Signs and symptoms can be induced by provocative and repetitive activities during physical examination.
Surgical Treatment
The surgeons in the panel agreed that a fasciotomy of the anterior compartment can be performed safely and effectively using the entrance of a single small incision, 1-2 cm laterally from the tibial crest, at the distal two-thirds of the lower leg (10/12; 83%; Fig. 1). The surgical panel did not agree on the type of fasciotome required for the minimally invasive fasciotomy (Textbox 3). Additionally, they agreed that a fasciotomy of the lateral compartment can be performed safely and effectively using two incisions, each 5-8 cm in length, located in between the fibular head and the lateral malleolus (11/12; 92%). In case both compartments require opening during the same session, 83% (10/12) agreed this can be done safely and effectively using a single 4- to 6-cm incision at the intermuscular septum between the anterior and lateral compartments. No consensus was reached on whether or not compartments other than the affected ones require preventive surgery [for instance, the lateral compartment in case only the anterior compartment was affected (4/12; 33%), or vice versa (5/12; 42%)].
Visualization and localization of the superficial peroneal nerve during fasciotomy of the anterior compartment remains a topic of discussion, as only 42% (5/12; round 1) and 67% (8/12; round 2) of the surgeons considered this necessary. Moreover, ultrasound was not considered an effective tool for visualization of the superficial peroneal nerve (5/12; 42%).
No consensus was reached on the instruments that should be used during surgery. An overview of suggested instruments can be found in Textbox 3.
Postoperative Rehabilitation
Consensus was reached on the statement that a standardized institutional rehabilitation protocol should be used postoperatively (21/24; 88%), not necessarily adjusted for the compartment requiring the fasciotomy (17/24; 71%). However, the panel members could not agree on the type of restrictions in such a protocol. Permissive weight bearing was allowed according to 67% (16/24), whereas 33% (8/24) favored unrestricted weight bearing. In the first 2 weeks, sports participation is not allowed by 46% (11/24), but permitted if not painful by 50% (12/24).
Recurrent Disease
No consensus was achieved on management of recurrent or residual disease after surgical treatment. In case of recurrent CECS, panel members considered the following actions:
Discussion
This three-round Delphi analysis established consensus on several aspects of clinical practice guidelines for the diagnosis and treatment of lower leg CECS in both civilian and military patient populations. A panel of 27 international expert members reached consensus on five key sign and symptom criteria of CECS. Cessation of provocative activities and gait retraining were considered valuable components of a conservative treatment program. All surgeons on the panel agreed on the size and location of the incision for the fasciotomy. No consensus was reached on the diagnostic role of intracompartmental pressure (ICP) measurements. Additionally, opinions on the postoperative rehabilitation protocol and the treatment of recurrent or residual CECS varied considerably, although there was consensus that these aspects should be standardized.
Current diagnosis and management of patients with CECS occur in the absence of universally accepted guidelines and standardized protocols. Earlier literature already identified a large heterogeneity in evidence on the diagnosis [5,6] and treatment [21,29] of CECS. The current study is a first attempt, based on expert evidence, to initiate simple guidelines for clinical practice. The panel members are considered leading figures in the field of exercise-related leg pain and declared ample work experience with CECS. The international character of the panel lends credit to the statements on which consensus was achieved.
The most prominent result of the current study was the consensus attained on the five proposed key criteria summarizing crucial signs and symptoms of CECS (Textbox 2). Several studies have attempted to identify prognostic clinical factors reflecting the diagnosis of CECS. Examples are post-effort muscle hardness and the presence of a fascial hernia [30,31]. Participation in running or skating, pain recurrence upon performing the same exercise, and the absence of pain at rest have also been mentioned [31]. The panel members of the current study agreed that CECS was considered likely if the patient (1) is involved in activities that require repetitive activation of the same muscle(s), (2) reports pain during exercise, (3) reports tightness during exercise, (4) prematurely ceases or avoids specific activities, and (5) can induce symptoms by performing provocative activities during physical examination. In contrast to the previously published literature, the panel members did not reach consensus on tenderness upon palpation over the affected compartment or the presence of a muscle herniation (37% and 22%, respectively). The expert panel also indicated that, aside from the ability to induce symptoms by provocative activities, no other signs were considered essential for the diagnosis of CECS. Future evaluation of the currently proposed key criteria, in combination with previous literature, can aid in the development of a proficient screening tool for daily clinical practice.
Panel members did not reach consensus on the role and execution of ICP measurements in the diagnostic process of CECS. This disagreement could reflect the ongoing debate in the current literature, in which the value of test protocols and cut-off values is repeatedly questioned [5,6,9,32]. The panel achieved consensus on the preferred timing of ICP measurement (1 min after completion of a symptom provocation test), the number of legs and compartments (bilateral, only affected compartments), and the route of catheter administration in a suspected deep posterior compartment (medial approach). Yet, the ongoing lack of consensus on cut-off values and patient positioning is worrisome. However, several members of the panel reported that ICP values could be considered complementary or non-essential once the anamnestic key criteria strongly suggest CECS. If the proposed set of specific key criteria is validated or alternative noninvasive tests become available, invasive ICP measurements may become obsolete.
The postoperative rehabilitation protocol and the treatment of recurrent (or residual) disease necessitate future research. The lack of agreement amongst rehabilitation protocols was already identified in previous literature [3]. For example, surgeons encourage early mobilization of the limb in an attempt to reduce scar tissue formation [33]. However, the timing of mobilization or initiation of exercises varies in the current literature from directly after surgery to somewhere in the first 7 days [34][35][36][37][38][39]. Also, the use of analgesics is not uniformly prescribed in these studies. This lack of agreement was confirmed by the current findings, as no consensus could be reached on weight bearing and sports participation after surgical treatment. The panel, however, was not specifically queried on the encouragement of early mobilization or the use of analgesics throughout rehabilitation. In addition, the experts did not agree on the most appropriate approach for recurrent or residual disease after surgical treatment. Interestingly, this variation in answers was present amongst both non-surgical and surgical physicians. The variation in guidelines in the current literature, in combination with the absence of consensus in the current study, indicates that future research should also focus on these specific issues of postoperative rehabilitation.
The current study has limitations that characterize every Delphi analysis. First, selection of the Delphi panel members was dependent on the scientific network and subjective judgment of the authors. Bias might have been introduced by the exclusion of potential panel members due to technical reasons (18%). Second, although this analysis method is an accepted methodology for gaining level 5 evidence, the present findings require confirmation in future research. The technique is based on expert opinions and does not replace scientific reports with original research. Also, the currently used 70% threshold level of consensus is debatable, as a scientific rationale is not available [27]. Nevertheless, the present study does provide a comprehensive overview of current state-of-the-art opinions of a large group of professionals who are deeply involved in the daily care and treatment of CECS patients.
Conclusion
An international multidisciplinary expert panel reached consensus on five key characteristics for the diagnosis of CECS in the leg. The panel achieved partial agreement on statements regarding ICP measurements and conservative and surgical treatment of CECS. No consensus was reached with respect to postoperative rehabilitation guidelines, nor the preferred treatment for recurrent or residual disease.
Conformally Schwarzschild cosmological black holes
We thoroughly investigate conformally Schwarzschild spacetimes in different coordinate systems to seek for physically reasonable models of a cosmological black hole. We assume that a conformal factor depends only on the time coordinate and that the spacetime is an asymptotically flat Friedmann-Lemaître-Robertson-Walker universe filled by a perfect fluid obeying a linear equation of state $p = w\rho$ with $w > -1/3$. In this class of spacetimes, the McClure-Dyer spacetime, constructed in terms of the isotropic coordinates, and the Thakurta spacetime, constructed in terms of the standard Schwarzschild coordinates, are identical and do not describe a cosmological black hole. In contrast, the Sultana-Dyer and Culetu classes of spacetimes, constructed in terms of the Kerr-Schild and Painlevé-Gullstrand coordinates, respectively, describe a cosmological black hole. In the Sultana-Dyer case, the corresponding matter field in general relativity can be interpreted as a combination of a homogeneous perfect fluid and an inhomogeneous null fluid, which is valid everywhere in the spacetime unlike Sultana and Dyer's interpretation. In the Culetu case, the matter field can be interpreted as a combination of a homogeneous perfect fluid and an inhomogeneous anisotropic fluid. However, in both cases, the total energy-momentum tensor violates all the standard energy conditions at a finite value of the radial coordinate in late times. As a consequence, the Sultana-Dyer and Culetu black holes for $-1/3
Introduction
A sufficiently isolated black hole in the universe should be well approximated by an asymptotically flat and stationary black-hole solution. By the black-hole uniqueness theorem, it is known that the only asymptotically flat and stationary black hole with a regular and simply-connected event horizon in the Einstein-Maxwell system is the Kerr-Newman black hole. (See [1] for a review.) The discoveries of the black-hole thermodynamics [2] and the Hawking radiation [3] are major milestones in gravitational physics based on the uniqueness theorem, which means that one can understand quite general properties of an asymptotically flat and stationary black hole only through the study of the Kerr-Newman black hole.
However, the assumption of stationarity is not justified for dynamical black holes growing rapidly by absorbing the surrounding matter. Moreover, when the radii of the event horizon and the cosmological horizon are relatively close, as in the case of primordial black holes just formed in the early universe, the assumption of asymptotic flatness is not justified either. Such a cosmological black hole must be modeled by a dynamical black-hole solution which is asymptotic to an expanding universe. (See [4,5] for reviews.) The Schwarzschild-de Sitter solution is surely the best known model of such a cosmological black hole, which is asymptotic to the de Sitter universe. Another example is the Einstein-Straus model [6,7] (often discussed in the context of the "Swiss-cheese" model), which connects the Friedmann-Lemaître-Robertson-Walker (FLRW) solution with a dust fluid at a timelike hypersurface to the interior Schwarzschild spacetime. Meanwhile, McVittie's spherically symmetric and asymptotically FLRW perfect-fluid solution [8] had been a candidate to describe a cosmological black hole after Nolan's study in his series of papers [9][10][11]. Finally, in 2012, Kaloper et al. showed that the maximally extended McVittie spacetime describes a cosmological black hole in the case where the scale factor obeys $a(t) \propto \exp(H_0t)$ as $t \to \infty$ with a positive constant $H_0$ [12]. (See also [13].) In addition to these solutions, several conformally Schwarzschild spacetimes have been proposed as cosmological black-hole models. Thakurta proposed already in 1981 a conformally Schwarzschild spacetime in terms of the standard Schwarzschild coordinates with a conformal factor that depends only on the coordinate t [14]. However, it has been shown that this Thakurta spacetime has a curvature singularity at r = 2M and does not describe a cosmological black hole [15,16]. In 2005, Sultana and Dyer successfully constructed a black-hole spacetime in terms of the Kerr-Schild coordinates which is asymptotic to the flat FLRW universe filled with a dust fluid [17]. They showed that the corresponding energy-momentum tensor can be interpreted as a combination of a dust fluid and a null dust. But unfortunately, their interpretation of the matter is not valid in the whole spacetime because there is a region where the tangent vectors of the orbits of the dust fluid and the null dust become complex. Subsequently, McClure and Dyer investigated a conformally Schwarzschild spacetime constructed with the isotropic coordinates [18]. However, this spacetime can be transformed into the Thakurta spacetime by a coordinate transformation, as pointed out in [19], and therefore it does not describe a cosmological black hole either. In 2012, Culetu studied a conformally Schwarzschild spacetime constructed with the Painlevé-Gullstrand coordinates and found that it can be a model of a cosmological black hole and that the corresponding matter field is an anisotropic fluid [20].
A cosmological black-hole solution with physically reasonable matter fields would be a useful model in the study of primordial black holes, which are important candidates for dark matter. The effect of the cosmic expansion may change the properties of the black hole from the stationary case, and consequently affect the analysis of primordial black holes in cosmology. In fact, the black-hole thermodynamics and the Hawking radiation of a cosmological black hole are highly non-trivial problems. Conformally Schwarzschild cosmological black holes are quite useful in this context because the event horizon can be a conformal Killing horizon as well. Jacobson and Kang defined the temperature $T_{\rm JKSD}$ of a conformal Killing horizon based on the argument that the temperature of a black hole should be conformally invariant, because the Hawking radiations of a conformally coupled scalar field are identical from a Schwarzschild black hole and its conformal cousins sharing the same event horizon [21]. Sultana and Dyer also arrived at the same definition of temperature via a different approach [22]. The effect of the cosmic expansion on the Hawking radiation has been analyzed using the Einstein-Straus model as a model without matter accretion and the Sultana-Dyer black hole as a model with accretion [23]. Contrary to claims in [21,22], the effective temperature of the Sultana-Dyer black hole evaluated from the Hawking radiation is time-dependent and modified from $T_{\rm JKSD}$ [23]. As these examples show, identifying cosmological black-hole solutions with a physically reasonable matter field and clarifying their properties contributes not only to the fundamentals of black-hole physics but also to modern cosmology.
In the present paper, we will thoroughly investigate conformally Schwarzschild spacetimes with a conformal factor as a function only of the "time" coordinate in different coordinate systems of the Schwarzschild spacetime. In particular, we will focus on the spacetimes which are asymptotic to the flat FLRW universe filled by a perfect fluid obeying a linear equation of state $p = w\rho$ with $w > -1/3$. The organization of the present paper is as follows. In Sec. 2, we summarize mathematical results used to study the conformally Schwarzschild spacetimes in the subsequent sections. In Secs. 3 and 4, we clarify the global structures and the corresponding matter fields of the Sultana-Dyer class and the Culetu class of cosmological black holes, respectively. A summary and discussion are given in the final section. In Appendix A, we present several non-conformally Schwarzschild spacetimes as other candidates for a more general cosmological black-hole spacetime.
Our conventions for curvature tensors are $[\nabla_\rho, \nabla_\sigma]V^\mu = R^\mu{}_{\nu\rho\sigma}V^\nu$ and $R_{\mu\nu} = R^\rho{}_{\mu\rho\nu}$. The signature of the Minkowski spacetime is $(-, +, +, +)$, and Greek indices run over all spacetime indices. Throughout this paper, a dot on the scale factor a denotes differentiation with respect to its argument. We adopt units such that $c = G = \hbar = k_{\rm B} = 1$.
Preliminaries
In this section, we summarize mathematical results to study spherically symmetric and conformally Schwarzschild spacetimes in the subsequent sections.
Spherically symmetric spacetime
The most general four-dimensional spherically symmetric spacetime $(M^4, g_{\mu\nu})$ is given by $$ds^2 = g_{AB}(y)\,dy^Ady^B + R(y)^2d\Omega^2, \qquad(2.1)$$ where $y^A$ (A = 0, 1) are coordinates in a two-dimensional Lorentzian spacetime $(M^2, g_{AB})$ and $d\Omega^2 := d\theta^2 + \sin^2\theta\,d\phi^2$. The areal radius $R(y)(\ge 0)$ is a scalar on $(M^2, g_{AB})$. The Einstein tensor of the spacetime with the metric (2.1) decomposes into a two-tensor $G_{AB}$ and a scalar G on $(M^2, g_{AB})$, expressed in terms of the Ricci scalar $^{(2)}R$ of $(M^2, g_{AB})$ and the quantities $(DR)^2 := g^{AB}(D_AR)(D_BR)$ and $D^2R := g^{AB}D_AD_BR$, where $D_A$ is the covariant derivative on $(M^2, g_{AB})$. Hence, in general relativity, the corresponding energy-momentum tensor $T_{\mu\nu}(= G_{\mu\nu}/8\pi)$ is given with $T_{AB} = G_{AB}/8\pi$ and $p_t = G/8\pi$.
For a spherically symmetric spacetime (2.1), the Misner-Sharp quasi-local mass $m_{\rm MS}$ [24] is defined by $$m_{\rm MS} := \frac{R}{2}\left[1 - (DR)^2\right], \qquad(2.6)$$ which is equivalent to $(DR)^2 = 1 - 2m_{\rm MS}/R$. Properties of $m_{\rm MS}$ have been fully investigated in [25]. The Misner-Sharp mass converges to the ADM mass at spacelike infinity in an asymptotically flat spacetime.
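As a quick sanity check of this definition (our own illustrative script, not part of the original paper), the following SymPy sketch evaluates $m_{\rm MS}$ for the Schwarzschild two-metric in the standard coordinates and recovers the constant mass M:

```python
import sympy as sp

t, r, M = sp.symbols('t r M', positive=True)
f = 1 - 2*M/r

# Two-dimensional part g_AB of the Schwarzschild metric in coordinates (t, r)
g2 = sp.Matrix([[-f, 0], [0, 1/f]])
ginv = g2.inv()

R = r                                    # areal radius
DR = sp.Matrix([sp.diff(R, t), sp.diff(R, r)])

DR2 = sp.simplify((DR.T*ginv*DR)[0, 0])  # (DR)^2 = g^{AB} D_A R D_B R
m_MS = sp.simplify(R/2*(1 - DR2))
print(DR2, m_MS)                         # -> 1 - 2*M/r and M
```

The same routine can be pointed at any two-metric $g_{AB}$ and areal radius R(y), which is convenient for the conformal metrics discussed below.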
Trapping horizon
In the present paper, we adopt spherical slicings to identify trapped round spheres and trapping horizons defined by Hayward [26]. Let $k^\mu(\partial/\partial x^\mu) = k^A(\partial/\partial y^A)$ and $l^\mu(\partial/\partial x^\mu) = l^A(\partial/\partial y^A)$ be two independent future-directed radial null vectors in the spacetime (2.1) satisfying $k^\mu k_\mu = l^\mu l_\mu = 0$ and $k^\mu l_\mu = -1$. The expansions along those null vectors are given by $$\theta_+ := \frac{2}{R}\,k^AD_AR,\qquad \theta_- := \frac{2}{R}\,l^AD_AR, \qquad(2.8),\ (2.9)$$ where $A := 4\pi R^2$ is the surface area of a two-round sphere with the areal radius R given by $y^A$ = constant (A = 0, 1), and we have defined $\mathcal{L}_+$ and $\mathcal{L}_-$ as the Lie derivatives along $k^\mu$ and $l^\mu$, respectively. In terms of $\theta_\pm$, a trapped (untrapped) round sphere is defined by a two-round sphere with $\theta_+\theta_- > (<)\,0$, and a trapped (untrapped) region is the union of all trapped (untrapped) round spheres. A marginal round sphere is a two-round sphere with $\theta_+\theta_- = 0$. Since the metric $g_{AB}$ on $(M^2, g_{AB})$ can be decomposed in terms of $k_A$ and $l_A$ as $g_{AB} = -k_Al_B - l_Ak_B$, we have $(DR)^2 = -\frac{R^2}{2}\theta_+\theta_-$. Thus, an untrapped (trapped) region is given by $(DR)^2 > (<)\,0$, or equivalently $R > (<)\,2m_{\rm MS}$ by Eq. (2.6). A marginal round sphere is given by $(DR)^2 = 0$, or equivalently $R = 2m_{\rm MS}$.
It is noted that an (un)trapped round sphere and a marginal round sphere are defined with respect to a given SO(3) group defining the spherical symmetry, as emphasized in Ref. [27]. Such a SO(3) group is unique in generic spherically symmetric spacetimes; however, there are spacetimes where the SO(3) group is not unique, such as the flat, (anti-)de Sitter, and Friedmann-Lemaître-Robertson-Walker spacetimes. In such a spacetime with higher symmetry, the locations of trapping horizons and (un)trapped regions depend on the choice of the SO(3) group.
Using the degrees of freedom to interchange such that $\theta_+ \leftrightarrow \theta_-$, one may set $\theta_+ = 0$ on a marginal round sphere without loss of generality. Then, a marginal round sphere is classified by the signs of $\theta_-$ and $\mathcal{L}_-\theta_+$. Finally, a trapping horizon is the closure of a hypersurface foliated by marginal round spheres [26]. All the possible types of trapping horizon Σ, given by $\theta_+ = 0$, and their interpretations are summarized in Table 1. The inequality $\theta_-|_\Sigma < 0$ means that the ingoing null rays converge on the trapping horizon Σ, while the inequality $\mathcal{L}_-\theta_+|_\Sigma < 0$ means that the outgoing null rays are instantaneously parallel on Σ but diverging just outside Σ and converging just inside. Thus, a future outer trapping horizon corresponds to a black-hole horizon among others [26,28]. On the other hand, a past inner trapping horizon and a past outer trapping horizon correspond to a cosmological horizon and a white-hole horizon, respectively. The inner Cauchy horizon in the non-extreme Reissner-Nordström black hole is a future inner trapping horizon. As an example, Fig. 1 exhibits four different types of trapping horizons in the Schwarzschild-de Sitter spacetime with a metric given by $$ds^2 = -f(r)dt^2 + f(r)^{-1}dr^2 + r^2d\Omega^2,\qquad f(r) = 1 - \frac{2M}{r} - \frac{\Lambda}{3}r^2,$$ where M and Λ(> 0) satisfy $0 < M < 1/(3\sqrt{\Lambda})$ so as to generate two Killing horizons at $r = r_1 (> 0)$ and $r_2 (> r_1)$, which are trapping horizons as well. Here it should be noted that all the results in Sec. 9 in the textbook [29] cannot be applied directly to the Schwarzschild-de Sitter spacetime and other cosmological black-hole spacetimes. For example, the fact that outer trapped round spheres in the regions VII and VIII in Fig. 1 can be seen from the future null infinity does not contradict Proposition 9.2.8 in [29]. This is because these black-hole spacetimes do not satisfy the condition (4) on page 222, so that they are not weakly asymptotically simple and empty, which is an assumption in Sec. 9 in [29].
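As a numerical illustration (our own sketch; the values of M and Λ are arbitrary apart from the requirement $9\Lambda M^2 < 1$), the two Killing horizon radii $r_1$ and $r_2$ can be obtained by solving f(r) = 0:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
M, Lam = 1, sp.Rational(1, 100)   # arbitrary; 9*Lam*M**2 = 0.09 < 1

f = 1 - 2*M/r - Lam*r**2/3
r1, r2 = sorted(sp.nsolve(f, r, guess) for guess in (2.2, 15.0))
print(r1, r2)                     # black-hole and cosmological horizon radii
```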
Here we prove that the location of a trapping horizon and its type are invariant concepts with spherical slicings. The following proposition is a generalization of the claim in [30] and essentially the same as Result 6.1 in [27], whereas we provide an explicit proof of the independence from the choice of the pair of future-directed radial null vector fields.
Proposition 1
With spherical slicings, the location of a trapping horizon Σ and its type with respect to a given SO(3) group of isometry in a spherically symmetric spacetime (2.1) are invariant under not only coordinate transformations on $(M^2, g_{AB})$ but also the freedom in choosing a pair of future-directed radial null vectors $k = k^A\partial/\partial y^A$ and $l = l^A\partial/\partial y^A$, which are regular on Σ.
Proof: Since the null expansions $\theta_\pm$ and their derivatives $\mathcal{L}_\pm\theta_\mp$ are scalars on $(M^2, g_{AB})$ by the definitions (2.8) and (2.9), the values of $\theta_\pm$ and $\mathcal{L}_\pm\theta_\mp$ at each point $p \in (M^4, g_{\mu\nu})$ are unchanged under coordinate transformations on $(M^2, g_{AB})$ if we fix a pair $\{k, l\}$. Therefore, the location of a trapping horizon and its type are invariant under coordinate transformations on $(M^2, g_{AB})$ for a given $\{k, l\}$. So, what is to be proved is independence from the choice of the pair of future-directed radial null vectors.
Let $k = k^A(\partial/\partial y^A)$ and $l = l^A(\partial/\partial y^A)$ be the original choice, where $k^A$ and $l^A$ are assumed to be finite on trapping horizons. Using the degrees of freedom to interchange such that $k \leftrightarrow l$, one may set $\theta_k = 0$ on a trapping horizon without loss of generality, where $\theta_k := 2R^{-1}k^AD_AR$. Now let $\tilde k = \tilde k^A(\partial/\partial y^A)$ and $\tilde l = \tilde l^A(\partial/\partial y^A)$ be a new choice of future-directed radial null vectors in the same coordinates on $(M^2, g_{AB})$. Since there are only two independent future-directed radial null vectors at each spacetime point up to a multiplication factor and we impose $\tilde k_\mu\tilde l^\mu = -1$, we may write $\tilde k^A = \beta k^A$ and $\tilde l^A = \beta^{-1}l^A$, where β is a scalar on $(M^2, g_{AB})$ assumed to be positive for $\{\tilde k, \tilde l\}$ to be future-directed. Then, we obtain $$\theta_{\tilde k} = \beta\theta_k,\qquad \theta_{\tilde l} = \beta^{-1}\theta_l, \qquad(2.11)$$ $$\mathcal{L}_{\tilde l}\theta_{\tilde k} = \mathcal{L}_l\theta_k + \beta^{-1}\theta_k\,\mathcal{L}_l\beta, \qquad(2.12)$$ and assume that the positive function β is $C^1$ on a trapping horizon Σ defined by $\theta_k = 0$. Equation (2.11) shows that $\theta_{\tilde k}|_\Sigma = 0$ is equivalent to $\theta_k|_\Sigma = 0$ and the signs of $\theta_{\tilde l}$ and $\theta_l$ are the same on Σ. In addition, Eq. (2.12) shows $\mathcal{L}_{\tilde l}\theta_{\tilde k}|_\Sigma = \mathcal{L}_l\theta_k|_\Sigma$. Hence, the location of a trapping horizon and its type are independent from the choice of the pair of future-directed radial null vectors.
It is noted that we need to fix the SO(3) group defining the spherical symmetry in Proposition 1, as emphasized in Ref. [27]. We also note that the regularity assumption on k and l on the trapping horizon Σ is indispensable to prove Proposition 1. As a concrete example, let us consider the Schwarzschild spacetime in the ingoing Eddington-Finkelstein coordinates with $f(r) = 1 - 2M/r$, of which trapping horizon Σ determined by f(r) = 0 coincides with the event horizon. Consider a pair of future-directed radial null vectors of which components are finite on Σ and satisfy $k^\mu k_\mu = 0 = l^\mu l_\mu$ and $k^\mu l_\mu = -1$. The expansions $\theta_k = \sqrt{2}f/r$ and $\theta_l = -\sqrt{2}/r$ show $\theta_l|_\Sigma < 0$, so that the trapping horizon is of the future type. In contrast, for a different pair $\tilde k = f^{-1/2}k$ and $\tilde l = f^{1/2}l$, the expansions $\theta_{\tilde k} = \sqrt{2}f^{1/2}/r$ and $\theta_{\tilde l} = -\sqrt{2}f^{1/2}/r$ provide a wrong answer $\theta_{\tilde l}|_\Sigma = 0$. This is a consequence of $\tilde k$ being singular on Σ.
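The quoted expansions can be reproduced directly. In the ingoing Eddington-Finkelstein coordinates (v, r) the two-metric is $ds_2^2 = -f\,dv^2 + 2dv\,dr$, and one admissible regular pair with $k^\mu l_\mu = -1$ is $k = \sqrt{2}(\partial_v + \frac{f}{2}\partial_r)$ and $l = -\frac{1}{\sqrt{2}}\partial_r$; this particular normalization is our assumption, chosen to match the quoted factors of $\sqrt{2}$. The SymPy sketch below verifies the null conditions and the expansions:

```python
import sympy as sp

v, r, M = sp.symbols('v r M', positive=True)
f = 1 - 2*M/r

# Two-metric in ingoing Eddington-Finkelstein coordinates (v, r)
g2 = sp.Matrix([[-f, 1], [1, 0]])

k = sp.Matrix([sp.sqrt(2), sp.sqrt(2)*f/2])   # outgoing null vector k^A
l = sp.Matrix([0, -1/sp.sqrt(2)])             # ingoing null vector l^A

# Null conditions and cross normalization: prints 0, 0, -1
print(sp.simplify((k.T*g2*k)[0, 0]), sp.simplify((l.T*g2*l)[0, 0]),
      sp.simplify((k.T*g2*l)[0, 0]))

R = r                                          # areal radius
theta_k = sp.simplify(2/R*(k[0]*sp.diff(R, v) + k[1]*sp.diff(R, r)))
theta_l = sp.simplify(2/R*(l[0]*sp.diff(R, v) + l[1]*sp.diff(R, r)))
print(theta_k, theta_l)   # -> sqrt(2)*(1 - 2*M/r)/r and -sqrt(2)/r
```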
Lastly, the contrapositions of the following propositions [26] are useful to identify the region where the null energy condition or the dominant energy condition is violated. (See also [31,32].)
Kodama vector and Misner-Sharp mass
The general spherically symmetric spacetime (2.1) admits the Kodama vector $K^\mu(\partial/\partial x^\mu) = K^A(\partial/\partial y^A)$ [33]. Here a vector $K^A$ on $(M^2, g_{AB})$ is defined by $$K^A := -\epsilon^{AB}D_BR, \qquad(2.16)$$ where $\epsilon_{AB}$ is a volume two-form on $(M^2, g_{AB})$ satisfying $\epsilon_{AB}\epsilon^{AB} = -2$, and hence $\epsilon^{01} = -1/\epsilon_{01}$. $K^A$ is orthogonal to $D_AR$, and the minus sign in the definition of $K^A$ in Eq. (2.16) is to make $K^A$ future-pointing. The expression $K^\mu K_\mu = K^AK_A = -(DR)^2$ shows that $K^\mu$ is timelike and spacelike in untrapped regions and trapped regions, respectively, and it is null on trapping horizons. Since the Kodama vector $K^\mu$ reduces to a hypersurface-orthogonal timelike Killing vector if the spacetime is static, it generates a preferred time direction in untrapped regions in the general spherically symmetric spacetime.
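As a consistency check (assuming the orientation convention $\epsilon_{tr} = +\sqrt{-\det g_{AB}} = 1$ for the Schwarzschild two-metric ${\rm diag}(-f, f^{-1})$ with R = r; the sign choice is ours), the definition (2.16) reproduces the static Killing vector:
$$K^t = -\epsilon^{tr}D_rR = -\left(g^{tt}g^{rr}\epsilon_{tr}\right)\partial_rr = -\left(-\frac{1}{f}\right)(f)(1) = 1,\qquad K^r = -\epsilon^{rt}D_tR = 0,$$
$$\Longrightarrow\quad K^\mu\frac{\partial}{\partial x^\mu} = \frac{\partial}{\partial t},\qquad K_\mu K^\mu = -f = -(DR)^2.$$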
In fact, the Misner-Sharp mass (2.6) is a locally conserved charge along an energy current vector $J^\mu := -T^\mu{}_\nu K^\nu$ associated with a Kodama observer (whose orbit is timelike only in untrapped regions). Here we have $J^\mu(\partial/\partial x^\mu) = J^A(\partial/\partial y^A)$. One can show that $J^\mu$ is divergence-free ($\nabla_\mu J^\mu = 0$) [32], and then the integral of $-J^\mu$ over a spacelike hypersurface Π with boundary gives an associated charge $Q_J := -\int_\Pi J^\mu d\Pi_\mu$, where $d\Pi_\mu$ is a directed surface element on Π. One may use another expression $d\Pi_\mu = u_\mu d\Pi$ with a future-directed unit normal $u^\mu$ to Π and a surface element dΠ on Π. In fact, the charge $Q_J$ is identical to $m_{\rm MS}$ up to an integration constant. Suppose that Π is defined by $y^0 = t_0 =$ constant; then we have $u_\mu dx^\mu = -(1/\sqrt{-g^{00}})dy^0$. Then, using $g^{00} = g_{11}/\det(g_{AB})$, we obtain $Q_J$ as the difference of $m_{\rm MS}$ evaluated at the boundaries $y^1 = b_1$ and $y^1 = b_2$ on Π. It is reasonable to set $y^1 = b_1$ to correspond to a regular center if it exists.
Here we note that the Kodama vector $K^\mu$ itself is divergence-free ($\nabla_\mu K^\mu = 0$) [32], so that it is also a locally conserved current. Its associated charge $Q_K := -\int_\Pi K^\mu d\Pi_\mu$ is the volume $4\pi R^3/3$ inside a two-round sphere with the areal radius R.
Compatible matter field in general relativity
In the present paper, we will consider a matter field compatible with a given spacetime $(M^4, g_{\mu\nu})$ in general relativity, whose energy-momentum tensor is determined through the Einstein equations as $T_{\mu\nu} = G_{\mu\nu}/(8\pi)$. According to the Hawking-Ellis classification, an energy-momentum tensor $T_{\mu\nu}$ is classified into four types depending on the properties of its eigenvectors, as shown in Table 2 [29,34,35]. In general relativity, any spherically symmetric spacetime is compatible with an energy-momentum tensor of type I, II, or IV.
Here $\eta_{(a)(b)}$ is the metric in the local Lorentz frame, and the spacetime metric $g_{\mu\nu}$ is given by $g_{\mu\nu} = \eta_{(a)(b)}E^{(a)}{}_\mu E^{(b)}{}_\nu$. $\eta_{(a)(b)}$ and its inverse $\eta^{(a)(b)}$ are respectively used to lower and raise the indices with brackets.
In the most general spherically symmetric spacetime (2.1), one may introduce such an orthonormal frame, where the basis one-forms $\{E^{(\alpha)}{}_A\}$ live on $(M^2, g_{AB})$ and $\eta_{(\alpha)(\beta)}$ is the metric in the local Lorentz frame on $(M^2, g_{AB})$. Then, the non-zero frame components $G_{(\alpha)(\beta)}$ of the Einstein tensor can be computed. In a region with $G_{(0)(1)} = 0$, the Hawking-Ellis type of the corresponding energy-momentum tensor (2.5) is type I. In a region with $G_{(0)(1)} \neq 0$, we can use the following lemma [36].
Lemma 1 The Hawking-Ellis type of the energy-momentum tensor (2.5) is type I if
The standard energy conditions consist of the null energy condition (NEC), weak energy condition (WEC), dominant energy condition (DEC), and strong energy condition (SEC). Using a local Lorentz transformation, one can write $T^{(a)(b)}$ in a canonical form [29,34,35]. In a spherically symmetric spacetime, the canonical form of type I is $$T^{(a)(b)} = {\rm diag}(\rho, p_r, p_t, p_t). \qquad(2.26)$$ Here ρ, $p_r$, and $p_t$ are interpreted as the energy density, radial pressure, and tangential pressure, respectively, and equivalent expressions of the standard energy conditions are
NEC: $\rho + p_r \ge 0$ and $\rho + p_t \ge 0$; (2.27)
WEC: $\rho \ge 0$ in addition to NEC; (2.28)
DEC: $\rho - p_r \ge 0$ and $\rho - p_t \ge 0$ in addition to WEC; (2.29)
SEC: $\rho + p_r + 2p_t \ge 0$ in addition to NEC. (2.30)
The canonical form of type II carries an additional null-flux term with $\nu \neq 0$, and equivalent expressions of the standard energy conditions are
NEC: $\nu \ge 0$ and $\rho + p_t \ge 0$; (2.32)
WEC: $\rho \ge 0$ in addition to NEC.
The type-IV energy-momentum tensor violates all the standard energy conditions.
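The pointwise content of the type-I conditions (2.27)–(2.30) is easy to mechanize. The helper below uses our own naming and is meant only as a sketch for tabulating the regions discussed in later sections:

```python
def energy_conditions(rho, p_r, p_t):
    """Standard energy conditions for a Hawking-Ellis type-I tensor
    T^(a)(b) = diag(rho, p_r, p_t, p_t); cf. Eqs. (2.27)-(2.30)."""
    nec = rho + p_r >= 0 and rho + p_t >= 0
    wec = nec and rho >= 0
    dec = wec and rho - p_r >= 0 and rho - p_t >= 0
    sec = nec and rho + p_r + 2*p_t >= 0
    return {"NEC": nec, "WEC": wec, "DEC": dec, "SEC": sec}

# Example: an isotropic radiation-like fluid (w = 1/3) satisfies all four
print(energy_conditions(rho=3.0, p_r=1.0, p_t=1.0))
```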
For example, let us consider the flat FLRW spacetime with a conformal time t, in which the line element is written as $$ds^2 = a(t)^2(-dt^2 + dr^2 + r^2d\Omega^2). \qquad(2.36)$$ Adopting orthonormal basis one-forms adapted to these coordinates, one finds that the corresponding energy-momentum tensor $T_{\mu\nu}$ is of the Hawking-Ellis type I (2.26), and this matter field can be interpreted as a perfect fluid in the comoving coordinates. In the present paper, we assume that the scale factor obeys a power law $a(t) = a_0|t|^\alpha$, where $a_0$ and α are constants. In general relativity, such a conformal factor with $\alpha = 2/(3w + 1)$ is a solution for a perfect fluid obeying an equation of state $p = w\rho$. The standard energy conditions for this fluid are summarized in Table 3.
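For completeness, the exponent $\alpha = 2/(3w+1)$ follows from standard FLRW dynamics in conformal time (a textbook derivation, not specific to this paper): with $\rho \propto a^{-3(1+w)}$ and the Friedmann equation $3(\dot a/a^2)^2 = 8\pi\rho$, one finds $\dot a \propto a^{(1-3w)/2}$, and hence
$$a \propto |t|^{2/(1+3w)} \qquad (w > -1/3).$$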
We will also use the following proposition [36] in the subsequent sections.
Proposition 5 For an energy-momentum tensor (2.5) in an orthonormal frame, all the standard energy conditions are violated if
expressions of the standard energy conditions are given by
The Schwarzschild spacetime revisited
The Schwarzschild vacuum solution is written in the most well-known diagonal coordinates $\{t, r, \theta, \phi\}$ as $$ds^2 = -\left(1 - \frac{2M}{r}\right)dt^2 + \left(1 - \frac{2M}{r}\right)^{-1}dr^2 + r^2d\Omega^2, \qquad(2.47)$$ where M is a constant. We refer to the coordinates (2.47) as the Schwarzschild coordinates.
Hereafter, we assume that M is positive corresponding to the black-hole case and then r = 0 is a spacelike curvature singularity. In the coordinate system (2.47), the event horizon r = 2M is a coordinate singularity. As a result, the domain of the coordinate r is given by 0 < r < 2M and 2M < r < ∞. Therefore, although the Schwarzschild coordinates (2.47) cover the region I, II, III, or IV in the maximally extended Schwarzschild spacetime shown in Fig. 2, they do not cover the event horizon r = 2M.
The Eddington-Finkelstein and the Kruskal-Szekeres coordinates
In the spacetime with the metric (2.47), we introduce an outgoing null coordinate u and an ingoing null coordinate v such that $$u := t - r^*,\qquad v := t + r^*,$$ where $r^*$ is the tortoise coordinate defined by $$r^* := r + 2M\ln\left|\frac{r}{2M} - 1\right|.$$ The domains of u and v are $-\infty < u < \infty$ and $-\infty < v < \infty$, respectively. In the outgoing Eddington-Finkelstein coordinates $\{u, r\}$ and the ingoing Eddington-Finkelstein coordinates $\{v, r\}$ on $(M^2, g_{AB})$, the line element in the Schwarzschild spacetime is written as $$ds_2^2 = -\left(1 - \frac{2M}{r}\right)du^2 - 2du\,dr \qquad(2.51)$$ and $$ds_2^2 = -\left(1 - \frac{2M}{r}\right)dv^2 + 2dv\,dr, \qquad(2.52)$$ respectively. Since the metric and its inverse are both regular at the event horizon r = 2M, the domain of r is $0 < r < \infty$ in the Eddington-Finkelstein coordinates (2.51) and (2.52). Nevertheless, they cover a half of the maximally extended Schwarzschild spacetime with M > 0. In fact, the coordinates (2.51) cover the region I+IV or II+III in Fig. 2, while the coordinates (2.52) cover the region I+II or III+IV.
In order to cover the maximally extended Schwarzschild spacetime in a single coordinate system, we introduce the Kruskal-Szekeres coordinates $\{U, V\}$ on $(M^2, g_{AB})$ such that $$U := -4Me^{-u/(4M)},\qquad V := 4Me^{v/(4M)}, \qquad(2.53)$$ in which the line element in the Schwarzschild spacetime is written as $$ds_2^2 = -\frac{2M}{r}e^{-r/(2M)}dU\,dV, \qquad(2.54)$$ where the function r(U, V) is implicitly given from $$UV = -16M^2\left(\frac{r}{2M} - 1\right)e^{r/(2M)}. \qquad(2.55)$$ Since U and V defined by Eq. (2.53) satisfy U < 0 and V > 0, the relation (2.55) shows that the coordinates $\{U, V\}$ cover the region r > 2M. Nevertheless, since the metric (2.54) is analytic at UV = 0 corresponding to r = 2M, the spacetime is analytically extended into the region of U ≥ 0 or V ≤ 0. Accordingly, the Kruskal-Szekeres coordinates (2.54) defined in the domains $-\infty < U < \infty$ and $-\infty < V < \infty$ cover the entire maximally extended Schwarzschild spacetime.
In order to draw the Penrose diagram shown in Fig. 2, one needs to introduce new null coordinates $\bar U$ and $\bar V$ such that $$\bar U := \arctan\left(\frac{U}{4M}\right),\qquad \bar V := \arctan\left(\frac{V}{4M}\right),$$ of which domains are $-\pi/2 < \bar U < \pi/2$ and $-\pi/2 < \bar V < \pi/2$. Now the Schwarzschild spacetime is embedded in finite domains of $\bar U$ and $\bar V$, and the line element on $(M^2, g_{AB})$ is written as $ds_2^2 = (\cos\bar U\cos\bar V)^{-2}d\bar s_2^2$, where $$d\bar s_2^2 = -\frac{32M^3}{r}e^{-r/(2M)}d\bar U\,d\bar V. \qquad(2.57)$$ The spacetime with the metric (2.57) provides the Penrose diagram in Fig. 2, which is a conformal completion of the maximally extended Schwarzschild spacetime by attaching the boundaries at $\bar U = \pm\pi/2$ and $\bar V = \pm\pi/2$.
Isotropic coordinates
With a new radial coordinate σ defined by $$r = \sigma\left(1 + \frac{M}{2\sigma}\right)^2, \qquad(2.58)$$ the Schwarzschild metric (2.47) is written in the isotropic coordinates as $$ds^2 = -\left(\frac{1 - M/(2\sigma)}{1 + M/(2\sigma)}\right)^2dt^2 + \left(1 + \frac{M}{2\sigma}\right)^4(d\sigma^2 + \sigma^2d\Omega^2), \qquad(2.59)$$ where σ = M/2 is a coordinate singularity. Since the areal radius r = r(σ) given by Eq. (2.58) takes a minimum value r = 2M at the "throat" σ = M/2, the coordinates (2.59) cover only regions with r > 2M, which are the regions I and III in Fig. 2, and σ = 0 and σ = ∞ correspond to two distinct spacelike infinities.
Painlevé-Gullstrand coordinates
With a new coordinate τ defined such that $$d\tau = dt + \frac{\sqrt{2M/r}}{1 - 2M/r}\,dr,$$ the Schwarzschild metric (2.47) is written in the Painlevé-Gullstrand coordinates as $$ds^2 = -\left(1 - \frac{2M}{r}\right)d\tau^2 + 2\sqrt{\frac{2M}{r}}\,d\tau\,dr + dr^2 + r^2d\Omega^2. \qquad(2.61)$$ Non-zero components of the inverse metric on $(M^2, g_{AB})$ are $g^{\tau\tau} = -1$, $g^{\tau r} = \sqrt{2M/r}$, and $g^{rr} = 1 - 2M/r$. It is noted that τ = constant is a spacelike hypersurface but $\partial/\partial\tau$ is timelike (spacelike) in a region with r > (<) 2M. Since the metric and its inverse are regular at r = 2M, the domains of τ and r are $-\infty < \tau < \infty$ and $0 < r < \infty$, respectively. As a result, the Painlevé-Gullstrand coordinates (2.61) cover the regions I+II or III+IV in Figs. 2 and 3(a).
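The transformation can be verified mechanically. In the sketch below (our own script), we pull back the Schwarzschild metric with $\partial t/\partial r|_\tau = -\sqrt{2M/r}/(1 - 2M/r)$ and recover the Painlevé-Gullstrand components:

```python
import sympy as sp

r, M = sp.symbols('r M', positive=True)
f = 1 - 2*M/r
dt_dr = -sp.sqrt(2*M/r)/f   # partial t / partial r at fixed tau

# Pull back ds^2 = -f dt^2 + f^{-1} dr^2 with dt = dtau + (dt_dr) dr
g_tautau = -f
g_taur = sp.simplify(-f*dt_dr)        # coefficient of dtau dr (appears twice)
g_rr = sp.simplify(-f*dt_dr**2 + 1/f)
print(g_tautau, g_taur, g_rr)         # -> -(1 - 2*M/r), sqrt(2*M/r), 1
```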
Lemaître coordinates
By a coordinate transformation from the Painlevé-Gullstrand coordinates (2.61), one obtains the Schwarzschild metric in the Lemaître coordinates: $$ds^2 = -d\tau^2 + \frac{2M}{r(\tau, \chi)}d\chi^2 + r(\tau, \chi)^2d\Omega^2, \qquad(2.64)$$ where $$r(\tau, \chi) = (2M)^{1/3}\left[\frac{3}{2}(\chi - \tau)\right]^{2/3}. \qquad(2.63)$$ Here τ is a timelike coordinate and χ is a spacelike coordinate everywhere, and their domains are $-\infty < \tau < \infty$ and χ > τ. The hypersurface-orthogonal Killing vector $\xi^\mu$ generating staticity in an untrapped region is given by $\xi^\mu(\partial/\partial x^\mu) = \partial/\partial\tau + \partial/\partial\chi$, of which squared norm is $\xi_\mu\xi^\mu = -(1 - 2M/r)$. The areal radius r = r(τ, χ) given by Eq. (2.63) is a monotonically increasing function of χ and a monotonically decreasing function of τ. A curvature singularity r = 0 and the event horizon r = 2M correspond to τ = χ and χ − τ = 4M/3, respectively. Since the metric and its inverse are regular at r = 2M, the Lemaître coordinates (2.64) cover the region I+II or III+IV in Figs. 2 and 3(b).
Kerr-Schild coordinates
With a new coordinate η defined such that $$d\eta = dt + \frac{2M/r}{1 - 2M/r}\,dr,$$ the Schwarzschild metric (2.47) is written in the Kerr-Schild coordinates as $$ds^2 = -\left(1 - \frac{2M}{r}\right)d\eta^2 + \frac{4M}{r}d\eta\,dr + \left(1 + \frac{2M}{r}\right)dr^2 + r^2d\Omega^2. \qquad(2.68)$$ Non-zero components of the inverse metric on $(M^2, g_{AB})$ are $g^{\eta\eta} = -(1 + 2M/r)$, $g^{\eta r} = 2M/r$, and $g^{rr} = 1 - 2M/r$, and hence η = constant is a spacelike hypersurface. Since the metric and its inverse are regular at r = 2M, the domains of η and r are $-\infty < \eta < \infty$ and $0 < r < \infty$. As a result, the Kerr-Schild coordinates (2.68) cover the region I+II or III+IV in Figs. 2 and 3(c).
Conformally Schwarzschild spacetime
In subsequent sections, we will study a variety of spacetimes $(M^4, g_{\mu\nu})$ which are conformally related to the Schwarzschild spacetime $(M^4, \bar g_{\mu\nu})$ as $g_{\mu\nu} = \Omega^2\bar g_{\mu\nu}$. Although a conformal transformation does not change the light-cone structure, it does change the nature of the coordinate boundaries in the Penrose diagram. For example, in a spacetime which is conformally related to the Schwarzschild spacetime in the standard Schwarzschild coordinates (2.47), r = 2M in Fig. 2 can be a curvature singularity or null infinity depending on the form of Ω. Similarly, a future null infinity in Fig. 2 can be an extendable boundary in the coordinate system on $(M^4, g_{\mu\nu})$. Since the conformal factor $\Omega^2$ may introduce a new curvature singularity or infinity in $(M^4, g_{\mu\nu})$, its global structure may be quite different from that of the Schwarzschild spacetime $(M^4, \bar g_{\mu\nu})$.
In addition, the conformal factor $\Omega^2$ generally introduces a different matter field in $(M^4, g_{\mu\nu})$. With the vanishing Einstein tensor $\bar G_{\mu\nu} \equiv 0$ of $(M^4, \bar g_{\mu\nu})$, the Einstein tensor $G_{\mu\nu}$ of $(M^4, g_{\mu\nu})$ is determined entirely by Ω and its covariant derivatives $\nabla_\mu$ on $(M^4, g_{\mu\nu})$, given in Eq. (2.70) [39]. In general relativity, the right-hand side of Eq. (2.70) is identical to the energy-momentum tensor $8\pi T_{\mu\nu}$ in the spacetime $(M^4, g_{\mu\nu})$.
Furthermore, the conformal factor may change the spacetime symmetries. If $(M^4, \bar g_{\mu\nu})$ admits a Killing vector $\xi^\mu$ satisfying a Killing equation $\mathcal{L}_\xi\bar g_{\mu\nu} = 0$, one obtains $$\mathcal{L}_\xi g_{\mu\nu} = \mathcal{L}_\xi(\Omega^2)\bar g_{\mu\nu} = 2(\xi^\rho\nabla_\rho\ln\Omega)g_{\mu\nu}.$$ Hence, the conformally related spacetime $(M^4, g_{\mu\nu})$ admits a conformal Killing vector $\xi^\mu$ satisfying a conformal Killing equation $\mathcal{L}_\xi g_{\mu\nu} = 2\psi g_{\mu\nu}$, where ψ is given by $$\psi = \xi^\rho\nabla_\rho\ln\Omega. \qquad(2.72)$$ Then, a conformal Killing horizon Σ is defined in $(M^4, g_{\mu\nu})$ in a parallel way to a Killing horizon, as a null hypersurface where the conformal Killing vector $\xi^\mu$ becomes null [22,40].
With a suitable conformal factor $\Omega^2$, a conformal Killing horizon Σ in the spacetime $(M^4, g_{\mu\nu})$ and a Killing horizon in the Schwarzschild black-hole spacetime $(M^4, \bar g_{\mu\nu})$ may coincide with the same event horizon. If a black-hole spacetime $(M^4, g_{\mu\nu})$ is static and asymptotically flat, the Hawking radiations of a conformally coupled scalar field from these two conformally related black holes are the same. For this reason, Jacobson and Kang argued that the surface gravity and temperature of a black hole, which characterize the Hawking radiation, should be conformally invariant [21]. Then, they defined the surface gravity κ and the temperature $T_{\rm JKSD} := \kappa/(2\pi)$ on Σ in terms of the conformal Killing vector $\xi^\mu$ (2.73). On the other hand, Sultana and Dyer defined the surface gravity $\kappa_{\rm SD}$ (2.74) via a different approach [22]. Although both κ and $\kappa_{\rm SD}$ reduce to the same surface gravity if $\xi^\mu$ is a Killing vector, $\kappa_{\rm SD}$ is not conformally invariant and satisfies $\kappa = \kappa_{\rm SD} - 2\psi$ [21].
Nevertheless, Sultana and Dyer [22] defined the temperature by $T_{\rm SD} := (\kappa_{\rm SD} - 2\psi|_\Sigma)/(2\pi)$, which is identical to $T_{\rm JKSD}$ and conformally invariant. Since $T_{\rm JKSD}$ is conformally invariant, with the same normalization of $\xi^\mu$ as for the Schwarzschild black hole (Ω ≡ 1), one obtains $T_{\rm JKSD} = 1/(4M)$ for a conformally Schwarzschild black hole $(M^4, g_{\mu\nu})$. However, it was shown that the effective temperature of the Sultana-Dyer cosmological black hole evaluated from the Hawking radiation is time-dependent and modified from $T_{\rm JKSD}$ [23].
As an alternative definition of a dynamical black hole, one could consider a future outer trapping horizon. In terms of the Kodama vector $K^\mu(\partial/\partial x^\mu) = K^A(\partial/\partial y^A)$ with Eq. (2.16), the surface gravity $\kappa_{\rm TH}$ and the temperature $T_{\rm TH} := \kappa_{\rm TH}/(2\pi)$ on an outer or degenerate trapping horizon Σ are defined [28]. In the static case, $K^\mu$ and $\kappa_{\rm TH}$ reduce to a hypersurface-orthogonal Killing vector $\xi^\mu$ and the surface gravity on a Killing horizon, respectively. It has been reported that the Hawking temperature of any future outer trapping horizon in a spherically symmetric spacetime derived by a Hamilton-Jacobi variant of the Parikh-Wilczek tunneling method coincides with $T_{\rm TH}$ [41].
Unsuccessful models of a cosmological black hole
In the present paper, we will investigate various conformally Schwarzschild spacetimes which are asymptotically flat FLRW universes filled by a perfect fluid obeying a linear equation of state $p = w\rho$ with $w > -1/3$, to seek for cosmological black-hole solutions. In particular, we will consider the Thakurta spacetime (2.76), the McClure-Dyer spacetime (2.77), the Sultana-Dyer class of spacetimes (3.1), and the Culetu spacetime (4.1). Before moving to the analyses of the latter two spacetimes in the subsequent sections, here we show, based on the previous works [15,16], that the Thakurta spacetime and the McClure-Dyer spacetime are identical and do not describe a cosmological black hole.
Actually, despite the fact that the Thakurta spacetime (or equivalently the McClure-Dyer spacetime) is distinct from the other two classes of spacetimes unless the conformal factor $a^2$ is constant, it has been misidentified as the Sultana-Dyer class of spacetimes by incorrect coordinate transformations disregarding the integrability conditions in some papers [42][43][44]. When a coordinate transformation $y = y(\bar y)$ on $(M^2, g_{AB})$ is defined in terms of differential displacements such that $dy = F_A(\bar y)d\bar y^A$, the functions $F_A(\bar y)$ must satisfy an integrability condition $\partial F_0/\partial\bar y^1 = \partial F_1/\partial\bar y^0$.
Thakurta class
The Thakurta spacetime [14] is a conformally Schwarzschild spacetime constructed with the Schwarzschild coordinates (2.47), given by $$ds^2 = a(t)^2\left[-\left(1 - \frac{2M}{r}\right)dt^2 + \left(1 - \frac{2M}{r}\right)^{-1}dr^2 + r^2d\Omega^2\right], \qquad(2.76)$$ which is asymptotic to the flat FLRW spacetime as r → ∞. The global structure of the Thakurta spacetime with M > 0 and $a(t) = a_0t^\alpha$ (α > 0) has been clarified in [15,16].
In this case, t = 0 and r = 2M are curvature singularities, and the maximally extended spacetime given in the domains of $0 < t < \infty$ and $2M < r < \infty$ admits neither an event horizon nor a future outer trapping horizon. As the Penrose diagram drawn in Fig. 4 shows, the Thakurta spacetime does not describe a cosmological black hole.
McClure-Dyer class
The McClure-Dyer spacetime [18] is a conformally Schwarzschild spacetime constructed with the isotropic coordinates (2.59), given by $$ds^2 = a(t)^2\left[-\left(\frac{1 - M/(2\sigma)}{1 + M/(2\sigma)}\right)^2dt^2 + \left(1 + \frac{M}{2\sigma}\right)^4(d\sigma^2 + \sigma^2d\Omega^2)\right]. \qquad(2.77)$$
Sultana-Dyer class
In this section, we investigate the following conformally Schwarzschild spacetime constructed with the Kerr-Schild coordinates: $$ds^2 = a(\eta)^2\left[-\left(1 - \frac{2M}{r}\right)d\eta^2 + \frac{4M}{r}d\eta\,dr + \left(1 + \frac{2M}{r}\right)dr^2 + r^2d\Omega^2\right], \qquad(3.1)$$ which is asymptotic to the flat FLRW spacetime as r → ∞. Non-zero components of the inverse metric are given by $g^{\eta\eta} = -\frac{1}{a^2}\left(1 + \frac{2M}{r}\right)$, $g^{\eta r} = \frac{2M}{a^2r}$, $g^{rr} = \frac{1}{a^2}\left(1 - \frac{2M}{r}\right)$, and $g^{\theta\theta} = g^{\phi\phi}\sin^2\theta = \frac{1}{a^2r^2}$. Sultana and Dyer studied the spacetime (3.1) in detail with $a(\eta) = \eta^2$ [17]. In this section, we assume M > 0 and study a more general case $a(\eta) = a_0|\eta|^\alpha$, where $a_0$ and α are positive constants. In the spacetime, η is a timelike coordinate everywhere, and we define the future direction by increasing η.
Global structure
Under our assumptions, the spacetime with the metric (3.1) is analytic except at η = 0 and r = 0. In fact, both η = 0 and r = 0 are curvature singularities, as the Ricci scalar R blows up there.
On the other hand, Eq. (3.6) is integrated to give $r = -(\eta - \eta_0)$, which shows $\eta \to -\infty$ as $r \to \infty$, as seen in Fig. 3(c). Equations (3.4) and (3.6) give a first integral (3.11) involving $a(\eta)^2$ and a constant of motion. With $a(\eta) = a_0|\eta|^\alpha$, Eq. (3.11) is integrated to give the affine parameter λ as a function of η. Since $|\lambda| \to \infty$ holds as $\eta \to -\infty$ (and hence $r \to \infty$) along a future-directed ingoing radial null geodesic, $(\eta, r) \to (-\infty, \infty)$ is a past null infinity. As a result, the Penrose diagram of the spacetime (3.1) with $a(\eta) = a_0|\eta|^\alpha$ (α > 0) is drawn as in Fig. 5. It is clear that the maximally extended spacetime given in the domains $0 < \eta < \infty$ and $0 < r < \infty$, which corresponds to the portion consisting of I and II, describes an asymptotically flat FLRW cosmological black hole with the event horizon at r = 2M. While r = 0 corresponds to a black-hole singularity, η = 0 corresponds to a big-bang singularity. Its time-reversal spacetime consisting of III and IV, with the future direction defined by decreasing η, describes a white hole in the flat FLRW collapsing universe. If the coordinate boundary $(\eta, r) \to (-\infty, 2M)$ is regular and then extendable, the maximally extended spacetime consisting of I', II', III', and IV' also describes a cosmological black hole. Even in that case, since this portion is not covered by a single coordinate system (3.1), the regular metric at the event horizon $(\eta, r) \to (-\infty, 2M)$ could be non-analytic and allow for lower differentiability. Hereafter, we will focus on the cosmological black hole corresponding to I+II.
Matter fields
Now let us identify the matter field to give the corresponding energy-momentum tensor $T_{\mu\nu}(= G_{\mu\nu}/8\pi)$ in the cosmological black-hole spacetime in the domains $0 < \eta < \infty$ and $0 < r < \infty$. Non-zero components of the Einstein tensor for the metric (3.1) are given by Eqs. (3.13)–(3.16). Adopting orthonormal basis one-forms adapted to (3.1), we obtain frame components expressed in terms of the quantities $\rho_{\rm F}$, $p_{\rm F}$, µ, P, and Ω. If the NEC is satisfied in the asymptotically flat FLRW region, $\rho_{\rm F} + p_{\rm F} \ge 0$ holds. Under this condition together with $M \ge 0$ and $\dot a \ge 0$, we can show the following proposition.
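Since the component expressions (3.13)–(3.16) are lengthy, it is convenient to regenerate them mechanically. The following self-contained SymPy sketch (our own utility, not from the paper; it is slow but straightforward) computes the Einstein tensor of the metric (3.1) with $a(\eta) = a_0\eta^\alpha$ from scratch:

```python
import sympy as sp

eta, r, th, ph, M, a0, al = sp.symbols('eta r theta phi M a_0 alpha', positive=True)
x = [eta, r, th, ph]
n = 4
a = a0*eta**al  # scale factor a(eta) = a_0 * eta^alpha

# Sultana-Dyer metric (3.1): conformally Schwarzschild in Kerr-Schild coordinates
g = a**2 * sp.Matrix([
    [-(1 - 2*M/r), 2*M/r,      0,    0],
    [2*M/r,        1 + 2*M/r,  0,    0],
    [0,            0,          r**2, 0],
    [0,            0,          0,    r**2*sp.sin(th)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^i_{jk} = (1/2) g^{id} (g_{dj,k} + g_{dk,j} - g_{jk,d})
Gam = [[[sp.simplify(sum(ginv[i, d]*(sp.diff(g[d, j], x[k]) + sp.diff(g[d, k], x[j])
         - sp.diff(g[j, k], x[d])) for d in range(n))/2)
         for k in range(n)] for j in range(n)] for i in range(n)]

# Ricci tensor R_{jk} = Gamma^i_{jk,i} - Gamma^i_{ji,k}
#                     + Gamma^i_{id} Gamma^d_{jk} - Gamma^i_{kd} Gamma^d_{ji}
def ricci(j, k):
    return sp.simplify(sum(
        sp.diff(Gam[i][j][k], x[i]) - sp.diff(Gam[i][j][i], x[k])
        + sum(Gam[i][i][d]*Gam[d][j][k] - Gam[i][k][d]*Gam[d][j][i]
              for d in range(n))
        for i in range(n)))

Ric = sp.Matrix(n, n, ricci)
Rscal = sp.simplify((ginv*Ric).trace())   # Ricci scalar
G = sp.simplify(Ric - Rscal*g/2)          # Einstein tensor G_{mu nu}
print(sp.simplify(G[0, 0]))               # compare with Eq. (3.13)
```

The same routine applies verbatim to the Thakurta, McClure-Dyer, and Culetu metrics by swapping in the corresponding matrix for g.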
Now let us check the DEC. For α ≥ 2, the DEC turns out to be equivalent to $0 < \eta \le \eta_W(r)$.
Lastly, to check the SEC for α > 0, we compute the relevant combination $T_S$. For 0 < α ≤ 2, the SEC is equivalent to $0 < \eta \le \eta_N(r)$ because the first term in the large bracket is non-negative. For α > 2, $T_S \ge 0$ reduces to an inequality whose right-hand side is negative, so the SEC is equivalent to $0 < \eta \le \eta_N(r)$ for α > 2 as well.
Table 4: Regions where the energy-momentum tensor $T_{\mu\nu}(= G_{\mu\nu}/8\pi)$ respects the energy conditions in the domain η > 0 in the Sultana-Dyer spacetime (3.1) with M > 0 and α > 0.
We have clarified the energy conditions for $T_{\mu\nu}$ in the Sultana-Dyer spacetime (3.1). The results are summarized in Table 4, which shows that all standard energy conditions are violated in the region with a finite r for sufficiently large η. In particular, Eq. (3.30) gives $\eta_N(2M) = 28(\alpha + 1)M/9$ on the event horizon r = 2M. Since at least the NEC should be respected on and outside the event horizon, the Sultana-Dyer class of spacetimes may be a proper model of a cosmological black hole in general relativity only in the early time $0 < \eta < 28(\alpha + 1)M/9$. Next we will consider decompositions of $T_{\mu\nu}$ into physically motivated matter fields.
Sultana-Dyer-type decomposition
In [17], Sultana and Dyer demonstrated in the case of α = 2 that the Einstein tensor (3.14)–(3.16) is compatible with a combination of a dust fluid and a null dust. This decomposition of $T_{\mu\nu}$ can be generalized to a combination of a perfect fluid $T^{\mu\nu}_A$ and a null dust $T^{\mu\nu}_B$ for arbitrary α > 0, where $k^\mu k_\mu = 0$ and $u^\mu u_\mu = -1$ hold and $b(\eta)$ is an auxiliary function of η. In the asymptotically FLRW region $r \to \infty$, $T^{\mu\nu}_A$ becomes homogeneous and $T^{\mu\nu}_B \to 0$ is realized. In the Sultana-Dyer case (α = 2), in particular, $T^{\mu\nu}_A$ becomes a dust fluid ($p_A = 0$).
However, the problem of this Sultana-Dyer-type decomposition is that $u^\mu$ and $k^\mu$ become complex in the region where a certain inequality is satisfied. Therefore, the decomposition (3.42) is justified to describe a cosmological black hole, corresponding to the region I+II in Fig. 5, only in the domain $\eta \le \eta_{\rm max}$.
A global decomposition
Actually, the Einstein tensor (3.14)–(3.16) is also compatible with an energy-momentum tensor that is a combination of a perfect fluid and a type-II null fluid; unlike the Sultana-Dyer-type decomposition, this interpretation is valid everywhere in the spacetime.
Properties as a cosmological black hole
Now we study properties of the cosmological black-hole spacetime with the metric (3.1) with $a(\eta) = a_0\eta^\alpha$ (α > 0), which corresponds to the region I+II in Fig. 5 (η > 0).
Trapping horizon
Here we identify the locations of trapping horizons and their types. Consider future-directed outgoing and ingoing radial null vectors $k^\mu$ and $l^\mu$, respectively, which satisfy $k^\mu k_\mu = l^\mu l_\mu = 0$ and $k^\mu l_\mu = -1$. With the areal radius R = ar, the expansions (2.8) and (2.9) are computed to give the locations of the trapping horizons, $\eta = \eta_\pm(r)$, associated with $k^\mu$ and $l^\mu$. Note that $\eta_+(r) > 0$ holds only in the domain 0 < r < 2M.
Temperature of a cosmological black hole
Since the conformal Killing vector $\xi^\mu = (1, 0, 0, 0)$ in the Sultana-Dyer spacetime satisfies $\xi_\mu\xi^\mu = -a^2(1 - 2M/r)$, the event horizon r = 2M is a conformal Killing horizon. Its temperature (2.73) is computed to give $T_{\rm JKSD} = 1/(4M)$, which is the same as the temperature associated with the hypersurface-orthogonal Killing vector in the Schwarzschild black-hole spacetime.
In this section, we have fully investigated the Sultana-Dyer class of spacetimes (3.1) and the corresponding matter field with $a(\eta) = a_0\eta^\alpha$ (α > 0). In fact, this class of spacetimes can be generalized further to be non-conformally Schwarzschild, as shown in Appendix A.1. The generalized solution, which is conformally related to the Husain solution [46], is also a candidate for a more general cosmological black-hole spacetime.
Culetu class
In [20], Culetu studied a conformally Schwarzschild spacetime with M > 0, constructed with the Painlevé-Gullstrand coordinates (2.61) such as $$ds^2 = a(\tau)^2\left[-\left(1 - \frac{2M}{r}\right)d\tau^2 + 2\sqrt{\frac{2M}{r}}\,d\tau\,dr + dr^2 + r^2d\Omega^2\right], \qquad(4.1)$$ which is asymptotic to the flat FLRW spacetime as r → ∞. Non-zero components of the inverse metric are given by $g^{\tau\tau} = -\frac{1}{a^2}$, $g^{\tau r} = \frac{1}{a^2}\sqrt{\frac{2M}{r}}$, $g^{rr} = \frac{1}{a^2}\left(1 - \frac{2M}{r}\right)$, and $g^{\theta\theta} = g^{\phi\phi}\sin^2\theta = \frac{1}{a^2r^2}$. Culetu found that r = 2M is not a curvature singularity and pointed out that the spacetime can be a model of a cosmological black hole. In addition, in spite of a non-zero off-diagonal component $G^1{}_0$ of the Einstein tensor, he showed that the corresponding matter field may be interpreted as an anisotropic fluid. In the Culetu spacetime, τ is a timelike coordinate everywhere, and we define the future direction by increasing τ. In this section, we will focus on the case with $a(\tau) = a_0|\tau|^\alpha$, where $a_0$ and α are positive constants.
It is noted that, by a coordinate transformation (2.63), the Culetu spacetime can be expressed in the Lemaître coordinates as $$ds^2 = a(\tau)^2\left[-d\tau^2 + \frac{2M}{r(\tau, \chi)}d\chi^2 + r(\tau, \chi)^2d\Omega^2\right]. \qquad(4.3)$$ Although the FLRW limit M → 0 is singular, the spacetime is asymptotic to the flat FLRW spacetime as χ → ∞, which is confirmed with a radial coordinate $r = (2M)^{1/3}(3\chi/2)^{2/3}$.
Global structure
Under our assumptions, the spacetime with the metric (4.1) is analytic except at τ = 0 and r = 0, while the spacetime with the metric (4.3) is analytic except at τ = 0 and τ = χ.
As a result, the Penrose diagram of the Culetu spacetime (4.1) with a(τ ) = a 0 |τ | α (α > 0) is drawn as in Fig. 8. It is clear that a maximally extended spacetime given in the domains τ > 0 and r > 0, which corresponds to the portion consisting of I and II, represents an asymptotically flat FLRW cosmological black hole with the event horizon at r = 2M. While r = 0 corresponds to a black-hole singularity, τ = 0 corresponds to a big-bang singularity. Similarly to the Sultana-Dyer class (3.1), the coordinate boundary (τ, r) → (−∞, 2M) corresponds to a finite affine parameter along a radial null geodesic. Therefore, if it is regular, the maximally extended spacetime consisting of I', II', III', and IV' also describes a cosmological black hole. Even in that case, similar to the Sultana-Dyer class of spacetimes, differentiability of the metric at the event horizon (τ, r) → (−∞, 2M) is a non-trivial problem. Hereafter, we will focus on the cosmological black hole corresponding to I+II.
Matter fields
Now let us identify the matter field giving the corresponding energy-momentum tensor T_µν (= G_µν/8π) in the cosmological black-hole spacetime in the domains 0 < τ < ∞ and 0 < r < ∞. For this purpose, the Lemaître coordinates (4.3) are useful because τ is timelike everywhere and the Einstein tensor G^µ_ν is diagonal there.
Hence, the corresponding energy-momentum tensor T_µν is of the Hawking-Ellis type I and can be identified as an anisotropic fluid with T^µ_ν = diag(−ρ, p_r, p_t, p_t) (4.19), where ρ, p_r and p_t are the energy density, radial pressure and tangential pressure. This result was obtained by Culetu in the Painlevé-Gullstrand coordinates (4.1) [20].
By a coordinate transformation (2.63), the energy-momentum tensor (4.16) can also be written in the Painlevé-Gullstrand coordinates (4.1). With a(τ) = a_0|τ|^α, the resulting expressions show that ρ + p_r ≥ 0 and ρ + p_r + 2p_t ≥ 0 hold in the domain τ > 0. Then, from the equivalent expressions for the remaining combinations, we can identify the regions where the energy-momentum tensor (4.16) respects the energy conditions in the domain τ > 0. The results are summarized in Table 7, which shows that all the standard energy conditions are violated in the region with a finite r for sufficiently large τ. In particular, the NEC is violated on the event horizon r = 2M in the late time τ > 2(α + 1)M. This implies that the Culetu spacetime may be a proper model of a cosmological black hole in general relativity only at early times.
In fact, the energy-momentum tensor (4.16) can be interpreted as a combination of a homogeneous perfect fluid and an inhomogeneous anisotropic fluid.

Up to now, we have assumed the form of the conformal factor a(τ) = a_0 τ^α (α > 0). Even with a more general form of a(τ), it is shown under weak conditions that all the standard energy conditions are violated near the singularity r = 0.
Properties as a cosmological black hole
Now we study a cosmological black hole described by the Culetu spacetime (4.1) with a(τ) = a_0 τ^α (α > 0) in the domain τ > 0 and r > 0.
Trapping horizon
Here we identify the locations of trapping horizons and their types. Consider a future-directed outgoing radial null vector k^µ and a future-directed ingoing radial null vector l^µ, which satisfy k^µ k_µ = l^µ l_µ = 0 and k^µ l_µ = −1. With the areal radius R = ar, the expansions (2.8) and (2.9) are computed, which shows that the trapping horizons associated with k^µ and l^µ are respectively given by τ = τ_+(r) and τ = τ_−(r), where

τ_±(r) = αr/(√(2M/r) ∓ 1). (4.42)

Note that τ_+(r) > 0 holds only in the domain 0 < r < 2M, and both τ_+(r) and τ_−(r) are monotonically increasing functions with τ_+(0) = τ_−(0) = 0.
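The curves in Eq. (4.42) as reconstructed here can be sanity-checked numerically; the snippet below, with illustrative parameter values and assuming the metric form (4.1) written earlier, verifies the three properties just stated plus the value τ_−(2M) = αM used in the Misner-Sharp discussion below.

```python
import numpy as np

M, alpha = 1.0, 1.0  # illustrative values
r = np.linspace(1e-4, 2*M*(1 - 1e-9), 400)
s = np.sqrt(2*M/r)

# Eq. (4.42): dR/dtau = 0 along the radial null slopes dr/dtau = 1 - s
# (outgoing) and dr/dtau = -1 - s (ingoing), with R = a0 tau^alpha * r
tau_plus = alpha*r/(s - 1)   # positive only for 0 < r < 2M
tau_minus = alpha*r/(s + 1)

assert np.all(tau_plus > 0) and np.all(tau_minus > 0)
assert np.all(np.diff(tau_plus) > 0) and np.all(np.diff(tau_minus) > 0)
print(tau_minus[-1], alpha*M)  # tau_-(r -> 2M) -> alpha*M
```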
Misner-Sharp mass
The Misner-Sharp mass (2.6) for the Culetu spacetime with the metric (4.1) is computed to give

m_MS = a[M − (ȧ/a)r²√(2M/r) + (ȧ/a)²r³/2]. (4.53)

Differentiating Eq. (4.53) with respect to r, we obtain ∂_r m_MS = (3ȧ/2)[(ȧ/a)r² − √(2Mr)], which shows ∂_r m_MS < 0 in the region

τ > αr√(r/(2M)). (4.56)
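Both (4.53) and the inequality (4.56) are reconstructions from the garbled source, so the following sympy sketch, again assuming the Painlevé-Gullstrand form (4.1) with a = a_0 τ^α, recomputes m_MS = R/2 (1 − g^µν ∂_µR ∂_νR) from scratch and factors its r-derivative to exhibit the sign condition.

```python
import sympy as sp

tau, r, M, alpha, a0 = sp.symbols('tau r M alpha a_0', positive=True)
a = a0*tau**alpha
s = sp.sqrt(2*M/r)
R = a*r  # areal radius

# g^{mu nu} dR dR with the inverse-metric components listed above:
# g^{tau tau} = -1/a^2, g^{tau r} = s/a^2, g^{rr} = (1 - 2M/r)/a^2
grad2 = (-sp.diff(R, tau)**2 + 2*s*sp.diff(R, tau)*sp.diff(R, r)
         + (1 - 2*M/r)*sp.diff(R, r)**2)/a**2

m_MS = sp.simplify(R/2*(1 - grad2))   # reproduces Eq. (4.53)
dm = sp.factor(sp.diff(m_MS, r))
print(dm)  # negative exactly where (alpha/tau) r^2 < sqrt(2 M r), i.e. Eq. (4.56)
```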
Figures 9 and 10 show that the event horizon r = 2M is in an untrapped region for τ > τ_−(2M) (= αM). Since the contraposition of Proposition 4 asserts that the DEC is violated in an untrapped region with ∂_r m_MS < 0, the DEC is violated on the event horizon r = 2M for τ > 2αM. This is consistent with Table 7, which shows that the DEC is violated everywhere for 0 < α ≤ 1/2 and on the event horizon in the late time τ > 4(2α − 1)M/7 for α > 1/2, because 2αM > 4(2α − 1)M/7 is satisfied.
Temperature of a cosmological black hole
Since the conformal Killing vector ξ^µ = (1, 0, 0, 0) in the Culetu spacetime with the metric (4.1) satisfies ℒ_ξ g_µν = 2(ȧ/a)g_µν and ξ^µ ξ_µ = −a²(1 − 2M/r), the event horizon r = 2M is a conformal Killing horizon. Its temperature (2.73) is computed to give T = 1/(8πM), which is the same as the temperature of the Schwarzschild black hole.
Next, we derive the temperature of the future outer trapping horizon τ = τ_+(r). In the Culetu spacetime (4.1), the Kodama vector (2.16) takes the explicit form given in Eq. (4.58). The temperature (2.75) of τ = τ_+(r), defined by Eq. (4.42), is computed to give an expression proportional to j(w), where j(w) is a dimensionless function of w := r/(2M). The function j(w) can be shown by cases to be positive in the domain 0 < w < 1 (corresponding to 0 < r < 2M) for α > 0. Hence, T_TH is a positive function of r in the domain 0 < r < 2M; it diverges as r → 0 (τ → 0) and converges to zero as r → 2M (τ → ∞).
In this section, we have fully investigated the Culetu spacetime and the corresponding matter field. Actually, the Culetu spacetime can be slightly modified to be non-conformally Schwarzschild, as shown in Appendix A.2, which is also a candidate for a cosmological black-hole spacetime.
Summary
In the present paper, we have fully investigated various conformally Schwarzschild spacetimes which are asymptotically flat FLRW universes filled with a perfect fluid obeying a linear equation of state p = wρ with w > −1/3. Among them, as shown in Sec. 2.4, the Thakurta spacetime (2.76) constructed with the standard Schwarzschild coordinates and the McClure-Dyer spacetime (2.77) constructed with the isotropic coordinates are identical. Therefore, according to the results in [15,16], these spacetimes do not describe a cosmological black hole.
In Sec. 3, we have clarified that the region with η > 0 and r > 0 of the Sultana-Dyer class of conformally Schwarzschild spacetimes (3.1), constructed with the Kerr-Schild coordinates and a(η) = a_0|η|^α (α > 0), describes a cosmological black hole, where η = 0 and r = 0 are curvature singularities. We have shown that the corresponding matter field in this cosmological black-hole spacetime can be interpreted as a combination of a homogeneous perfect fluid and an inhomogeneous null fluid. Unlike the interpretation by Sultana and Dyer [17] as a combination of a perfect fluid and a null dust, this novel interpretation of matter is valid in the whole spacetime. While the homogeneous perfect fluid is identical to the one in the background FLRW universe, the inhomogeneous type-II null fluid violates the NEC near the black-hole singularity at r = 0, as shown in Table 5. As summarized in Table 4, the total energy-momentum tensor violates all the standard energy conditions in the region with a finite r for sufficiently large η. We have further shown that the domain η < 0 also describes a cosmological black hole if the coordinate boundary given by (η, r) → (−∞, 2M) is regular.
In Sec. 4, we have clarified the global structure of the Culetu spacetime with the metric (4.1), constructed with the Painlevé-Gullstrand coordinates and a(τ) = a_0|τ|^α (α > 0), and shown that the region with τ > 0 and r > 0 describes a cosmological black hole, where τ = 0 and r = 0 are curvature singularities. As Culetu pointed out in [20], the corresponding matter field in this cosmological black-hole spacetime can be interpreted as a single anisotropic fluid and can also be interpreted as a combination of a homogeneous (cosmological) perfect fluid and an inhomogeneous anisotropic fluid. The latter inhomogeneous anisotropic fluid obeys linear equations of state and violates all the standard energy conditions everywhere. Similarly to the Sultana-Dyer class of cosmological black holes, the total energy-momentum tensor violates all the standard energy conditions in the region with a finite r for sufficiently large τ, as shown in Table 7. We have also shown that the domain τ < 0 in the Culetu spacetime describes a cosmological black hole if the coordinate boundary given by (τ, r) → (−∞, 2M) is regular.
Discussions
In the present paper, it has been clarified that the Sultana-Dyer class of spacetimes and the Culetu spacetime describe a cosmological black hole in the decelerating flat FLRW universe. However, they share a crucial property: as solutions of the Einstein equations, the energy conditions are initially satisfied but later violated. In concluding this paper, we will show that the NEC is later violated on the event horizon, and consequently neither of these classes of spacetimes describes the evolution of a primordial black hole in general relativity after it becomes smaller than the Hubble horizon. Since the qualitative properties of the Sultana-Dyer and Culetu cosmological black holes are similar, we will handle them together below. The conformal time t in the following argument stands for η and τ in the Sultana-Dyer metric (3.1) and the Culetu metric (4.1), respectively.
Here we express the asymptotic background FLRW universe in terms of the cosmological time t̄ as

ds² = −dt̄² + ā(t̄)²(dr² + r²dΩ²), (5.1)

which is obtained from the metric (2.36) with the conformal time t by a coordinate transformation t̄ = t̄(t) defined by a(t)dt = dt̄ and a redefinition of the scale factor as ā(t̄) := a(t(t̄)). If the background FLRW universe is filled with a perfect fluid obeying an equation of state p = wρ, we have ā(t̄) = b_0 t̄^β, where b_0 is a positive constant and β is given by β = 2/[3(1 + w)]. In the metric (2.36) with the conformal time t, we have a(t) = a_0 t^α with α = 2/(3w + 1), and thus β = α/(1 + α) holds.
Note that w = 0 and w = 1/3 correspond to a dust fluid and a radiation fluid, respectively, and the DEC for the background universe requires −1 ≤ w ≤ 1. Our assumption w > −1/3 in this paper corresponds to α > 0 and 0 < β < 1. In the following, we assume −1/3 < w ≤ 1, or equivalently 1/3 ≤ β < 1, under which the background FLRW universe is decelerating and the DEC is satisfied there. Then, the relation between t̄ and t is

t = t̄^(1−β)/[(1 − β)b_0], (5.2)

which is obtained by integrating dt = dt̄/ā(t̄).
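Equation (5.2) is reconstructed here from the stated integration; a short sympy check (with symbol names of my choosing) confirms both the integral and the consistency of the two exponent relations quoted above.

```python
import sympy as sp

tbar, b0, beta, w = sp.symbols('tbar b_0 beta w', positive=True)

# d/dtbar of the reconstructed Eq. (5.2) must equal 1/abar = 1/(b0 tbar^beta)
t = tbar**(1 - beta)/((1 - beta)*b0)
print(sp.simplify(sp.diff(t, tbar) - 1/(b0*tbar**beta)))  # 0

# beta = alpha/(1 + alpha) with alpha = 2/(3w + 1) reproduces beta = 2/[3(1 + w)]
alpha = 2/(3*w + 1)
print(sp.simplify(alpha/(1 + alpha) - sp.Rational(2, 3)/(1 + w)))  # 0
```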
The location of the event horizon is given by r = 2M for both the Sultana-Dyer and Culetu cosmological black holes. For the Sultana-Dyer black hole, the NEC is violated (and hence all the standard energy conditions are violated) on and outside the event horizon in the period η > 28(α + 1)M/9 (= η_N(2M)), where η_N(r) is defined by Eq. (3.30). For the Culetu black hole, as shown in Table 7, this period of NEC violation is given by τ > 2(α + 1)M. By Eq. (5.2), we express these two inequalities in a unified manner in terms of the cosmological time as t̄ > t̄_V, where

t̄_V = (2q b_0 M)^(1/(1−β)) (5.3)

with q = 14/9 for Sultana-Dyer and q = 1 for Culetu.
The areal radius of the event horizon is given by R(t̄, 2M) = 2Mā(t̄). Since the expansion of the background universe is decelerated, the event horizon is initially larger than, but subsequently becomes smaller than, the Hubble horizon H^(−1) (= t̄/β). In cosmology, the formation time of a primordial black hole is usually identified, in order estimation, with the time when the event horizon "enters" the Hubble horizon [45]. Thus, the formation time t̄ = t̄_F is determined by H^(−1) = 2Mā(t̄), which is solved to give

t̄_F = (2βMb_0)^(1/(1−β)). (5.4)
From Eqs. (5.3) and (5.4), we conclude t̄_F ≃ t̄_V for 1/3 ≤ β < 1 (or equivalently −1/3 < w ≤ 1), i.e., all the standard energy conditions are violated on the event horizon as soon as the primordial black hole forms. In other words, if the NEC is imposed on and outside the event horizon, neither the Sultana-Dyer nor the Culetu metric describes the evolution of a primordial black hole after the horizon entry. Although t̄_F ≪ t̄_V is realized for β ≪ 1, this corresponds to w ≫ 1, and then the DEC is violated in the background FLRW universe. To summarize, the results obtained in this paper firmly undermine the validity of the Sultana-Dyer and Culetu metrics as a model of primordial black holes.

Since the McVittie spacetime (A.1) is not conformally Schwarzschild, it is not an easy task to clarify the global structure of the spacetime. After a huge effort by Nolan in his series of papers [9-11], it has finally been shown that the coordinate system (A.1) does not cover a maximally extended spacetime if the scale factor a(t) obeys a(t) ∝ exp(H_0 t) as t → ∞ with a positive constant H_0, where the maximally extended spacetime describes a cosmological black hole [12,13]. In this appendix, we present another two non-conformally Schwarzschild spacetimes as candidates for a cosmological black-hole spacetime.
Operational domain for the new 3MW/1000s ECRH System on WEST
Introduction
Radiofrequency heating systems are widely used in magnetically confined fusion devices. Three of them have been used simultaneously in Tore Supra [1]: Lower-Hybrid (LH), Ion Cyclotron Resonance Heating (ICRH) and Electron Cyclotron Resonance Heating (ECRH). In the upgrade to the WEST configuration [2], the 118 GHz ECRH system was discontinued because its characteristics were not consistent with the new design of the machine. The former gyrotrons will therefore be replaced by 105 GHz/1 MW/1000 s gyrotrons to suit the new WEST geometry and power requirements. This modification accordingly includes an upgrade of most of the EC components.
The new ECRH project started in 2021. EC power is expected to be available in WEST from 2023, initially with limited power (1 MW), before reaching its full 3 MW/1000 s capability in a staged approach [3]. From a plasma scenario standpoint, this additional power source is predicted to improve access to H-mode. In addition, the flexibility and localization of the EC wave absorption in the plasma are expected to be useful in the context of impurity and MHD control.
Various numerical code developments have been carried out to support the use of ECRH power on WEST and to prepare its implementation. Firstly, the ray-tracing code REMA [4] has been interfaced with the WEST IMAS database, so that simulations using plasma parameters from existing shots can be readily performed. The operational domain in terms of injection angles for the EC antenna has been established as a result. Following this development, further simulations have been made possible. The potential role of EC power in preventing radiative collapse due to core tungsten accumulation, investigated, e.g., in ASDEX Upgrade, where ECRH has been demonstrated to be an efficient method to flatten tungsten density profiles [5], has been analysed in the context of integrated modelling. Experimental data from shot 55025, featuring a radiative collapse, has been used in simulations with the RAPTOR code [6]. By comparing the LH and EC power depositions, the expected advantages provided by the new ECRH system have been assessed.
Improvements of the REMA raytracing code
The ray-tracing code REMA [4] allows electron cyclotron wave propagation and absorption in a plasma to be simulated in an accurate and numerically efficient fashion. It has been extensively employed, among other studies, to simulate Tore Supra experiments [7]. In order to facilitate its use as a tool for the design and analysis of future experiments in WEST, some further developments have recently been carried out; they are summarized in Figure 1.
The REMA code requires the magnetic equilibrium, plasma parameters and the EC antenna configuration as inputs. Based on general wave and ray-tracing theory, the code provides the toroidal and poloidal projections of the ray trajectories, as well as the power and driven current density profiles and the total fraction of absorbed power. To facilitate its use, REMA has been directly interfaced with the WEST IMAS database. This has allowed integrated modelling simulations of existing pulses including an additional source of EC power to be performed. It is worth mentioning that the results of these simulations are also written in the IMAS format in preparation for potential further simulations. The main objective of these developments is the determination of the operational domain for the EC antenna in terms of wave toroidal and poloidal injection angles. For this aim, a dedicated routine has been developed to sweep all mechanically reachable injection angles and to build a synthetic view of the obtained results, as sketched below. As an illustration, Figure 2 displays the location of the EC absorption (in normalized radius) as a colour map over the whole toroidal/poloidal injection angle grid in the case of WEST shot 55799 [8]. Also shown are contours indicating the total absorbed power fraction. As an example, the outermost contour shows the 70% absorbed fraction in the first pass through the plasma.
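The angle-sweep routine described above can be pictured with the short stand-in below. REMA's actual interface is not public, so run_rema and its analytic body are hypothetical placeholders; the point is the workflow, i.e. sweeping the reachable (toroidal, poloidal) grid, storing the deposition radius and absorbed fraction per point, and rendering them as the colour map with absorbed-fraction contours of Figure 2.

```python
import numpy as np
import matplotlib.pyplot as plt

def run_rema(tor_deg, pol_deg):
    """Hypothetical stand-in for one REMA run at given injection angles.
    Returns (deposition radius rho, first-pass absorbed power fraction);
    the smooth analytic body only mimics the qualitative trends."""
    rho = min(np.hypot(tor_deg/30.0, pol_deg/25.0), 1.2)
    frac = np.exp(-max(rho - 0.9, 0.0)**2/0.02)  # absorption drops near the edge
    return rho, frac

tor = np.linspace(-30, 30, 61)   # mechanically reachable angles (illustrative)
pol = np.linspace(-25, 25, 51)
rho_map = np.zeros((pol.size, tor.size))
frac_map = np.zeros_like(rho_map)
for i, p in enumerate(pol):
    for j, t in enumerate(tor):
        rho_map[i, j], frac_map[i, j] = run_rema(t, p)

plt.pcolormesh(tor, pol, rho_map, shading='auto')
plt.colorbar(label='deposition location (normalized radius)')
plt.contour(tor, pol, frac_map, levels=[0.7], colors='k')  # 70% absorbed contour
plt.xlabel('toroidal angle (deg)')
plt.ylabel('poloidal angle (deg)')
plt.show()
```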
Determination of the limits for the new ECRH system
As a preparation for the implementation of the new ECRH system on WEST, it is necessary to define its operational domain. It characterizes the injection angle limits beyond which the waves are insufficiently absorbed by the plasma, regardless of its parameters. This is necessary for device safety, as poor absorption can result in stray radiofrequency power levels in the vacuum vessel that are potentially dangerous for the various systems installed in WEST.
Plasma profiles dependence for the wave absorption and propagation
In order to define those limits, the set of plasma conditions for which the operational domain is the widest needs to be identified. Any change of the plasma parameters relative to these conditions will then fall within these maximum limits. This is why the propagation and absorption behaviour as a function of the plasma parameters needs to be precisely characterized.
As is well known, the electron density has a direct influence on the wave propagation properties. Indeed, larger densities result in larger wave refraction in the plasma, until a cut-off is reached and the wave is completely reflected. In WEST, because the magnetic field is high, the cut-off is expected at a local electron density of n_e,limit ~ 14×10^19 m^-3. Considering a typical line-averaged electron density n_l ~ 6×10^19 m^-3, the cut-off is rarely reached, as n_e,max is usually lower than n_e,limit. In order to establish the widest operational domain, appropriate injection angles should be found that compensate for wave refraction in the case of high-density plasmas.
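As a plausibility check of the quoted cut-off density, the snippet below evaluates the ordinary-mode plasma cut-off n_c = ε0 m_e ω²/e² at 105 GHz, which indeed lands near 14×10^19 m^-3; whether the O-mode cut-off is the relevant one here is my assumption, since the text does not specify the polarization.

```python
import numpy as np
import scipy.constants as c

f = 105e9  # ECRH frequency (Hz)
# O-mode cut-off: omega_pe = omega  =>  n_c = eps0 * m_e * omega^2 / e^2
n_c = c.epsilon_0*c.m_e*(2*np.pi*f)**2/c.e**2
print(f'n_c = {n_c:.2e} m^-3')  # ~1.37e20 m^-3, i.e. ~14e19 as quoted
```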
In Figure 3, it is observed that similar absorption locations (in the vicinity of the resonance layer, shown with a black line) are obtained for different injection angles when the line-averaged density is varied in the range expected in WEST scenarios, i.e. from n_l ~ 2×10^19 m^-3 to 9×10^19 m^-3. A modification of the magnetic field has a negligible influence on the wave propagation properties but a strong influence on the resonance location. Indeed, while the 105 GHz frequency was chosen to obtain central absorption at the nominal WEST magnetic field with oblique propagation (for current drive applications), decreasing the magnetic field moves the resonance towards the high-field side.
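The resonance shift can be made concrete with the fundamental electron cyclotron resonance condition ω = eB/m_e and the tokamak field decay B ≈ B0·R0/R; the numbers below use an illustrative major radius of 2.5 m for WEST (an assumption on my part) and show the resonance moving inboard as B0 is lowered.

```python
import numpy as np
import scipy.constants as c

f = 105e9
B_res = 2*np.pi*f*c.m_e/c.e            # fundamental resonance field: ~3.75 T
print(f'B_res = {B_res:.2f} T')        # close to the nominal ~3.7 T field

R0 = 2.5                               # assumed WEST major radius (m)
for B0 in (3.7, 3.4):
    R_res = R0*B0/B_res                # where B(R) = B0*R0/R equals B_res
    print(f'B0 = {B0} T -> R_res = {R_res:.2f} m')  # lower B0 -> more inboard
```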
Modifying the electron temperature has little influence on the wave propagation. Larger temperatures are known to increase the absorbed power fraction. On the other hand, close to the plasma edge, waves are generally poorly absorbed by the plasma. Increasing the temperature therefore allows absorption locations closer to the plasma edge to be obtained with a sufficient absorbed fraction, thereby enlarging the operational domain.
As a conclusion, the optimal conditions for which the injection limits are maximised are low density, high temperature and nominal magnetic field. In WEST, this corresponds to a line-averaged electron density of around 2×10^19 m^-3, an electron temperature of around 6 keV and a central magnetic field of around 3.7 T.
Operational limits determination
With the global plasma conditions now defined, it is useful to characterize the limits within which EC waves can be absorbed from the core to the edge, for all three steerable mirrors. For the middle mirror, it is possible to restrict the domain to the upper or lower part of the plasma, as they are approximately symmetric, except for the divertor area (see Figure 4). From the REMA simulations consisting of the detailed injection angle scans described previously (see Figure 2), a global view of the system capabilities is obtained. Figure 5 shows results for the lower mirror, inside the limits defined by the antenna mirror movements. Repeating the process for the upper and middle mirrors, and refining the limits by running additional ray-tracing simulations, the operational domain for every mirror has been obtained, as shown in Figure 6.
Restricted absorption domain for different plasma conditions
Whereas the previous global studies establish that waves are poorly absorbed outside the determined limits, more detailed analyses are needed to better characterize the acceptable injection angles depending on the plasma conditions. A study was therefore initiated to obtain a general view of the injection angles providing satisfactory absorption at the first harmonic. For this purpose, three different plasmas are considered: low-temperature (~1 keV at the centre), moderate-temperature with additional heating (~3 keV) and high-temperature (~6 keV). For these temperatures, the objective is to estimate the magnetic field domain compatible with the ECRH system, i.e. with an acceptable domain of injection angles and a sufficiently wide range of possible absorption locations. Simulations have then been run at the magnetic field limit for a low-density (n_l ~ 2×10^19 m^-3) and a high-density plasma (n_l ~ 9×10^19 m^-3). The final domain with respect to wave absorption for the three different temperatures has been determined by combining the low- and high-density results. Figure 7 allows some general tendencies to be identified. The first, expected, result is that the higher the temperature, the wider the domain of acceptable absorption. Accordingly, off-axis injection is restricted because the local temperature is lower than the core temperature. To maximise absorption, centred poloidal and toroidal injection appears preferable.
Integrated modelling of WEST scenarios with EC power
The objective of the next study is to evaluate the potential positive impact of EC power in WEST scenarios, and to compare its influence with that of the other, already operating, radiofrequency heating systems (LH in the present case).
Radiative collapse in WEST
As comprehensively presented in Ref. [9], WEST plasmas with LH power occasionally exhibit temperature collapses due to the power radiated by tungsten impurities in the plasma core exceeding the electron heat source. Shot 55025 displays an example of such a collapse (Figure 8). Around 9.5 s, a fast decrease of the electron temperature limited to the central region (ρ < 0.4) is observed. This coincides with a sudden increase of the tungsten density estimated from bolometry measurements, and a simultaneous decrease of the HXR (hard X-ray) signal indicating a modification of the LH power deposition [9].
Impact of the ECRH system
The first study concerns the power deposition profiles. On the one hand, the LH deposition profile has been simulated with a reduced model included in the METIS code [10]. The plasma kinetic profiles used have been fitted from experimental measurements. On the other hand, REMA has been run with adapted injection angles in order to obtain a centrally localized absorption in the conditions of WEST shot 55025. Figure 9 illustrates the differences between the computed LH and EC power deposition profiles. Firstly, typical LH deposition profiles, in the multi-pass regime typical of WEST, are radially wider than ECRH profiles. In addition, whereas ECRH allows power to be deposited in the core, the maximum of the LH deposition is more off-axis. The second observation that makes ECRH interesting is the invariance of its deposition profile with respect to the electron temperature decrease. When the collapse, characterized by a fast temperature decrease at t ~ 9.5 s, occurs, the LH deposition moves more and more off-axis and its maximum location reaches ~0.7 at t ~ 10 s, thereby reducing the heat source on electrons. In comparison, the EC deposition profile is not affected. Its width and maximum remain constant, which allows efficient electron heating in the core to be maintained even in the presence of a radiated power increase caused by impurities. Integrated simulations using the RAPTOR transport code [6] have been performed in order to further assess the benefits of an EC power source in WEST scenarios. This code allows the evolution of the electron temperature to be simulated, depending on turbulent transport and heat sources. For the present simulations, the Bohm-GyroBohm transport model [11] has been used. In order to compare the electron temperature evolution with LH and EC power, shot 55025 has been considered at time t = 8 s, i.e. during the flat-top phase of the discharge. While most of the data was taken from experiment and diagnostics, the METIS interpretative run was used to fill in the missing data. First simulations have been run using the METIS output IMAS datafile, varying the W concentration from 0 to 5×10^-4. Secondly, the LH power (2.8 MW) has been replaced by a central source of EC power varied between 0 and 3 MW, and the W concentration has again been varied. The resulting central electron temperature is displayed as a colour map in Figure 10. Firstly, for approximately equivalent injected power levels (2.8 MW for LH and 3 MW for ECRH), the core electron temperature is twice as high with ECRH at low tungsten concentrations. At higher tungsten concentrations, it becomes around five times higher. The second quantitative observation is that the same electron temperature is obtained for an EC power level four times lower than the LH power level when the tungsten concentration is low. The difference is even larger at higher tungsten concentrations. In terms of core electron heating, ECRH is therefore more efficient due to its more central deposition profile. This observation is not surprising, as the main role of LH waves in WEST is efficient current drive rather than central electron heating. ECRH should therefore, in principle, prevent a radiative collapse from occurring at the tungsten concentration levels observed in some situations. These first studies demonstrate the major benefit that can be anticipated from the future EC system in WEST.
Prospects
Additional integrated modelling simulations are ongoing with RAPTOR, to better characterize the impact of the central heating and current drive from EC waves. The next objectives are to run refined simulations using the neural-network transport model associated with the quasilinear gyrokinetic code QuaLiKiz (QLKNN) for turbulent transport [12], and LUKE for first-principles-based modelling of the LH deposition profiles [13]. Then, time-evolving integrated modelling of discharges exhibiting increasing core tungsten density in the presence of EC power will be performed. These simulations will be applied to a wide range of shots from the WEST database. Finally, the capability of the new ECRH/ECCD system to control MHD effects during the ramp-up and plateau phases of WEST plasmas will be studied.
The Cross-National Diffusion of the American Civil Rights Movement: The Example of the Bristol Bus Boycott of 1963
This paper is a case study of the bus boycotts of Montgomery in the US (1955-56) and Bristol in the UK (1963). Since the two movements seem to share a number of similarities, the aim of this paper is to determine if they are an instance of what the sociologists Doug McAdam and Dieter Rucht have called "cross-national diffusion" and to explain this phenomenon. The first part of this article will focus on the cultural similarities between African Americans and Afro-Caribbeans to show that they are a necessary condition for diffusion because they enabled black Bristolians to identify with the Civil Rights activists in the US. Then the second part will argue that the Bristol Bus Boycott is not a mere copy of its Montgomery source and that the specificity of the British context endowed it with new characteristics. Finally, the third part will demonstrate that the relational channel between the two movements played a crucial part in the diffusion process. Indeed it accounts for the rational choice of taking the Montgomery Bus Boycott as a model in spite of the differences between the two contexts, in an effort to generate propaganda intended to force the bus company to lift the colour bar.
The Cross-National Diffusion of the American Civil Rights Movement: The Example of the Bristol Bus Boycott of 1963
Claire Mansour
"When someone demonstrates that people are not powerless, they may begin to act again."British Marxist historian Eric Hobsbawm.
When a group of people manages to induce institutional change to achieve their aims through the successful staging of a protest movement, it generally convinces other groups of people to do the same. This process of social mimicry is linked to the concept of "diffusion", which is one of the basic tenets of sociology. In 1968, Elihu Katz broadly defined "diffusion" as: "[…] the acceptance of some specific item, over time, by adopting units - individuals, groups, communities - that are linked both to external channels of communication and to each other by means of both a structure of social relations and a system of values, or culture" (Katz, 78).
Doug McAdam and Dieter Rucht then applied this definition to the analysis of social movements to explain the transfer of ideas and practices from one movement to another in a different country. Their theory of cross-national diffusion involves a group of adopters who will borrow one or several items from a group of transmitters through a combination of relational and non-relational channels, provided that the adopters can identify with the transmitters (McAdam and Rucht, 56-74). Their model is based on a case study of the American and the German New Left, which leads them to conclude that the tactics and ideology of the American New Left crossed the borders to Germany, where they were adopted by the students.
Can the same be said about the Montgomery and Bristol bus boycotts? Even at first glance, the similarities between the two events are too striking to be a mere coincidence. On 1 December 1955, Rosa Parks refused to give up her bus seat to a white man, as required by the Alabama and Montgomery segregation laws. The bus driver subsequently called the police and she was arrested. The leader of the local NAACP (National Association for the Advancement of Coloured People), Edgar Daniel Nixon, saw her trial as the perfect opportunity to challenge the constitutionality of the bus segregation laws. In the two days between Parks's arrest and trial, the leaders of the black community formed the Montgomery Improvement Association, chose Martin Luther King as its president, and decided to launch a boycott of the city buses. This protest lasted for 381 days until it was called off on 20 December 1956, after the Supreme Court ruled that the bus segregation laws were unconstitutional, forcing the city to pass a new ordinance allowing black citizens to sit anywhere they pleased.
As for Bristol, the initial spark for the boycott came in April 1963, when a young West Indian man called Guy Bailey was refused a job interview on the grounds of his skin colour, despite the fact that he was well qualified for the post of bus conductor. But in the early 1960s there was no law in the UK forbidding racial discrimination, so the manager of the Bristol Omnibus Company, Ian Patey, was perfectly within his rights when he turned Bailey down. In fact, Paul Stephenson - who was both the spokesman of the West Indian Development Council and Bailey's teacher - had decided that his pupil would act as a test case to denounce publicly the Bristol Omnibus Company's colour bar against black bus crews. So when Bailey's job interview was cancelled, as expected, Stephenson called for a boycott of the network in protest. Just like Rosa Parks, Guy Bailey's impeccable profile made him the perfect test case for public exposure. The boycott lasted until 28 August, when Ian Patey announced that the only criterion for recruiting bus crews would be their suitability for the job. Thus, like the Montgomery protesters, the Bristolian activists achieved their aims.
The analogy between the two movements raises several questions. If one assumes that they are not isolated events and that they are both related, what, then, is the nature of the link between them? If it can be argued that the transfer of the Montgomery Bus Boycott to Britain is an instance of cross-national diffusion, which particular elements of the Afro-American protest were adopted by their British counterparts? How can this phenomenon be explained?
To answer these questions, the first part of this article will show that the cultural similarities between the adopters and the transmitters are a necessary condition for diffusion because they enabled black Bristolians to identify with the Civil Rights activists in the US. Then the second part will argue that the Bristol Bus Boycott is not a mere copy of its Montgomery source and that the specificity of the British context endowed it with new characteristics. Finally, the third part will demonstrate that the relational tie between the two movements played a crucial part in the diffusion process, since it accounts for the rational choice of taking the Montgomery Bus Boycott as a model despite the differences between the two contexts, in an effort to generate propaganda to force the bus company to lift the colour bar.
Cultural similarities as a necessary condition for identification
The transfer of elements from one movement to another requires some kind of connection between them in the first place. Most sociologists have argued that cultural similarities are instrumental in bringing about the diffusion process because they enable the adopters to identify with the transmitters. Once the adopters come to perceive themselves as similar to the transmitters, a bond develops between the two communities which will then mediate the transfer. Cultural similarities act as a non-relational channel of diffusion, meaning that they do not depend on interpersonal contact to exist. The West Indian community in Bristol and its African American counterpart in Montgomery can be seen as sharing several cultural similarities. They had both been through the distant and real experience of slavery 1 and, of course, they spoke the same language, which also facilitated the transfer.
Similar social category
The black citizens of both Montgomery and Bristol were generally confined to the lowest social categories because of white racism and discrimination. Black immigrants arriving in Bristol lived mainly in the deprived area of Saint Paul's, where they were exploited by slum landlords who took advantage of the housing shortage to charge exorbitant rents (Dresser, 1986, 7). Some historians believe that - since there were no laws against racial discrimination at the time - it was not unusual to see signs on the windows of some lodging houses saying "No Irish, no blacks, no dogs". They were also frequently refused service in shops or pubs. The rate of black unemployment was over twice that of whites 2, while those who worked were often relegated to menial jobs.
Although relatively new and much less ingrained than in the Southern States, racism stood in the way of working-class consciousness. British trade unions resisted the employment of black workers and the poorest white Bristolians could always find comfort in the idea that they were still a cut above the "coloureds". Whites on both sides of the Atlantic had similar prejudices, ranging from the notion that it was unclean to touch black people to the fear that they were lusting after white women (Fryer, 143).

Communities with strong social ties

Both the African American community in Montgomery and the West Indian community in Bristol had strong social ties, with similar networks linking people through social, cultural and religious activities, although they took a more informal shape amongst the latter. The Afro-Caribbean Bristolians also had their own separate churches and even if their pastors were reluctant to get involved on a political level, they would still encourage their flocks to attend marches and take part in the boycott once it was launched. The West Indian Association had been organising cultural events such as the recent celebration of Jamaican independence. They had also tried to meet with city councillors to raise the issue of racial discrimination in housing and employment, but it did not result in any concrete change. Therefore, in 1962, Owen Henry and Roy Hackett formed the West Indian Development Council to deal specifically with racial discrimination. Paul Stephenson, who had just come back from a three-month trip to the United States where he had studied the Civil Rights movement closely (Marwick, 239), later became its spokesman. Although he was not of West Indian origin - he was West African on his father's side and British on his mother's side - and had only recently arrived in Bristol, Stephenson used the existing social networks and organisations of the Afro-Caribbean community. He played a crucial part in providing a relational tie between the two movements and giving the initial impetus which triggered the identification process.
Collective identity and related practices
Through the identification process, the adopters come to define their collective identity as similar to that of the transmitters. According to McAdam and Rucht, the level of identification is proportional to the number of elements adopted (63). In other words, the more thorough the identification, the more extensive the diffusion process.
Because they strongly identified with the African American activists, black Bristolians adopted their tactics and their belief in non-violent direct action as the means to achieve their ends. As in Montgomery, the West Indian Development Council organised a boycott of the Bristol city buses, which required participants to walk or cycle to work and back. But they also borrowed tactics which had been used by the American Civil Rights Movement after the Montgomery protest, such as marches and sit-ins. On 6 May 1963 they held what some believed was the first black-led march against racial discrimination in the United Kingdom, which gathered between 50 and 200 people according to different estimates (Dresser, 31). They also staged sit-ins at Fishponds Road in the north-east area of Bristol to prevent buses from accessing the city centre.
But despite all these similar features, the Bristol protest was far from a mere copy of its American source of inspiration and took on new characteristics of its own, shaped by the specificity of the British context.
Specific characteristics of the Bristol Bus Boycott
The British context

Of course, Britain in the early 1960s was nothing like the American South. First, black Bristolians were mainly Commonwealth immigrants who had come in growing numbers after the Second World War thanks to the 1948 British Nationality Act, which granted them the right to work and settle in the United Kingdom. The case of the South Asian immigrants will not be dealt with in this article because, although they benefitted from the gains of the Bristol Bus Boycott, they did not play an active part in its organisation. As for West Indian immigrants, they had been educated through the colonial system, so many of them revered Britain as the "mother-country" and had come to consider themselves as Englishmen of sorts (Fryer, 374). On their arrival, they were very disappointed to find that British society did not match their expectations.
Racial tensions escalated and anti-black riots erupted in several cities, most notably in Nottingham and London (in the area of Notting Hill) in 1958. In both cases, the events took a similar pattern. The presence of a couple composed of a West Indian man and a white woman seemed to have sparked the riots, leading to episodes of fighting between local blacks and whites, which were then distorted by sensationalist media coverage leading to the arrival of hundreds of anti-black rioters who roamed the streets and attacked the non-white residents (Pressly). Problematic race relations could no longer be ignored and the government retorted with the 1962 Commonwealth Immigrants Act, which was aimed at restraining black presence by limiting immigration from Commonwealth countries to those who had employment vouchers. This meant that the source of the problem had been analysed as being the number of black immigrants, and not white racism.
The colour bar
But what comes to mind first in thinking about the American South in the early 1960s remains the Jim Crow system. Since 1896, the "separate-but-equal" doctrine had been given legal sanction by the Plessy v. Ferguson decision of the Supreme Court, thereby making segregation the official practice of the Southern States. In Britain, the colour bar was more insidious, precisely because those who practised it would not admit they were doing so. In the 1950s British trade-unionists resisted the employment of black workers in many industries. They feared the threat of "cheap labour" undercutting their wages or breaking their strikes. They often set up quotas restricting their numbers to a maximum of five per cent, agreed with the management that "coloured" workers would not be promoted above whites, or that the rule of "last in first out" would not apply while there were still black workers who could be dismissed first (Fryer, 376). White employees even organised strikes in protest, as in West Bromwich in 1955, where they objected to the employment of a black conductor. It was also common practice for unions to vote for the introduction of a colour bar to prevent their employers from hiring black labour 3. After the Second World War, the National Union of Seamen managed to keep black workers off British ships (Fryer, 367). These arrangements deliberately ignored the national stance of the Trades Union Congress, which had first passed a resolution decrying racial discrimination in 1955, and then reaffirmed its commitment in 1959 (Wench, 8).
Local context -the city of Bristol
Bristol's past as one of the leading British slave ports had left enduring traces, ranging from streets named after famous slave traders to particularly difficult race relations (Fryer, 399). Only a year before the bus boycott, a fight had broken out between black and white stevedores, which resulted in the dismissal of 60 black workers because the whites refused to work with them (Dresser, 2007). By 1963, the city numbered approximately 7,000 West Indians (Dresser, 1986, 39). While the sight of black bus crews was fairly common in other British cities such as London, Birmingham, Manchester or even neighbouring Bath ("Sir Learie Joins in Colour Bar Issue"), the state-owned Bristol Omnibus Company refused to employ black drivers and conductors. The General Manager, Ian Patey, claimed that hiring black labour would downgrade the job and convince the current white staff to seek work elsewhere (Dresser, 1986, 19). But the decision to enforce a colour bar did not originate with the management. In 1955, the Passenger Group of the Transport and General Workers' Union in Bristol, which represented bus drivers and conductors, had voted in favour of excluding black bus crews (Dresser, 1986, 12). Once more, this resolution from the local trade union branch was completely at odds with its national commitment to anti-racism and its public condemnation of apartheid in South Africa, as Stephenson would be quick to point out ("W. Indians 100 p.c. for Bus Boycott"). Finally, if blacks represented 80% of bus users in Martin Luther King's city, they were far from forming such a large proportion in Bristol (Kelly), so the boycott would not put the same economic strain on the bus company 4. Why, then, did the black British activists choose this form of protest?
In fact, despite all these differences it would seem that Stephenson and the West Indian Development Council decided to opt for a Southern-style boycott of the buses precisely because of its connection with the segregationist practices of the American South.
Diffusion as a rational choice
The adoption of the transmitter's collective identity and tactics is not an automatic process of mimicry but results from a rational choice. Not only were there elements of cultural similarity between the African Americans of the Civil Rights movement and the black Bristolian activists, but there was also a conscious process of drawing parallels between them.
The prestige of the Civil Rights Movement
Paul Stephenson - who was the only relational tie bridging the gap between the two movements - acknowledged that the Montgomery Bus Boycott had had a significant influence on him and had given him the inspiration to stage a similar protest in Bristol.
He declared in an interview in November 2005:
The prestige of the leading Civil Rights figures not only convinced Stephenson to take action, but it also contributed to attracting activist support for the movement, and getting it favourable media coverage by giving it credibility and legitimacy.The mass media had made sure that the story of Rosa Parks reached the British coasts, and by 1963 Martin Luther King had become an international figure.Therefore, it might be no coincidence that on 28 August, while King was delivering his famous "I have a Dream" speech, the General Manager of the Bristol Omnibus Company announced that they had decided to lift the colour bar.
Extending the analogy to create propaganda
By identifying with the African American Civil Rights Movement, black Bristolians were by extension comparing the British authorities to their Southern American counterparts who officially practised segregation. It was part of Stephenson's strategy to generate propaganda to get national media attention and to shame the bus company and the local trade union into action. Some of the similarities between both movements were carefully staged. On the first day of the boycott, the leaders of the West Indian Development Council called a press conference and Owen Henry deliberately climbed on a bus and went to stand at the back, to draw a parallel between black Southerners forced to stay at the back and British bus conductors who generally stood there as well: "I boarded the bus there and stayed at the back… especially for the photographers to take a photograph of a black person at the back of the bus" ("Sir Learie Joins in Colour Bar Issue"). As expected, this caught the imagination of the press and they added their own touch of media sensationalism, as demonstrated by this editorial from the Bristol Evening Post: "This is a time for Cool Heads. We want no Little Rock in Bristol" ("Sir Learie Joins in Colour Bar Issue"). Little Rock is a town in Arkansas where in 1957 nine black students had to be escorted by armed federal troops to the previously all-white Little Rock Central High to protect them from a howling white mob. Of course both events were hardly comparable, but it turned out to be very effective propaganda material. During the following days, several famous figures came out in support of the boycott, such as various local Labour MPs (Tony Benn, Fenner Brockway and Stan Awbery) along with Harold Wilson, who was at that time the leader of the Opposition. Sir Learie Constantine, a world-famous Trinidadian cricket player who had been appointed High Commissioner for Trinidad and Tobago, also joined the fight and played a very important part in vehemently condemning the colour bar.
A successful tactic

Finally, diffusion concerns mainly the practices of movements which come out victorious. The Montgomery Bus Boycott was seen as having brought about the desegregation of Southern buses. Regardless of the historiographical debate on the role of the Supreme Court and of the American legal system in general (Glennon, 59-112; Kennedy, 999, 1067), it is the perception that the protesters had triumphed that convinced people that their actions could have an impact on decision-makers.
Although the boycott did not produce integration, that was the perception, and perceptions may be more important than reality. The boycott came to exemplify the power of an African American community to mobilize and successfully resist and defeat segregation. A recalcitrant Montgomery yielded before the power of the people.
[…] The influence of this perception was enormous.
[…] Even though the boycott itself failed to integrate the city buses, it stimulated other communities to stand up against injustice (Glennon, 60).
The 1953 Baton Rouge Bus Boycott 5 fell into oblivion precisely because it ended with a compromise solution. The activists called off the boycott because they had managed to obtain substantial gains: opening all bus seats except two at the front, which would be reserved for whites, and two at the back for blacks ("First Civil Rights Bus Boycott"). But they had fallen short of obtaining the complete desegregation of their city buses. That is why today the Montgomery Bus Boycott is remembered as the event which initiated the Civil Rights Movement and gave it national prominence. It also explains why its tactics were adopted outside the United States. By choosing a successful strategy, the West Indian Development Council also influenced the perceptions of its potential recruits, who would deem the protest more likely to succeed and join in greater numbers.
An endless chain of influence
It could also be argued that Martin Luther King had himself been inspired by Mohandas Gandhi and his strategy of non-violent resistance, deployed both in South Africa when he struggled for Indian rights there 6 and during the movement for Indian independence. King visited India in 1959 and wrote an account of his experience in the black magazine Ebony in July of the same year, in which he also acknowledged the significant influence of the Mahatma. "While the Montgomery boycott was going on," he declared, "India's Gandhi was the guiding light of our technique of non-violent social change" (King, 84). The leaders of the Montgomery Improvement Association had preached the doctrine of non-violence since the first day of the boycott, with King stressing that the only weapon they would use was "the weapon of protest" (Branch, 140). Gandhi's principle of Satyagraha, which can be literally translated as "Truth-Force", referred to the power of non-violence as a spiritual force which could overcome physical strength. By choosing religious values as the ethos of their protest movements, both Gandhi and King were defining their practices as morally right while presenting their opponents and the systems they represented as morally wrong. Both leaders had read Henry David Thoreau's essay on civil disobedience - "Resistance to Civil Government" - and had endeavoured to apply its principle of non-cooperation with unfair, immoral systems. Gandhi's Salt March of 1930 was both an example of a long spiritual march (Padyatra) and of a boycott, since it involved the refusal to pay the British tax on salt and the organisation of an alternative system of illegal salt production. The American Civil Rights Movement would later use the same techniques, as for instance during the march from Selma to Montgomery in 1965 and during the 1955-56 bus boycott. The African Americans refused to abide by the bus segregation laws of their city and set up their own efficient system of car pooling to replace it. After the city police commissioner threatened to arrest taxi drivers who undercut their fares to drive protesters, the leaders of the Montgomery Improvement Association managed to convince car owners to lend their vehicles to voluntary drivers, providing up to 20,000 rides a day (Branch, 146). But this successful protest was far from being the first boycott in history. The etymology of the verb "to boycott" can be traced back to the British Captain Charles Cunningham Boycott, who demanded unfair rents from his Irish tenants and then evicted those who could not pay up. In 1880, Charles Stewart Parnell - the President of the Irish National Land League - asked the local farm labourers to refuse to harvest Boycott's crops, leaving them out to rot. The name stuck to the strategy, which then spread quickly all over Ireland (Dooley, 4, 18). Although it is probably impossible to pinpoint the first use of the practice with certainty, it appears that it had already been used before 1880. The American colonists had purposefully avoided buying British goods over a hundred years earlier because of the duties imposed by Westminster, while radical Quakers had launched the free produce movement in the 19th century, calling for people to stop purchasing products made from slave labour. Thus it would seem that the roots of this particular protest technique run deep in British colonial history.
Conclusion
The Bristol Bus Boycott is therefore related to the American Civil Rights Movement in so far as it is an instance of cross-national diffusion. Paul Stephenson, who had visited the US to study Martin Luther King's movement, acted as the relational tie directly connecting both communities. On his arrival in Bristol, he triggered the identification process of the adopters (the black Bristolians) with the transmitters (the African Americans). By doing so, he amplified the effects of the information already available through non-relational ties, which had been relayed by the mass media, and drew upon cultural similarities between the two communities. The transfer of tactics and ideology can be explained by the fact that they were associated with the personal prestige of Martin Luther King and because they were effective in generating propaganda to attract media attention and shame the authorities. But most of all, it was the perception that these practices would lead to social change which caused their diffusion. Although both the Montgomery Improvement Association and the West Indian Development Council managed to obtain guarantees that their demands would be met, they did not achieve complete, immediate change due to the reluctance of the local whites to comply. In the days following the inauguration of desegregated seating, white supremacists turned to violence to express their resentment, including physical assaults, gunshots and even bombings (Kennedy, 1055). In practice, many African Americans continued to avoid taking the buses, while most of those who did went to sit at the back to avoid any kind of friction (Kennedy, 1057). As for the Bristol Omnibus Company, the colour bar was immediately replaced by the introduction of a five per cent racial quota. In 1965, black bus crews remained a rare sight, representing less than two and a half per cent of the city's bus drivers and conductors, and, by the early 1970s, the only positive change achieved was the increase of the quota to six per cent (Dresser, 1986, 48).
A similar process of cross-national diffusion occurred in Northern Ireland when Catholic activists decided to adopt the tactical repertoire, ideas, slogans and songs of the American Civil Rights movement. Of course cultural similarities between white Irishmen and black Southerners are not apparent, but this all the more highlights the process of conscious social construction of the protesters' collective identity. What is more, as black Bristolians had done before them, they also exploited the analogy to portray both their Loyalist opponents and the Northern Irish authorities in a very negative light. Thus, the American Civil Rights movement remains a source of influence, both as an inspirational theoretical model for staging a successful protest and because it summons images of a struggle between the progressive forces of racial equality and justice against the reactionary powers of racism and discrimination.

NOTES

1. Although it should be noted that there had been no form of slavery on British soil since the Middle Ages and that the practice was abolished in the British colonies in 1833. Another specificity of the Afro-American experience is that, unlike the Afro-Caribbeans, they had no country to come back to and were bound to share their former masters' land.
3. At the time most trade unions in Britain had closed-shop agreements forcing their management to recruit union members only.
4. In early January 1956, the managers of the Montgomery City Lines admitted that they were facing impending bankruptcy (Branch, 150).
5. In the city of Baton Rouge, Louisiana, the African American community staged a boycott of the city buses which lasted from 20 to 28 June 1953 ("First Civil Rights Bus Boycott").
6. Gandhi launched his first civil disobedience campaign while he was working as a lawyer in Pretoria, where he was subjected to the discriminatory laws against "coloured" people. He formed the Natal Indian Congress in 1894 and organised a series of successful protests for Indian rights. He came back to India in 1914 after having negotiated an agreement with the South African government.
ABSTRACTS
This paper is a case study of the bus boycotts of Montgomery in the US (1955-56) and Bristol in the UK (1963). Since the two movements seem to share a number of similarities, the aim of this paper is to determine if they are an instance of what the sociologists Doug McAdam and Dieter Rucht have called "cross-national diffusion" and to explain this phenomenon. The first part of this article will focus on the cultural similarities between African Americans and Afro-Caribbeans to show that they are a necessary condition for diffusion because they enabled black Bristolians to identify with the Civil Rights activists in the US. Then the second part will argue that the Bristol Bus Boycott is not a mere copy of its Montgomery source and that the specificity of the British context endowed it with new characteristics. Finally, the third part will demonstrate that the relational channel between the two movements played a crucial part in the diffusion process. Indeed, it accounts for the rational choice of taking the Montgomery Bus Boycott as a model in spite of the differences between the two contexts, in an effort to generate propaganda intended to force the bus company to lift the colour bar.
Dooley, Brian. Black and Green: The Fight for Civil Rights in Northern Ireland and Black America. London: Pluto Press, 1998.
Freshwater Diatoms as Indicators of Combined Long-Term Mining and Urban Stressors in Junction Creek (Ontario, Canada)
Sudbury (Ontario, Canada) has a long mining history that has left the region with a distinctive legacy of environmental impacts. Several actions have been undertaken since the 1970s to rehabilitate this deteriorated environment, in both terrestrial and aquatic ecosystems. Despite a marked increase in environmental health, we show that the Junction Creek system remains under multiple stressors from present and past mining operations, and from urban-related pressures such as municipal wastewater treatment plants, golf courses and stormwater runoff. Water samples have elevated metal concentrations, with values reaching up to 1 mg·L⁻¹ Ni, 40 μg·L⁻¹ Zn, and 0.5 μg·L⁻¹ Cd. The responses of diatoms to stressors were observed at the assemblage level (metal-tolerant species, nutrient-loving species), and at the individual level through the presence of teratologies (abnormal diatom frustules). The cumulative criterion unit (CCU) approach was used as a proxy for metal toxicity to aquatic life and suggested elevated potential for toxicity at certain sites. Diatom teratologies were significantly less frequent at sites with CCU values <1, suggesting "background" metal concentrations, as compared to sites with higher CCU values. The highest percentages of teratologies were observed at sites presenting multiple types of environmental pressures.
Introduction
The region of Sudbury (400 km north of Toronto, Ontario, Canada) and its surroundings is well-known for its legacy of intense mining that resulted in vast ecological damage due to acidification and metal contamination. Among the seriously impacted aquatic ecosystems in close vicinity to the Sudbury mining activities is the Junction Creek system. This river and its tributaries were once the recipients of several untreated industrial and municipal effluents, as well as a sink for atmospheric deposition. The health of Junction Creek was impacted by the contamination and degradation in its watershed and showed highly impaired biological integrity [1]. Even today, despite pollution control and rehabilitation actions having been undertaken, aquatic ecosystems in the region show only slow recovery [2][3][4][5][6]. Mining activities are still present in the region, although under significantly more restrictive pollution control and regulation, and intensification of urban development represents a supplementary environmental threat. The Junction Creek system has been well studied in the past to assess its ecological degradation in response to mining activities, and its recovery following improved management of atmospheric deposition and wastewaters. However, to our knowledge, most studies focused on water chemistry, invertebrates and fish, leaving a gap in information on biofilms. Composed of algae, fungi, bacteria and protozoans embedded in a polysaccharide matrix, biofilms are a complex aggregation of microorganisms and constitute the basis of most lotic ecosystem food webs. Biofilm integrity is, therefore, essential in keeping a healthy biological status at the ecosystem scale, as it is a key entry point for contaminants into the trophic chain. For example, biofilms accumulate metals that can then reach higher organisms through the diet [7], causing multiple deleterious effects on reproduction, behavior, fatty acid composition, survival, etc. (e.g., [8,9]). Intracellular metal concentrations in biofilms are proportional to free metal concentrations in the water, offering an interesting proxy to estimate bioavailable metals in the water column [10,11]. Diatoms (unicellular algae), often the dominant constituent of stream biofilms, are sensitive to changes in water chemistry and respond quickly to environmental fluctuations by changes in the structure of their assemblages (e.g., an increase in pollution-tolerant species) [12]. Due to their sensitivity to fluctuations in water quality, their ubiquity, ease of sampling, and low analytical costs, this algal group is widely used as an indicator of biological integrity, and numerous diatom-based indices have been developed for routine assessment of overall ecosystem health (e.g., [12][13][14]). Diatoms have also been used to specifically reflect metal contamination, and metal-tolerant species are promising indicators of contamination (see Morin et al. [15] and references therein). Moreover, deformities in diatom frustules (silica shells) are used as a biomarker in response to environmental perturbations such as contamination by metals and organic compounds (e.g., [16][17][18]).
The purpose of this study was to combine chemical and biological monitoring for assessing the health and ecological integrity of aquatic ecosystems in the Sudbury region, including Junction Creek and its tributaries, with a focus on metal contamination. More specifically, the objectives of the study were (i) to evaluate overall stream biological integrity based on diatom assemblages, (ii) to assess changes in diatom assemblage composition with increasing metal contamination, and (iii) to investigate the presence of diatom deformities (teratologies) in response to metal contamination. The selected sites were also subjected to other environmental pressures, such as nutrient loads, that may act as additional stressors affecting the response of diatom assemblages, thus offering interesting conditions for multi-stress assessment. This particular study area is therefore an interesting example where environmental pressures such as urban activities may exacerbate stresses from past and present mining activities and thus affect system recovery. This has been previously observed where a greater number of cumulative environmental stressors resulted in more significant impacts on diatom assemblages [19], although some antagonistically acting stressors have also been reported (e.g., metals versus nutrients [20,21]). The present study provides groundwork for assessing stream biological integrity based on diatom descriptors, and brings valuable information to be used in further monitoring of the Junction Creek system's recovery and health. Mining activities in Canada are expected to increase, especially in relatively pristine northern regions (e.g., the Quebec Plan Nord, the Ontario Ring of Fire, and the Northwest Territories Mining Initiative). Despite the fact that mining companies must comply with stricter environmental regulations under the Canadian Mining Act (operating since 1995) to ensure site rehabilitation after mine closure, ecosystems in proximity to mining operations are still at risk of physical, chemical and biological alteration. Monitoring past and present effects of mining on nearby ecosystems and assessing losses in ecological integrity and services offer strong support to further reduce emissions from industrial activities and to stimulate research on best management practices.
A Brief History of Mining Around Sudbury and the Resulting Ecological Damages
Sudbury has a long mining history, with its first smelter having been built at Copper Cliff in 1888. This region has one of the most productive nickel and copper mining operations in the world, with other metals such as zinc, cobalt, precious metals and platinum-group elements also currently mined and processed in the area. While mining companies are nowadays relatively more eco-aware, environmental preoccupations were not on the agenda before the 1980s. Open-air roasting (an ore-processing step) released sulfur dioxide. Atmospheric emissions were estimated at over 100 million tons of SO2 and thousands of tons of metal particles [22,23]. Along with forest fires and clear-cut logging (large amounts of wood were necessary for roasting), this industrial process led to the destruction of nearly 20,000 ha of land and to about 80,000 ha of semi-barren landscape [23]. Outdoor roasting was common until the end of the 1920s, when it was banned by the Ontario Government, following which three smelting plants were built (Copper Cliff, Coniston and Falconbridge). Smelter emissions in the Sudbury area were one of the world's largest point sources of SO2 emissions during the 1960s, accompanied by thousands of tons of emitted metal particles [24]. Metal contamination has been documented since the 1960s in the Junction Creek area and its surroundings [1].
Technological development and legislative control led to a 90% reduction in SO2 and particulate matter emissions between 1967 and the 1990s [23,25]. A stack rising 380 m above the Canadian Shield floor was built in 1972 (Inco Superstack), spreading smelting fumes over a much larger area. Several rehabilitation actions were taken, such as liming and grassing of the barren areas, and replanting millions of trees. Life was also slowly reintroduced to the surrounding lakes and streams, as algae, zooplankton, zoobenthos and fish showed signs of recovery [26]. Since the 1970s, the health and integrity of the affected area have markedly improved, and the region is now on a path to recovery. Colossal efforts were undertaken to rehabilitate and revive the area, with particular attention given to Junction Creek (e.g., abatement of untreated mining and municipal effluents, shoreline stabilization, tree-plantings), and have drastically improved the overall health of this region. However, anthropogenic inputs such as mining effluents, treated municipal wastewater, urban runoff, and airborne particles still pose a threat to the integrity of Junction Creek and nearby waterbodies. In addition, this system suffers from over 100 years of mining-related contamination now accumulated in sediments, as observed in the lakes along its course. For example, Kelly Lake (2.4 km²) is a water body well-known for its contamination by copper, nickel, palladium, iridium, and platinum [27]. In addition to being metal-contaminated, Kelly Lake sediments are loaded with phosphorus, as Junction Creek used to be a point source of raw sewage effluents [27]. A large creosote plant, in operation from 1921 to 1960, also contributed to the contamination of Kelly Lake sediments by polycyclic aromatic hydrocarbons (PAH), as waste materials sometimes leaked into Junction Creek [27].
About 7000 lakes were acid-damaged to the point of biological impairment by mining activities in the Sudbury area [28], and although many now show signs of recovery from acidification [24,29], metal contamination and other persistent ecological damages still impair their integrity. Biological recovery has been observed in fish, zooplankton, phytoplankton and zoobenthos, but remained at an early stage in many lakes lying in close proximity to Sudbury in studies conducted in the late 1990s and early 2000s (see review in Keller et al. [24], and references therein). On the other hand, analysis of long-term monitoring data (1988-2002) from 17 acidified lakes located about 200 km south-east of Sudbury suggests that benthic macroinvertebrate communities have recovered from acidification due to long-range transport of air pollutants [30]. Despite rehabilitation actions and improved physico-chemical properties, Junction Creek shows similar responses to what was observed in surrounding lakes where signs of biological perturbations are still present. For example, a study on macroinvertebrate assemblages from 2000 to 2008 suggests slow recovery in Junction Creek (Frood Branch) after diversion of acid mine drainage in 2000, when many large sensitive invertebrates were still lacking [2]. Although metal contamination has drastically been reduced in the region, Weber et al. (2008) also showed biological impacts with increasing metal concentrations (Cd, Cu, Rb, Se, and Sr) in fathead minnow and creek chub along a downstream gradient in Junction Creek.
Study Area
The study was conducted in streams and creeks of the Greater Sudbury area and its surroundings, characterized by Canadian Shield bedrock geology. This boreal region has a relatively flat topography, and a humid continental climate with long cold/snowy winters (six-months of snow cover) and warm/hot summers. At the time of sampling (September 2016), air temperature was warm (~20 °C) and water levels in the watershed were low, as recommended for diatom sampling [31].
A total of 19 sites were selected for this study, with nine sites positioned along an upstream/downstream gradient in Junction Creek (sites JC1-JC9; Figure 1). The Junction Creek system, which is 54 km in length, is a tributary of the Vermilion River, itself discharging into the Spanish River (a tributary of Lake Huron). This watercourse flows through the City of Greater Sudbury, has five main tributaries (Nolin Creek, Copper Cliff Creek, Frood Branch Creek, Maley Branch Creek and Garson Branch Creek), and encompasses several lakes. In addition to potential contamination from mining effluents and atmospheric deposition, Junction Creek and its tributaries also suffer from other anthropogenic activities such as discharge from the Sudbury municipal wastewater treatment facilities (entering Junction Creek 200 m below the Copper Cliff Creek confluence), urban runoff, and golf courses. JC1 is located in the upper portion of Junction Creek, in the Garson community (now part of the Greater Sudbury area), and receives water from Garson Branch Creek carrying treated effluents from Garson mine. Junction Creek then flows through Greater Sudbury (JC2 to JC6) and receives waters from tributaries along the way. A sampling site was positioned on Frood Branch Creek (FBC), which reaches Junction Creek between JC4 and JC5. Frood Branch Creek has a history of important acid mine drainage from the Frood/Stobie (the oldest mine complex in Sudbury) mine tailings, but diversion construction in 2000 and reclamation action taken at the site greatly improved water quality [32]. While mining activities ceased at Frood mine in 2012, Stobie was still operating at the time of the present study (2016). One site was positioned on each branch of Nolin Creek (NC1 and NC2), and a third site was positioned where the branches merge (NC3) and discharge into Junction Creek between JC5 and JC6. The NC1 branch collects treated mining effluent from Nolin mine, while NC2 does not receive direct point-source effluents but may still be impacted by diffuse contamination. JC7 was sampled before Junction Creek enters Kelly Lake and is impacted by inflowing waters from Copper Cliff Creek (CCC), which drains tailings and receives treated water effluents from the Copper Cliff mine and smelter as well as effluents from a sewage treatment plant. A sampling site was positioned downstream of the Kelly Lake outflow (JC8). The last site on the Junction Creek gradient (JC9) was positioned just after Mud Lake.
A reference site was selected on Maley Branch Creek (MBC), which extends well north and reaches Junction Creek before JC3. This site does not experience direct mine effluents, although it is still at risk of atmospheric deposition from mining activities and nutrient input from urban development and a nearby golf course. A reference site was also sampled on Veuve River (VR), near Markstay (about 40 km from Sudbury). It should be noted that here, the term "reference" suggests that the sites are minimally affected by mining activities, but they may still be experiencing certain anthropogenic pressures. Three other sites were selected on Coniston Creek (CC1-CC3), a tributary of Whatapitei River. Although the Coniston smelter closed in 1972, the slag pile has been left largely un-remediated and may contribute to the contamination of nearby aquatic ecosystems [33]. In addition, one of the sources of the creek is a wetland near a mining property in Falconbridge (where large slag piles are still present [33]). These sites may also be influenced by past and present atmospheric depositions from the Sudbury area (about 10 km away). These last three sites were therefore selected as least-impacted sites, i.e., outside of intense Sudbury activities but still at risk of mining and urban contamination to a certain extent.
Water and Biofilm Collection
Sampling was carried out over three consecutive dry days, avoiding rain events prior to sampling, so that biofilms and water samples were collected under low-flow conditions. Samples for water chemistry analyses were collected in triplicate, and inadvertent sample contamination due to handling was verified by on-site preparation of field blanks using ultra-pure water. Material used for samples destined for the analysis of cations and dissolved organic carbon (DOC) was previously soaked for 24 h in 10% (v/v) nitric acid and rinsed eight times with ultrapure water. Material used for samples for anion concentration analyses was previously rinsed eight times with ultra-pure water. Water for anion, cation, and DOC analyses was collected in 20 mL polypropylene Nalgene bottles using syringes and polysulfonate filters (0.45 μm; VWR International). Samples collected for cation analyses were acidified to 2.6% nitric acid (v/v) (trace metal grade; Fisher). Water collected for total phosphorus (TP) was acidified to 0.2% sulfuric acid (v/v). Biofilms were collected from the top surface of 5-10 rocks (composite samples) using a new toothbrush at each site. Water and biofilm samples were stored in the dark at 4 °C until they were processed. Conductivity, temperature and pH were measured on-site with portable instruments (Sevengo SG3, Mettler Toledo; Denver Instrument UP-10).
Water Chemistry and Diatom Assemblage Analyses
Anions (F⁻, Cl⁻, SO₄²⁻, NO₃⁻) were analysed by ion chromatography (Dionex AutoIon; System DX300), TP was analyzed by persulfate digestion and manual colorimetry (SM 4500-PB), and DOC was analyzed using a total organic carbon analyzer (TOC-500A; Shimadzu). Cations (Na⁺, Mg²⁺, Al³⁺, K⁺, Ca²⁺, Mn²⁺, Fe³⁺, Ni²⁺, Cu²⁺, Cd²⁺, Pb²⁺, Zn²⁺) were analyzed by inductively coupled plasma-atomic emission spectrometry (ICP-AES; Varian Vista AX CCD). Copper, cadmium, zinc and lead were also analyzed by inductively coupled plasma-mass spectrometry (ICP-MS; Thermo instrument model X7). Values lower than field blank values were excluded from subsequent analyses. Detection limits are presented in Table 1. Lyophilized biofilms were digested to remove organic matter and to clean diatom frustules of cell content. Biofilm subsamples were placed in 800 μL of 100% (v/v) nitric acid for 48 h, and 200 μL of hydrogen peroxide 30% (v/v) were added for another 48 h. Following complete digestion of organic material, samples were rinsed several times to remove nitric acid. Microscope slides were prepared for cleaned diatom observation using Naphrax® as the mounting medium (refractive index: 1.74; Brunel Microscopes Ltd., Wiltshire, UK). Diatom assemblages were observed under a Reichert-Jung Polyvar microscope equipped with differential interference contrast (magnification 1250×). A minimum of 400 diatom valves were identified on each slide and diatom assemblages were expressed as relative abundances of the species assemblage. Taxonomic identification mainly followed Lavoie et al. [34]. Diatom frustule deformations were noted and classified as (i) irregular valve shape, (ii) irregular raphe, (iii) irregular striae, (iv) mixed [35].
The Eastern Canadian Diatom Index (IDEC; Indice Diatomées de l'Est du Canada [12,36]) was used to evaluate the general biological integrity of the sampling sites. The IDEC was specifically developed to estimate water quality in Quebec and Ontario streams in agricultural and urban areas, and mainly informs on trophic status (nutrients), salinity, pH and organic matter loads [12,36]. An IDEC value was calculated for each diatom assemblage using the IDEC-neutral, which is the recommended sub-index based on the characteristics of the studied watersheds (geology, surficial deposits [12,36,37]). IDEC scores range between 0 and 100, with low values indicating poor biological integrity. The IDEC provides an overall water quality evaluation and was not developed for metal contamination assessment. The abundance of abnormal diatom valves (% teratologies) was used as a complementary proxy of diatom-specific response to metals, as well as the presence of diatom species known to be tolerant of metal contamination. A canonical correspondence analysis (CCA) was performed using Canoco 4.5 [38] to explore the diatom assemblage-water chemistry relationships and to visualize site distribution. Only taxa with an abundance of at least 1% in at least one sample were included in the CCA. Diatom data were square-root transformed and rare taxa were downweighted prior to running the CCA. Indicator species analysis, an approach used to determine indicator species characterizing groups of sites (based on a species' relative abundance and its relative frequency of occurrence in each group), was conducted with the method of Dufrêne and Legendre [39] using PC-ORD version 6 [40].
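To illustrate the indicator value behind this analysis, the sketch below implements the Dufrêne-Legendre computation, which multiplies a species' specificity to a group (its share of mean abundance) by its fidelity (its frequency of occurrence in the group). This is a minimal sketch, not the PC-ORD implementation: the function name and data layout are ours, and the permutation test used to assess significance is omitted.

```python
import numpy as np

def indval(abund, groups):
    """Dufrene-Legendre indicator values (0-100) per species x group.

    abund: (n_sites, n_species) array of relative abundances.
    groups: length n_sites sequence of group labels (e.g., 1, 2, 3).
    """
    abund = np.asarray(abund, dtype=float)
    groups = np.asarray(groups)
    labels = np.unique(groups)
    # Mean abundance of each species within each group
    mean_ab = np.array([abund[groups == g].mean(axis=0) for g in labels])
    iv = np.zeros_like(mean_ab)
    for k, g in enumerate(labels):
        # Specificity A: share of the species' mean abundance in group g
        with np.errstate(invalid="ignore", divide="ignore"):
            A = mean_ab[k] / mean_ab.sum(axis=0)
        # Fidelity B: fraction of group-g sites where the species occurs
        B = (abund[groups == g] > 0).mean(axis=0)
        iv[k] = 100 * np.nan_to_num(A) * B
    return labels, iv  # a species' IndVal is its maximum across groups
```

A species scoring 100 for a group would occur at every site of that group and nowhere else; in practice, significance is judged by permuting the group labels, as done in PC-ORD.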
Toxicity Criteria and CCU Calculation
The cumulative criterion unit (CCU) [41] was calculated at each site as the sum of the ratios between metal concentrations in a sample and their toxicity criterion values (CCU = Σᵢ(mᵢ/cᵢ), where mᵢ is the total recoverable concentration of the ith metal and cᵢ is its criterion value). The metals included in the CCU calculation were Al, Cu, Cd, Ni, Pb, and Zn. Toxicity criteria were based on the Canadian water quality guidelines for the protection of aquatic life established by the Canadian Council of Ministers of the Environment [42]. The criteria were adjusted for water hardness to account for the competitive effect of major cations like magnesium and calcium for binding sites on cell membranes, which reduces metal toxicity (e.g., [10,11]); the adjusted values are presented in Table 1. The criteria used in the present study differ from the US EPA guidelines [43]. However, the values are generally of the same order of magnitude and therefore comparable. Only the criterion for aluminum was based on the recent US EPA guidelines, because these account for pH, DOC, and hardness [43], rather than pH only. Four categories of CCU (background, B; low, L; moderate, M; high, H) were used, following the thresholds proposed for biofilms [44] and later modified for diatoms [15].
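The CCU itself is a simple sum of concentration-to-criterion ratios; a minimal sketch follows. The concentrations and criterion values shown are illustrative placeholders only, since the actual CCME criteria must be adjusted for the hardness measured at each site.

```python
def ccu(concentrations, criteria):
    """Cumulative criterion unit: sum of measured/criterion ratios.

    concentrations: metal -> total recoverable concentration (ug/L).
    criteria: metal -> hardness-adjusted criterion value (ug/L).
    """
    return sum(concentrations[m] / criteria[m] for m in concentrations)

# Hypothetical site (ug/L) with illustrative, site-specific criteria:
site = {"Ni": 788.0, "Cu": 5.7, "Cd": 0.05, "Zn": 12.0}
crit = {"Ni": 250.0, "Cu": 3.5, "Cd": 0.37, "Zn": 30.0}
print(round(ccu(site, crit), 1))  # 5.3 -> above "background" (CCU > 1)
```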
General Water Chemistry
Water chemistry data showed strong variability between sites for several parameters (Table 1). This is attributed mostly to anthropogenic activities, as the study area does not vary markedly in terms of geological characteristics or vegetation. Hardness values varied from 37.6 ± 0.2 mg CaCO3/L at VR to 1620 ± 8 mg CaCO3/L at CCC, where elevated values may in part reflect lime addition. For example, the sharp increase in hardness between JC6 and JC7 (256 ± 1 to 1018 ± 8 mg CaCO3/L) clearly illustrates the effect of lime addition coming from the Copper Cliff Creek input, and the JC1 hardness value of 1010 ± 13 mg CaCO3/L reflects mining activities from the Garson mine. Observed values for natural hardness in the region are around 50 mg/L or below [45]. A comparable value was obtained at our site VR, considered as a reference (relative to mining pressure). The hardness value of 177 mg/L obtained at our other reference site (MBC) is comparable to the value of 122 mg/L observed by Davidson [46], but other studies reported lower values for this creek (23-59 mg CaCO3/L) [32,47]. The sites from Coniston Creek (CC1-CC3) have rather elevated hardness considering the fact that these sites do not receive direct lime-containing effluents from operating mines. However, large piles of tailings left on decommissioned sites in Falconbridge and Coniston may be leaching some contaminants, including Mg²⁺ and Ca²⁺, into Coniston Creek and other nearby aquatic ecosystems.
Except for CCC and NC1 (with pH of 5.7 and 6.2, respectively), all sites had pH values above 6.5, reaching up to 8 at JC6. TP concentrations were relatively elevated along the Junction Creek gradient starting at JC2, with a particularly high value at JC9 (137 ± 1 μg P/L). High levels of phosphorus in the lower Junction Creek sites suggest nutrient inputs from the Sudbury wastewater treatment plant effluents discharging a few kilometers upstream of Kelly Lake. In addition, untreated sewage is still occasionally bypassed during heavy rainfall events [48]. Site CC3 on Coniston Creek also showed relatively elevated phosphorus, probably due to its location downstream of the Coniston municipal sewage treatment plant and a golf course. Sites MBC and VR, although selected as reference relative to metal contamination, showed TP concentrations suggesting some nutrient inputs, which is not surprising considering that they are both influenced, to different extents, by urban activities. Specifically, the MBC sampling site is located in a dense residential development with a golf course immediately upstream. VR is in the small municipality of Markstay and there seems to be very minimal human activity in the upstream portion of the watershed except for two farmlands and a golf course. However, Markstay is on the list of water and wastewater projects that were approved under the Canada-Ontario Clean Water and Wastewater Fund agreement [49] for improving wastewater infrastructures (anticipated starting date set for some time in 2017), which suggests that sewage water may not have been managed properly at the time of sampling. Aside from the two sites considered as references and JC4, NO3 concentrations were elevated at all sites, especially along Junction Creek (at JC7 to JC9, as well as at JC1). These elevated values may result from actual and past blasting activities in the mining areas (ammonium nitrate-based explosives) and/or may come from municipal wastewater effluents as previously mentioned. Sulfate concentrations also fluctuated markedly between sites, with a low value of 5.0 ± 0 mg/L at VR and a peak value of 1923 ± 7 mg/L at CCC. The highest SO4 values along the Junction Creek gradient were observed at JC1 and JC7, located downstream of tributaries receiving mining effluents (Garson Branch Creek and Copper Cliff Creek).
Metal Concentrations and CCU
The sites on Nolin Creek showed the highest concentrations for all metals except Al. CCME water quality criteria were exceeded for Ni and Cu (Table 1, in bold). For example, the Cu concentration at NC2 (38 ± 3 μg/L) was 11× higher than the CCME criterion. A press release in a local newspaper in the summer of 2015 reported the first sightings of fish in Nolin Creek since at least the early 1990s [50]. This is a sign that although metals are still present, the system is recovering. The nickel concentration (788 ± 3 μg/L) at FBC was more than 3× the criterion, while Cu did not exceed the CCME guideline at this site. Cu and Ni values in Frood Branch Creek were respectively 1170 μg/L and 4220 μg/L in 1999 [1], while values had drastically dropped by 2004 (respectively 54.3 and 224.8 μg/L) [32], following diversion work to stop mine drainage from entering the watercourse. Interestingly, our values from 2016 indicate that Ni increased compared to the reported value from 2004, while Cu markedly decreased (5.72 ± 0.17 μg/L). Although the Cu concentration at the reference site VR was not elevated, the water quality criterion was exceeded by a factor of almost 2×, likely due to the low water hardness at this site. The cadmium concentration exceeded the water quality criterion only at site NC2.
CCU values ranged between <1 and 20 (Table 1). The highest CCU values were obtained for the Nolin Creek sites (NC1-NC2-NC3) and Frood Branch Creek (FBC). CCUs along Junction Creek were relatively stable and low, with values generally <1, except at JC6 and JC7 where they were slightly >1. Interestingly, the VR reference site showed a CCU value of 2.5, which is mostly attributed to the low hardness value influencing the criterion for Cu, as previously mentioned. As a general trend, the sites that were selected as reference or least-impacted relative to metal contamination (MBC, VR, CC1, CC2, CC3, and the upper portion of Junction Creek) represented "background" concentrations, except for VR and CC3. Copper Cliff Creek also obtained a low CCU score, which is surprising considering the mining activities in close proximity. Nickel and copper generally exceeded the CCME water quality criteria and consequently contributed the most to the CCU values.
Biotypology, IDEC Scores and Metal-Tolerant Taxa
The relative abundances of the dominant diatom species (more than 5% in at least one sample) observed in each of the 19 assemblages are presented as Supplementary Material. While some diatom taxa such as Achnanthidium minutissimum and Nitzschia palea aff. debilis were abundant at many sites, other taxa were restricted to only certain sites. Diatom-based monitoring using the IDEC revealed that several sites were severely impaired, with very low index values and poor biological status (Table 1). A CCA was performed including diatom and chemistry data, with IDEC scores, % teratologies and CCU as passive variables. Site distribution on the ordination suggests three main groups characterized by particular diatom assemblages and reflecting distinct environmental conditions. The taxa dominating each group (labeled groups 1, 2 and 3) are presented on the CCA (Figure 2). In addition, significant indicator species for each group are presented in Table 2. The environmental variables included in the CCA (excluding the passive variables) explained 39% of the variance in diatom species distribution (first two axes). Group 1, on the left-hand panel, was characterized by sites receiving treated mining effluents, with elevated metal concentrations and higher CCU values. On the lower panel, sites identified as Group 2 are reference or least-disturbed sites, and correspond to background conditions of the area (in terms of metals). Finally, Group 3 (right-hand panel) discriminates the sites with the highest nutrient loads.

Table 2. Significant indicator species for each of the three groups based on the method of Dufrêne and Legendre [39]. Indicator values range from 0 to 100 (excellent indicator). SD = standard deviation.

Group 1, including NC1, NC2, NC3, CCC and FBC, was dominated by the A. minutissimum complex, Brachysira vitrea, Nitzschia microcephala, Nitzschia palea, Encyonema silesiacum, and Navicula veneta. Group 1 sites were characterized by elevated metal concentrations, and their above-mentioned dominant diatom taxa are often reported at metal-contaminated sites [11,15,[51][52][53][54][55][56]. While these assemblages suggest metal contamination, they are also positioned at the lower end of the nutrient enrichment gradient on the CCA (and clustered at the higher end of the IDEC gradient), which suggests excellent water quality in terms of nutrient and ion enrichment. FBC, NC1, NC2 and NC3 were categorized as reference status (class A). Indeed, while nitrates are relatively elevated, phosphorus at those sites is low, which partly explains the good biological integrity (high IDEC scores despite metal contamination) generally observed for the sites in group 1. One should be careful with the interpretation in this situation because the IDEC scores most likely reflect the strong dominance of A. minutissimum and B. vitrea, together making up 60-90% of the assemblages at these sites. While these species are indeed good indicators of lower nutrient concentrations [57][58][59], they are also known to be tolerant of metal contamination (see above references). However, other dominant taxa in this group can tolerate higher nutrient levels (e.g., Nitzschia palea, Navicula veneta, Encyonema silesiacum), which explains the lower IDEC score at CCC. Group 2 diatom assemblages had many species in common, and the IDEC scores obtained mainly reflect the marked differences in the relative abundance of the A. minutissimum complex, which fluctuated between <10% and >60% between sites.
This taxon was also very abundant in diatom assemblages from group 1. It must however be noted that group 2 was dominated by a long and narrow form of A. minutissimum, while group 1 was dominated by a small and round form of A. minutissimum. These two forms of A. minutissimum may be different varieties of the species within the A. minutissimum complex, or morphological variants of the species in response to environmental variables (e.g., [60]). The IDEC scores obtained for the group 2 sites varied from 6 (class D) to 83 (class A), but assemblages generally indicated poor biological integrity (classes C and D). Indeed, except for the A. minutissimum and Fragilaria capucina complexes, most species characterizing group 2 are indicators of low biological status based on the database used to develop the IDEC. The lowest index values (biological integrity class D) were observed at sites JC2, JC3, and JC4. Sites JC5, JC6, JC7, CC3, MBC and VR fell into class "C", also indicating degraded environments. Sites CC1 and CC2 were categorized as slightly impaired, with an IDEC class B, while only JC1 in this group suggested reference status (class A). The IDEC informs on overall biological health, but mainly reflects eutrophication. It is, therefore, not surprising to observe low IDEC values at sites located downstream of small municipalities or in the Greater Sudbury area where nutrient levels are higher (IDEC scores correlated with TP; r = −0.6, p ≤ 0.05). Most species from group 2 are indicators of baseline or low metal concentrations (low CCU), as suggested by Morin et al. [15], although certain taxa from the A. minutissimum and F. capucina complexes were frequently observed in metal-contaminated conditions. However, the presence of metal-tolerant taxa does not necessarily suggest contamination, especially in the case of the two above-mentioned taxa, which are ubiquitous. Amphora veneta and Eolimna subminuscula dominated the assemblages at sites JC8 and JC9 (group 3) and were rare or absent at other sites, which explains why these sites clustered apart from the other sites on the CCA. A. veneta has been reported as an indicator of moderate to low biological status [57,58], which is in agreement with the higher phosphorus concentrations observed and poor ecological integrity (classes C and D) based on IDEC scores. E. subminuscula is also reported as a nutrient-tolerant species [57,61,62]. The other taxa characterizing group 3, such as small species identified here as belonging to the Eolimna minima complex and Nitzschia palea aff. debilis form 2, are indicators of nutrient-rich environments as well [36,57,59]. Gomphonema clavatum (sensu Krammer and Lange-Bertalot [63]) was also abundant at site JC9 (8%), but this species is usually not typical of high nutrient concentrations [63]. The low IDEC values observed for group 3 sites reflect the presence of nutrient-tolerant taxa. Interestingly, the dominant taxa from group 3 have also been reported in water bodies affected by mining activities ([10,15,54], and references therein), although metal concentrations at sites JC8 and JC9 were not particularly elevated, being designated as CCU class L.
Diatom Teratologies as a Response to Stress
Very low proportions of deformed valves were observed at the reference or least-disturbed sites (CC1, CC2, CC3, MBC, VR), with values ranging from 0 to 0.7% (Table 1). As suggested by Morin et al. [64] and Arini et al. [65], deformity frequencies between 0.5 and 1% are considered to occur naturally. With abnormal valve frequencies of 1-1.2%, it is difficult to confirm a specific response to metal contamination at sites JC1, JC2, JC4, JC5, JC6, JC7, NC1, and NC2, as these values are close to the estimated natural background. JC3, FBC and CCC showed low frequencies of teratologies, with values around 1.5%. These values are more likely to reflect metal contamination, although this is risky to confirm without replicated analyses accounting for inter-sample variability. Sites JC8, JC9 and NC3 revealed higher proportions of deformed diatom valves, with values reaching up to 8.7% at JC9. Deformities in such high numbers are very likely due to the presence of metals (or to unmeasured organic compounds or mixtures of contaminants) and, despite the absence of replication, are expected to reflect a "true" response of the diatom assemblages. It is difficult to explain the high deformity frequency observed at JC8 and JC9, as metal contamination does not seem severe (based on a single water sample collected). However, it is possible that multiple stressors exerted pressure on the assemblage, leading to an increased sensitivity of the diatom cells. Differentially acting stressors may have cumulative (synergistic or additive) deleterious effects on the individuals: one stressor may target certain cellular functions (e.g., detoxification), while the other impairs another metabolic pathway involved in frustule formation, with the effect of reducing the overall capacity of the cell to cope with the combined stressors and produce normal cells [15]. For example, the former creosote plant located along Junction Creek upstream of Kelly Lake contaminated the system with PAH. There are no data available on PAH concentrations for the present study, but Jaagumagi and Bédard [1] reported up to 4.54 μg/g in sediments in 1999 just above Kelly Lake. It is possible that diatom deformities at these particular sites are a response to organic contamination, as observed in other studies [66][67][68], or that metals and organic compounds have additive or synergistic effects leading to a stronger stress on diatoms. Sites JC8 and JC9 were also the sites showing the highest phosphorus concentrations, suggesting that eutrophication may act as an additional environmental stress, as observed in a study combining metal and nutrient load effects on diatoms [19]. Another possible explanation for the high number of teratologies is the proneness of the present species to deformation, as discussed in Lavoie et al. [17].
No correlation was observed between % teratologies and metal concentrations or CCU values, but there was a significant difference in deformation frequency between sites categorized as CCU class B and the sites categorized as CCU classes L, M and H together (t = 1.82; n = 19; p = 0.048; Figure 3). This situation has been encountered in other studies (see discussion in Lavoie et al. [17]), where deformities were observed in higher proportions at contaminated sites compared to reference sites while a relationship between % teratologies and a gradient in metal contamination was lacking. The difficulty in directly relating % teratologies and the abundance of metal-tolerant taxa with metal concentrations is due to multiple factors such as variability in water chemistry, metal bioavailability, and species proneness to deformities [17]. Although correlations between % deformities and metal concentrations are sometimes unclear, the presence of teratologies is a red flag for environmental stress, suggesting that additional water quality measurements may be needed to highlight contamination from other sources and types than those initially analyzed. From a biomonitoring perspective, including the % deformities in a multi-metric index could broaden the range of anthropogenic impacts detected by current diatom indices and allow identification of the main pressures under multi-stress scenarios [69]. As a general trend, abnormal valve shape was the most frequent type of teratology encountered, although striae/fibulae aberrations were common at JC4, NC1 and NC2, and abnormal sternum/raphe were often observed at JC6 and JC7. Lavoie et al. [17] discuss the possible interest of considering the type of deformation in monitoring, where the nature and timing of environmental stressors may have an influence on the response. However, in this study, no trend or relationship was observed beyond the above-mentioned observation.
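The group comparison reported above amounts to a one-tailed two-sample t-test of % teratologies between class B sites and all other sites pooled; a minimal sketch follows, using placeholder values rather than the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical % teratologies grouped by CCU class (not the study data)
class_b = np.array([0.0, 0.3, 0.5, 0.7, 1.0])         # "background" sites
class_lmh = np.array([1.1, 1.2, 1.5, 1.6, 3.0, 8.7])  # classes L, M, H pooled

t_stat, p_two = stats.ttest_ind(class_lmh, class_b, equal_var=True)
p_one = p_two / 2 if t_stat > 0 else 1 - p_two / 2  # one-tailed p-value
print(f"t = {t_stat:.2f}, one-tailed p = {p_one:.3f}")
```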
Conclusions
This study on water chemistry and diatom assemblages revealed that Junction Creek and its tributaries are under multiple stressors, both from present and past mining operations in the region and from urban development and related activities. Diatom assemblages reflected the contrasting environmental conditions in the area and the different types of pressures (metals and/or nutrients and/or salinity and/or PAH). As a general summary of water quality in the study area, it seems that the three Nolin Creek sites are the most contaminated by metals and are the main contributors to the metal loads in the lower portion of Junction Creek. As expected, these sites are dominated by metal-tolerant diatoms. Sites JC7-JC8-JC9 along the Junction Creek gradient seem to be the most nutrient-enriched based on phosphorus concentrations and on the presence of nutrient-loving taxa, reflecting past and present urban activities. The level of abnormal diatoms in the samples from sites JC8 and JC9 undoubtedly reflects a response to one or multiple stressors, and suggests that the lower portion of the watercourse needs to be further investigated.
Considerable efforts have been deployed to rehabilitate the Junction Creek watershed and to decrease SO2 emissions and airborne particles, leading to marked improvements in the chemical, physical and biological integrity of the system and surrounding water bodies. However, despite an obvious increase in water quality, the Junction Creek system is still relatively impaired. The extent of recovery differs among organisms, and the confounding effects of multiple anthropogenic activities render difficult the task of "measuring" the success of rehabilitation actions. As Junction Creek and the nearby aquatic ecosystems slowly recover from their past industry-related pressures, water managers must now deal with rapid urban and residential development and its associated problems. Climate change will also be an important variable to consider in future monitoring of aquatic ecosystems in Sudbury and its surroundings. According to information from the Greater Sudbury Climate Change Consortium [70], it is estimated that Ontario will warm an average of 2 to 5 °C within the next 75 to 100 years, with more frequent and severe extreme events such as floods and droughts. In the Greater Sudbury region, climate change is projected to result in an increase of 2 °C in summer and 1 °C in winter for the 2010-2039 period. These climate-related changes will certainly interact with environmental pressures and affect recovery processes and trajectories. Diatom-based monitoring is a reliable, sensitive and cost-effective approach for assessing aquatic ecosystem health; changes in diatom assemblage structure are quickly observed as a response to changing environmental conditions. As warming-induced effects on diatom communities were previously shown to interplay with metal stress [71], long-term monitoring of the area's recovery is recommended. The present study lays the foundation for future diatom-based monitoring in the region, and will serve as a point-in-time reference for assessing further recovery (or potential degradation as a result of climate change) of the Junction Creek system.
Fixed Effects High-Dimensional Profiling Models in Low Information Context
Profiling or evaluation of health care providers, including hospitals or dialysis facilities, involves the application of hierarchical regression models to compare each provider's performance with respect to a patient outcome, such as unplanned 30-day hospital readmission. This is achieved by comparing a specific provider's estimate of the unplanned readmission rate, adjusted for patient case-mix, to a normative standard, typically defined as an "average" national readmission rate across all providers. Profiling is of national importance in the United States because the Centers for Medicare and Medicaid Services (CMS) policy for payment to providers is dependent on providers' performance, which is part of a national strategy to improve delivery and quality of patient care. Novel high-dimensional fixed effects (FE) models have been proposed for profiling dialysis facilities and are more focused on inference for the tail of the distribution of provider outcomes, which is well-suited to the objective of identifying sub-standard ("extreme") performance. However, the extent to which estimation and inference procedures for FE profiling models are effective when the outcome is sparse and/or when there are relatively few patients within a provider, referred to as the "low information" context, has not been examined. This scenario is common in practice when the patient outcome of interest is cause-specific 30-day readmission, such as 30-day readmission due to infections in patients on dialysis, which occurs in only ~8% of cases, compared to >30% for all-cause 30-day readmission. Thus, we examine the feasibility and effectiveness of profiling models under the low information context in simulation studies and propose a novel correction to FE profiling models to better handle sparse outcome data.
INTRODUCTION
Unplanned readmissions following a hospital discharge are a major source of morbidity and mortality risk for patients on dialysis. The burden of hospitalization is particularly high for patients on dialysis, where the latest U.S. national data shows that the frequency of 30-day readmissions is 31.1%, which is more than double the frequency of readmissions seen in older Medicare beneficiaries without kidney disease (United States Renal Data System/USRDS [1]).
Profiling or evaluation of health care providers, such as hospitals, dialysis facilities, and nursing homes among others, serves multiple purposes, including (1) identifying providers with performance below standard by government agencies for regulatory or payment purposes and (2) conveying information and feedback to stakeholders (e.g., the public, patients, providers) regarding the quality of care among providers. The main focus of our work is objective (1), specifically with respect to the goal of identifying providers whose performances (e.g., 30-day readmission) are exceptionally worse (W) or not different (ND) relative to a reference, such as a national "average" standard. Also related to the inferential process of identifying/flagging providers with 30-day readmission rates W and ND from the national rate, it is of interest to obtain accurate estimates of provider-specific effects and associated quality metrics.
When the outcome, such as 30-day readmission, is not frequent and/or when there are relatively few patients within a provider, referred to as the "low information" context [2], estimation and inference for profiling models are understandably more challenging. This is the situation when the patient outcome of interest is cause-specific 30-day readmission, such as 30-day readmission due to infections in patients on dialysis, which is only about ~8% compared to greater than 30% for all-cause 30-day readmission. Infection-related hospitalizations are serious adverse events that are oftentimes preventable. Hence, it is an important performance indicator that is carefully monitored in dialysis facilities.

Profiling models for 30-day unplanned hospital readmission are hierarchical logistic regressions of the form outcome ~ provider effects + patient case-mix effects. Thus, patient outcomes vary across providers due to variation in providers' quality of care (provider-specific effects) and variation in patient-level case-mix effects, which include demographics, comorbidities, and the type of index admission. Because of the nested data structure and the need to stabilize estimation, modeling provider effects as random effects (RE) has been used [2][3][4][5][6][7].
A justification for the use of RE models is that they provide stable provider effect estimates through shrinkage, although several inherent disadvantages have been noted. In particular, RE estimates are biased toward the overall provider average and biased in the presence of confounding between patient risk factors and provider effects [8]. Also, although the overall average error in the estimation of provider effects is smaller because mean square error is minimized over the full set of provider effects in the RE approach, fixed effects (FE) estimates have smaller error for outlier "providers whose effects are exceptionally large or small" [8], which are the providers we wish to identify. Our previous work has also shown that the benefit of stabilization comes at a severe cost of substantially biased provider effect estimation and, perhaps more importantly, a substantial reduction in the power to identify W providers [9,10]. Our work and that of others has used high-dimensional FE models to identify substandard ("extreme") performance, especially for profiling 30-day readmission for dialysis facilities where the outcome is not sparse [3,[8][9][10][11][12][13][14][15]. However, the extent to which FE models are useful in the low information context has not been studied, which is the focus of this work. Thus, we assess the relative performance of the FE model proposed by He et al. [15], including the stability of provider-specific estimates and the ability to identify extreme providers, in simulation studies. Briefly, the FE model of He et al. [15] is a high-dimensional parameter model with a unique fixed intercept for each provider and has been used in assessing the performance of dialysis facilities [3,8,15]; see also Chen et al. [14] and Estes et al. [11,12] for recent dialysis facility profiling applications. Furthermore, in this work, we also propose and examine the performance of a novel corrected FE model estimation approach geared towards estimation under the low information context, where the (uncorrected) FE model estimates of some provider-specific effects may be unreliable.
METHODS: HIGH-DIMENSIONAL FE PROFILING MODELS
We introduce the FE profiling model using the context of hospital readmission as an illustrative example. Let the binary outcome \(Y_{ij}\) equal 1 if patient index discharge \(j\) in provider \(i\) results in a readmission within 30 days, for patient discharge \(j = 1, 2, \ldots, N_i\) in provider (dialysis facility) \(i = 1, 2, \ldots, F\). The FE profiling model (He et al. [15]) is
\[ g\{\Pr(Y_{ij} = 1 \mid Z_{ij})\} = \gamma_i + \beta^T Z_{ij}, \qquad (1) \]
where \(g\) is the logit link, \(\gamma_i\) is the provider-specific effect, and \(Z_{ij}\) is the vector of patient-level risk factors (case-mix) with common effects \(\beta\). In practice, the process of risk adjustment is complex and depends, in part, on policy objectives and the specific patient population (e.g., general population, dialysis population). However, we point out that it is critical to adequately risk-adjust for patient-level factors and avoid the inclusion of variables (e.g., provider-level or patient-level variables) that are/may be related to the process of care (e.g., see [2,3,13]).
To avoid confusion, we emphasize that the model shown in (1) is not a collection of individual models (i.e., not a separate model for each provider), but rather a single model with high-dimensional parameters, and requires simultaneous estimation of thousands of provider-specific effects/parameters (\(\{\gamma_i\}_{i=1}^{F}\) and \(\beta\)). For example, for profiling dialysis facilities, the dimension of \(\gamma = (\gamma_1, \ldots, \gamma_F)^T\) is > 6,000 dialysis facilities across the U.S., and the dimension of \(\beta\) is ~40. Standard estimation (e.g., maximum likelihood) and software fail; thus, He et al. (2013) proposed a feasible estimation method based on an alternating one-step Newton-Raphson that iterates between estimation of \(\beta\) and \(\gamma_i\).
The summary performance index for each provider which incorporates patient-level risk factors (the \(Z\)'s) used in practice is the standardized readmission ratio (SRR). For FE model (1), given the provider and patient case-mix effect estimates, denoted by \(\hat{\gamma}_i\) and \(\hat{\beta}\), respectively, the estimated SRR for provider \(i\) is
\[ \widehat{\mathrm{SRR}}_i = \frac{\sum_{j=1}^{N_i} \hat{p}_{ij}}{\sum_{j=1}^{N_i} \hat{e}_{ij}}, \qquad (2) \]
where \(\hat{p}_{ij} = g^{-1}(\hat{\gamma}_i + \hat{\beta}^T Z_{ij})\) is the estimated probability of readmission for patient \(j\) in provider \(i\) and \(\hat{e}_{ij} = g^{-1}(\gamma_M + \hat{\beta}^T Z_{ij})\) is the corresponding expected probability of readmission under the national norm, with \(\gamma_M\) a normative provider effect (e.g., the median of the estimated provider effects).
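A minimal sketch of the SRR calculation given fitted effects is shown below. Taking the normative effect gamma_M as the median of the fitted provider effects is our illustrative assumption; in practice the national norm is defined by the profiling program.

```python
import numpy as np

def expit(x):
    return 1.0 / (1.0 + np.exp(-x))

def srr(gamma_hat, beta_hat, Z_by_provider, gamma_M=None):
    """SRR per provider under FE model (1), logit link assumed.

    gamma_hat: (F,) provider effect estimates.
    beta_hat: (p,) case-mix effect estimates.
    Z_by_provider: list of (N_i, p) patient risk-factor matrices.
    gamma_M: normative effect; median of gamma_hat if not supplied.
    """
    if gamma_M is None:
        gamma_M = np.median(gamma_hat)
    out = np.empty(len(gamma_hat))
    for i, Z in enumerate(Z_by_provider):
        eta = Z @ beta_hat
        p_hat = expit(gamma_hat[i] + eta)  # provider-specific predictions
        e_hat = expit(gamma_M + eta)       # expected under the norm
        out[i] = p_hat.sum() / e_hat.sum()
    return out
```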
ESTIMATION AND INFERENCE PROCEDURES
In addition to the challenge of high-dimensional parameters, compounding difficulties are encountered in the low information context where the outcome is sparse, resulting in providers with few readmissions or even no readmissions. For very small providers with few patients, there is very little information with which to assess performance. In extreme cases of providers with few or no readmissions, the FE estimation method [15] leads to unstable estimates for those providers. Thus, in the low information context, we propose a correction to the FE estimates of provider-specific effects.
FE Model Estimation
To describe our proposed corrected FE estimation for provider-specific effects, we first set the notation for the likelihood of the FE model (1) and briefly summarize the alternating Newton-Raphson algorithm proposed by He et al. [15]. For the FE model (1), the likelihood is
\[ L(\gamma, \beta) = \prod_{i=1}^{F} \prod_{j=1}^{N_i} p_{ij}^{Y_{ij}} (1 - p_{ij})^{1 - Y_{ij}}, \qquad (3) \]
where \(p_{ij} = g^{-1}(\gamma_i + \beta^T Z_{ij})\). Because direct maximization of (3) is not feasible with standard methods when \(F\) is large (e.g., \(F \geq 6{,}000\)), He et al. (2013) proposed an effective iterative algorithm that alternates between estimation of \(\gamma_i\) given \(\beta\) and estimation of \(\beta\) given \(\gamma_i\) using one-step Newton-Raphson updates. More precisely, estimation of the high-dimensional parameters \((\gamma, \beta)\) is feasible since the likelihood (3) can be written as
\[ L(\gamma, \beta) = \prod_{i=1}^{F} L_i(\gamma_i, \beta), \qquad L_i(\gamma_i, \beta) = \prod_{j=1}^{N_i} p_{ij}^{Y_{ij}} (1 - p_{ij})^{1 - Y_{ij}}. \]
Thus, given \(\beta\), \(\gamma_i\) can be estimated via a Newton-Raphson procedure that depends only on one variable in the maximization of \(L_i(\gamma_i, \beta)\). Briefly, the estimation procedure proposed by He et al. (2013) is as follows.
(i) Initialize β^(0) and γ^(0) = (γ_1^(0),…,γ_F^(0)); in our implementation we choose β^(0) = 0, with γ^(0) initialized given β^(0) (e.g., from the observed provider event rates). (ii) The (m+1)-th maximization step for β, given γ^(m), is a one-step Newton-Raphson update of β based on the score and information of L(γ^(m), β). (iii) Given β^(m+1), each γ_i^(m+1) is obtained by a one-step Newton-Raphson update based on the provider-specific score U_i from L_i(γ_i, β^(m+1)). (iv) The above steps are repeated until convergence, defined by the maximum absolute change in the estimates between iterations falling below ε, where ε is some prespecified tolerance level. Denote these final uncorrected estimates as β̂_U and γ̂_U = (γ̂_1,U,…,γ̂_F,U). Programs in R, sample data, and a tutorial are provided in the online Supplementary Materials.
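To illustrate the alternating scheme, the following self-contained Python sketch implements the one-step Newton-Raphson updates for model (1) with a logit link. It is a toy illustration of the idea, not the authors' released R implementation; all names are our own.

    import numpy as np
    from scipy.special import expit

    def fit_fe_model(Y, Z, provider, n_iter=200, tol=1e-6):
        """Alternating one-step Newton-Raphson for the FE logistic model (1).

        Y        : binary outcomes, shape (N,)
        Z        : patient covariates, shape (N, k)
        provider : provider index per patient, integer values 0..F-1
        """
        F = provider.max() + 1
        beta = np.zeros(Z.shape[1])               # beta^(0) = 0
        gamma = np.zeros(F)
        for _ in range(n_iter):
            # One-step NR update for beta, given gamma (k x k system).
            p = expit(gamma[provider] + Z @ beta)
            w = p * (1.0 - p)
            U_beta = Z.T @ (Y - p)
            I_beta = (Z * w[:, None]).T @ Z
            beta_new = beta + np.linalg.solve(I_beta, U_beta)
            # One-step NR update for each gamma_i, given beta (scalar updates).
            p = expit(gamma[provider] + Z @ beta_new)
            w = p * (1.0 - p)
            U_g = np.bincount(provider, weights=Y - p, minlength=F)
            I_g = np.bincount(provider, weights=w, minlength=F)
            # Note: for providers with no events, I_g -> 0 and gamma_i diverges,
            # mirroring the instability discussed in the low information context.
            gamma_new = gamma + U_g / np.maximum(I_g, 1e-12)
            converged = max(np.abs(beta_new - beta).max(),
                            np.abs(gamma_new - gamma).max()) < tol
            beta, gamma = beta_new, gamma_new
            if converged:
                break
        return gamma, beta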
Corrected Estimation of Provider Effects
As described earlier, estimation of the provider effects γ_i for the FE model can be unstable for some providers in the low information context. Thus, we consider an approach to "correct" or stabilize the FE estimates. We adapt the Firth correction in (standard) logistic regression [16,17] to the high-dimensional FE model (1). Recall that for the standard (non-hierarchical data) logistic regression model with N independent units, Firth's modified score equations [16] reduce small-sample bias by adding a bias-reducing penalty to the usual score U(β), i.e., solving U*(β) = 0. This is equivalent to using a penalized likelihood L*(β) = L(β) |I(β)|^{1/2} [17], where the penalty term |I(β)|^{1/2} is equivalent to Jeffreys' prior [18]. Applying this to logistic regression yields the modified estimating equations

Σ_{n=1}^N { y_n − p_n + h_n (1/2 − p_n) } x_nr = 0,  r = 1,…,k,

where h_n is the n-th diagonal element of the hat matrix. For binary outcomes with small sample size, Firth's logistic regression has become a standard approach to reduce bias in the estimated regression coefficients.
We adapt this penalized estimation to the high-dimensional FE model (1) to correct the unstable estimation of γ_i for providers with low information. We first note that β can be precisely estimated because it is based on data from all providers; therefore, penalization of the patient-level risk factor effects is unnecessary. Direct application of Firth's modified score to penalize γ = (γ_1,…,γ_F) is not feasible for the FE profiling model (1) due to the challenge of calculating the score penalties. These are obtained via the diagonals of the N × N hat matrix, which in dialysis population applications is of the order N ≈ 500,000 or larger; N is many orders of magnitude larger for profiling applications in the general population. However, estimating γ with Firth's correction, for a fixed β, is equivalent to sequentially estimating each γ_i individually, for the fixed β, using Firth's correction. This is seen as follows. For a fixed β, the hat matrix used in the estimation of γ is block diagonal across providers, so the estimation of γ using Firth's correction reduces to a sequence of single-parameter estimations of γ_i, penalizing the score U_i using the weights h_ij = w_ij / Σ_{j=1}^{N_i} w_ij, where w_ij = p_ij(1 − p_ij). More specifically, the provider-specific penalized score equations are

U_i*(γ_i) = Σ_{j=1}^{N_i} { Y_ij − p_ij + h_ij (1/2 − p_ij) } = 0.

We propose a simple correction that adjusts the estimates of the provider-specific effects γ_i from Section 3.1 using the modified score U_i*. More precisely, first, β is fixed at the estimate resulting from Section 3.1, namely β̂_U. The provider effects γ_i are then re-estimated using the estimation procedure outlined in Section 3.1 with the following modifications. In Step (i), γ^(0) is initialized given the fixed β̂_U; note that when β^(0) is set to the zero vector, this initial value of γ^(0) reduces to the value previously noted in Step (i) of Section 3.1. In Step (ii), β^(m+1) is set equal to β^(m); in other words, β is no longer estimated. Finally, the score in Step (iii) is modified by replacing U_i with U_i*.
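Because each penalized fit involves only a single parameter, the corrected update is simple to implement. Below is a Python sketch of the Firth-corrected Newton-Raphson update for one provider effect using U_i*; function and variable names are illustrative assumptions.

    import numpy as np
    from scipy.special import expit

    def firth_gamma_update(gamma_i, Y_i, offset_i, n_iter=50, tol=1e-8):
        """Firth-corrected NR estimation of a single provider effect gamma_i.

        Y_i      : outcomes for provider i, shape (N_i,)
        offset_i : fixed linear predictor beta_hat^T Z_ij, shape (N_i,)
        """
        for _ in range(n_iter):
            p = expit(gamma_i + offset_i)
            w = p * (1.0 - p)
            h = w / w.sum()                            # hat-matrix diagonals
            U_star = np.sum(Y_i - p + h * (0.5 - p))   # penalized score U_i*
            I_i = w.sum()                              # information for gamma_i
            step = U_star / I_i
            gamma_i += step
            if abs(step) < tol:
                break
        return gamma_i  # finite even when sum(Y_i) = 0, unlike the raw MLE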
Inference: Identifying Extreme Providers
In profiling, one of the main interests is to identify/flag providers that significantly deviate from the national norm (e.g., the national average). Current public policy in the U.S. penalizes providers that perform significantly worse (W) than the national standard (SRR > 1). Thus, in practice, the goal is to flag/identify providers as W or as not different (ND) from the national standard (SRR not different from 1). Better (B) providers (SRR < 1) are neither penalized nor incentivized.
First, note that for a provider with an adjusted event rate that does not differ from the national norm, γ_i = γ_M, which implies SRR_i = 1. When SRR_i > 1 or SRR_i < 1, the event rate for provider i is greater than or less than the national norm, respectively. Thus, testing the null hypothesis H_0: γ_i = γ_M (equivalently, SRR_i = 1) is of interest for each provider. Simultaneously testing the null hypothesis for thousands of providers is computationally expensive. However, one can take advantage of the fact that β and γ_M can be estimated from the large data set pooled across all providers. Hence, these parameters are estimated and held fixed throughout the proposed algorithm below, which is based on resampling responses under the null hypothesis. Since the global parameters β and γ_M are fixed, model fitting to the resampled data only requires estimation of the provider-level effects γ_i. This reduces the computational burden substantially since each γ_i is estimated using only the data from that provider. The steps of the procedure for each provider i are as follows.
(1) For b = 1,…,B, generate a resampled data set under the null hypothesis: Y_ij^b ~ Bernoulli(p̂_M,ij), where p̂_M,ij = g^{-1}(γ_M + β̂^T Z_ij). (2) For each resampled data set b, estimate the provider effect γ̂_i^b and the corresponding test statistic T_i^b; estimation of γ̂_i^b only involves steps (iii)-(iv) in Section 3.1 for the uncorrected FE model since β is fixed. For the correction method, the estimation proceeds as described earlier in Section 3.2; that is, the corrected estimation algorithm is applied to the b-th data set to obtain p̂_ij^b.
(3) A nominal two-sided p-value for the i-th provider, P_i, is calculated as the proportion of resampled statistics at least as extreme as the observed one, P_i = 2 min{ B^{-1} Σ_{b=1}^B I(T_i^b ≥ T_i^O), B^{-1} Σ_{b=1}^B I(T_i^b ≤ T_i^O) }, where T_i^O is calculated from the original/observed data and I(A) denotes the indicator function for event A.
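The resampling scheme can be sketched in Python as follows, taking the provider-effect estimate itself as the test statistic T_i (one plausible choice, assumed here for illustration). The fitter passed in can be either the uncorrected or the Firth-corrected routine.

    import numpy as np
    from scipy.special import expit

    rng = np.random.default_rng(0)

    def flag_provider(Y_i, offset_i, gamma_M, fit_gamma, B=1000):
        """Null-resampling two-sided p-value for provider i.

        fit_gamma : function (Y, offset) -> gamma_hat, e.g. an uncorrected
                    NR fitter or the Firth-corrected routine above.
        """
        T_obs = fit_gamma(Y_i, offset_i)            # T_i^O from observed data
        p_null = expit(gamma_M + offset_i)          # probabilities under H0: gamma_i = gamma_M
        T_b = np.array([fit_gamma(rng.binomial(1, p_null), offset_i)
                        for _ in range(B)])
        upper = np.mean(T_b >= T_obs)
        lower = np.mean(T_b <= T_obs)
        return 2.0 * min(upper, lower)              # nominal two-sided p-value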
SIMULATION STUDY DESIGN
We designed simulation studies to assess the performance of the uncorrected and corrected FE model estimation methods, mainly with respect to (A) estimation of the provider-specific effects γ_i and the SRR_i; and (B) identification of extreme providers relative to a reference. Data were generated from the model

g(p_ij) = β_0 + γ_i + β^T Z_ij,    (4)

where Z_ij is a vector of patient-level covariates. For the provider effects {γ_i}_{i=1}^F, 2.5% were under-performers (W: "worse") and 2.5% were over-performers (B: "better"), whose effects γ_i were equally spaced in the intervals [0.4, 1.0] and [−1.0, −0.4], respectively. The remaining 95% of providers, with effects not different (ND) from the reference, were generated from a N(0, σ^2) distribution with σ^2 = 0.2^2. Note that a constant β_0 has been added to simulation model (4) to conveniently control the baseline rate of readmission (outcome data sparsity); the baseline readmission rates considered were 20%, 10%, 5%, and 3%, corresponding to β_0 = log(1/13.5), log(1/33), log(1/73), and log(1/126), respectively. For each baseline readmission rate setting, 200 data sets were generated, and the estimation (Section 3) and inference (Section 3.3) procedures were applied to each simulated data set.
The provider volume of each generated data set ranged from a minimum of 48 to a maximum of 195 patients on average, similar to the real USRDS data used in applications (e.g., see [14]). More specifically, the number of patients in each provider was generated from a truncated Poisson distribution following He et al. (2013), truncated below so that each provider had at least 15 patients. This process mimics the sparse data structure of dialysis facilities (providers) in practice.
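A compact Python sketch of this data-generating design is given below; the truncation floor of 15 patients follows our reading of the garbled source, and the mean provider size and block ordering of provider types are placeholder assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def simulate_providers(F=5000, beta0=np.log(1/33), sigma=0.2, mean_size=75):
        """Provider effects and sizes following the simulation design:
        2.5% worse (gamma in [0.4, 1.0]), 2.5% better (gamma in [-1.0, -0.4]),
        95% 'not different' drawn from N(0, sigma^2); provider sizes from a
        Poisson truncated below (floor of 15 assumed here)."""
        n_extreme = int(0.025 * F)
        gamma = np.concatenate([
            np.linspace(0.4, 1.0, n_extreme),            # under-performers (W)
            np.linspace(-1.0, -0.4, n_extreme),          # over-performers (B)
            rng.normal(0.0, sigma, F - 2 * n_extreme),   # ND providers
        ])
        sizes = np.maximum(rng.poisson(mean_size, F), 15)
        return gamma, sizes, beta0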
Estimation of Provider-Specific Effects and SRRs
The results for the provider-specific estimates of γ_i for the 125 (2.5%) under-performers (γ_i > 0) and 125 over-performers (γ_i < 0), for the case of a 3% overall event rate (most sparse), are provided in Figure 1A, where averages of the γ̂_i estimates over 200 simulated data sets are plotted. As expected, under this extremely low information context, provider effect estimates are unstable for the uncorrected FE method.
However, note that these providers are mainly the over-performers (γ_i < 0) with low or zero events (Σ_j Y_ij small), leading to "explosion" of the estimates (Figure 1A). It is important to note that these unstable estimates are in the direction of the true effect (the negative direction for negative γ_i, where γ̂_i → −∞). Also as expected, estimates for under-performers (γ_i > 0) are less unstable and more on target for the uncorrected FE method. The corrected estimation approach, which adapts Firth's modified score equation to the FE model, largely eliminates the instability, and the estimates are more on target for the true γ_i (Figure 1B). Figure 2 (left column) shows estimates of the γ_i for increasing percentages of overall events, from 3% to 20%, for the uncorrected FE method. Clearly, the frequency of unstable estimates for γ_i < 0 decreased with increasing overall events, although unstable estimates are apparent even at a 10% event rate. Moreover, the magnitude of the unstable estimates (γ̂_i < 0) declined quickly as the overall event rate increased (e.g., at 20%).
Next, we summarize results for estimation of the provider-specific SRRs. As described in Section 2, the SRR is the summary performance index for each provider used in practice, which incorporates the patient-level risk factors Z_ij and their estimated effects β̂. More specifically, given the provider and patient case-mix effect estimates for each approach, denoted by γ̂_i* and β̂, respectively, the estimated SRR for provider i is

SRR_i* = Σ_{j=1}^{N_i} p̂_ij* / Σ_{j=1}^{N_i} p̂_M,ij*,    (5)

where p̂_ij* = g^{-1}(γ̂_i* + β̂^T Z_ij), p̂_M,ij* = g^{-1}(γ̂_M* + β̂^T Z_ij), and * denotes the uncorrected (U) or corrected (C) approach. Figure 3 (left column) summarizes the uncorrected FE model estimates of the SRR for 3% to 20% overall outcome events. We note that even though specific γ̂_i < 0 were unstable for highly sparse data (e.g., at 3%-10%; Figure 2), the corresponding SRR estimates are stable overall and target the true SRRs, because the SRR incorporates patient characteristics and their effects as well as the provider-specific effects, as shown in (5); see Figure 3 (left column). Average SRR estimates for the corrected estimation performed well and are summarized in Figure 3 (right column). However, we note that for extremely sparse data (e.g., at 3%), the uncorrected approach slightly overestimates SRRs while the corrected approach slightly underestimates SRRs for truly worse providers (true SRR > 1; Figure 4, top). For truly better providers (true SRR < 1), both methods slightly overestimate the true SRRs, although more so with the corrected method. Differences in SRR estimates between the two methods are negligible as the overall percentage of events increases (e.g., at 20%; Figure 4, bottom).
Flagging Extreme Providers/Facilities
The overall performance of the uncorrected and corrected FE methods in identifying extreme providers is assessed in terms of sensitivity (SEN) to correctly identify providers that under-perform (W: "worse") or over-perform (B: "better") relative to the reference standard (e.g., the national reference), and specificity (SPEC), which refers to the correct identification/flagging of providers whose performance is not different from the reference standard (ND: "not different"). We note that provider assessment policies in practice focus on identifying under-performing providers (W providers), as those are tied to payment policy or regulatory goals. Figure 5 summarizes the distribution of SEN-W, SEN-B, and SPEC for varying levels of outcome sparsity, ranging from a 3% to a 20% overall outcome rate. For extremely sparse data of 3% and 5%, the uncorrected method has the highest sensitivity to detect under-performing providers (higher SEN-W; left column). This is expected since, for truly worse providers, there are more outcome events (Σ_j Y_ij); see Figure 5 (left column). SEN-W rates were similar between the uncorrected and corrected methods at a 20% overall outcome rate.
Because the event counts are zero or low for truly better providers in the context of sparse outcome data, the unstable/poor estimation of provider effects by the uncorrected method results in lower sensitivity to detect over-performing providers (lower SEN-B) compared to the corrected method (Figure 5, middle column). However, note that the nominal SEN-B rates are low overall, as expected, compared to the nominal SEN-W rates. This is expected in the low information context since B providers have fewer readmissions, making it difficult to correctly identify B providers when the outcome is sparse. SPEC rates were high and similar between the uncorrected and corrected methods (Figure 5, right column).
As mentioned earlier, the main current objective of flagging "extreme" providers in profiling analysis focuses on identifying W providers and ND providers.

[Figure 5 caption: Overall performance of the uncorrected and corrected estimation methods to identify truly worse providers (sensitivity-worse), truly better providers (sensitivity-better), and specificity (providers not different from the reference), across data sparsity of 3%, 5%, 10%, and 20% overall outcome event rate.]

[Figure caption: Displayed is the average of each SRR_i estimate, averaged over 200 simulated data sets.]
Providers that over-perform (B providers) are not relevant to current payment policy or regulatory objectives. Therefore, under this regime, it is of interest to ensure that there are no (or a low rate of) false negatives that misclassify/flag a B provider as a W provider (FN_{B→W}). Indeed, there are none, i.e., FN_{B→W} = 0 across all levels of data sparsity (Figure 6), which is not surprising since W and B providers are on opposite tails of the distribution of providers. This holds for the uncorrected FE model (as well as the corrected estimation method) because the unstable estimates of γ_i lie in the same (negative) direction as the true γ_i (as pointed out earlier), despite the instability of the provider-specific estimates. However, it is not uncommon for a B provider to be falsely classified as an ND provider (FN_{B→ND}). Although FN_{B→ND} decreases with an increasing percentage of overall outcome events, as expected, FN_{B→ND} is common in the extremely low information context (e.g., 3% and 5% overall event rates; Figure 6). We emphasize that a high FN_{B→ND} does not affect current public policy because over-performers are not incentivized and are considered "ND" providers anyway. Therefore, the FE profiling model, even uncorrected, is still useful in the low information context with respect to the current public policy goal of identifying W and ND providers. However, if public policy evolves to also incentivize better performance, then novel methods able to correctly identify B providers with high sensitivity will be needed.
DISCUSSION
Seminal works by Kalbfleisch and Wolfe [8] and He et al. [15] show that FE model estimates have smaller error for outlier providers whose effects are exceptionally large or small, and these extreme providers are precisely the ones we wish to identify in profiling analysis. High-dimensional FE models were then used to assess the performance of dialysis facilities (providers) with respect to all-cause hospital readmissions, which are frequent outcomes in dialysis patients. Subsequently, our own works have elucidated several operating characteristics of the FE profiling models [9,10], which have been applied to assess the performance of dialysis facilities with respect to all-cause 30-day readmissions [11,12,14]. However, to date no work has examined the performance of FE models in the low information context where the outcome is sparse. The current study starts to fill this gap in knowledge. Several findings from this study have important practical impact in the low information context. First, even though the provider-specific estimates with true γ_i < 0 (truly B providers) are unstable, they are in the same direction as the true effects, and the instability has only a moderate effect on the estimation of SRRs; i.e., the SRRs are reasonably well-estimated, and they are the relevant quantities used in practice as they incorporate patient case-mix. However, if the provider-specific estimates γ_i are themselves of interest, then our proposed correction method can be used to provide better estimates, especially for uncorrected γ̂_i that are substantially less than zero. Second, sparse outcome data most directly impact inference for B providers, because true over-performers are the ones that contribute no or few events (readmissions); however, this "deficit" in estimation does not greatly impact the identification of W providers/under-performers and ND providers, which is the current focus of profiling in practice. Development of novel methods with better sensitivity for flagging B providers would be useful when public policies or regulatory goals incorporate an incentive for over-performers.
ACKNOWLEDGEMENT
This study was supported by National Institute of Diabetes and Digestive and Kidney Diseases grant R01DK092232. The interpretation/reporting of the results presented is the responsibility of the authors and in no way should be seen as an official policy or interpretation of the U.S. government.

APPENDIX

We describe details of calculating the penalties for Firth's modified score equations, adapted to the high-dimensional FE profiling model in Section 3.2. For a fixed β, the hat matrix used in the estimation of γ with Firth's correction is block diagonal across providers, with diagonal elements h_ij = w_ij / Σ_{j=1}^{N_i} w_ij, where w_ij = p_ij(1 − p_ij).
Dependence Structure of Covariates in Simulation Model
The correlation matrix and the standard deviations of the patient case-mix variables, Z_ij1,…,Z_ij15, are summarized in Table 1.
Online Supplementary Materials: Analysis Example and R Codes
Example dataset, R codes, and tutorial for fitting the uncorrected and corrected models are publicly available at https://sites.google.com/view/usrds-modeling/software.
Accretion models of Sgr A*
The supermassive black hole in the center of our Galaxy, Sgr A*, is unique because the angular size of the black hole is the largest in the sky, thus providing detailed boundary conditions on, and much less freedom for, accretion flow models. In this paper I review advection-dominated accretion flow (ADAF; another name is radiatively inefficient accretion flow) models for Sgr A*. This includes the development and dynamics of ADAFs, and how they explain observational results including the multi-waveband spectrum, radio polarization, IR and X-ray flares, and the size measurements at radio wavebands.
Introduction
The center of our Galaxy provides the best evidence to date for a massive black hole (e.g., Schödel et al. 2002; Ghez et al. 2003), associated with the compact radio source Sgr A* (see, e.g., Melia & Falcke 2001). Since the original discovery of Sgr A* in 1974, there have been intensive efforts on both the observational and theoretical sides, with dramatic progress in the past few years. The reason why we are so interested in this object is its proximity, which allows us to determine observationally the dynamics of gas quite close to the BH, providing unique constraints on theoretical models of accretion flows. Before introducing the model, I first briefly review the main observational results on Sgr A*. As shown by the data points in Fig. 2, its radio spectrum consists of two components. The component below 86 GHz has a spectrum F_ν ∝ ν^0.2, while the high-frequency component, the "submm bump", has a spectrum F_ν ∝ ν^0.8 up to ∼10^3 GHz (e.g., Falcke et al. 1998; Zhao et al. 2003). Variability at centimeter and millimeter wavelengths is detected on timescales ranging from hours to years, with amplitudes of less than 100% (Zhao et al. 2003; Herrnstein et al. 2004; Miyazaki et al. 2004). A high level of variable linear polarization (∼2%-10%) at frequencies higher than ∼150 GHz puts an upper limit of 7 × 10^5 rad m^-2 on the rotation measure, which argues for a low density in the innermost region of the ADAF (e.g., Aitken et al. 2000; Bower et al. 2003; Marrone et al. 2006; Macquart et al. 2006; Quataert & Gruzinov 2000).
At IR wavelengths, the source is highly variable. Genzel et al. (2003) detected Sgr A* at 1.6-3.8 µm, with a factor of ∼1-5 variability on timescales of ∼10-100 min. Similarly, at 3.8 µm, Ghez et al. (2004) found that the flux changes by a factor of 4 over a week, and by a factor of 2 in just 40 min. When the IR spectrum is described by a power law, Gillessen et al. (2006) found that the spectral index is correlated with the instantaneous flux, whereas Hornstein et al. (2006) found that it remains constant during the flare process.
Sgr A* has been convincingly detected in the X-rays (Baganoff et al. 2001, 2003; Goldwurm et al. 2003). The X-ray emission has two distinct components. In "quiescence", the emission is soft and relatively steady, with a large fraction of the X-ray flux coming from an extended region with a diameter ≈1.4″ (Baganoff et al. 2001, 2003). Several times a day, however, Sgr A* has X-ray "flares" in which the X-ray luminosity increases by a factor of a few to ∼50 for roughly an hour. For most flares, the spectrum is hard, with a photon index of Γ = 1.3 (+0.5/−0.6). XMM, however, detected a very bright and soft flare with Γ = 2.5 ± 0.3 (Porquet et al. 2003). Several recent multiwavelength campaigns found that there is no time lag between the IR and X-ray flares (Eckart et al. 2004, 2005; Yusef-Zadeh et al. 2006). This strongly suggests a common physical origin. The short timescale argues that the emission arises quite close to the BH, within ∼10 R_S (where R_S is the Schwarzschild radius). Sgr A* is extremely dim overall, with a bolometric luminosity of only L ≈ 10^36 erg s^-1 ≈ 3 × 10^-9 L_Edd.
2. Accretion flow models: from ADAF to RIAF

Consider a steady axisymmetric accretion flow. The dynamics of the flow are described by a set of height-integrated differential equations expressing the conservation of mass, radial momentum, angular momentum, and the energy of electrons and ions (e.g., Narayan, Mahadevan & Quataert 1998). The mass conservation equation, allowing for an outflow, can be written as

Ṁ = 4πRHρ|v| ∝ R^s,    (1)

where the quantities have their usual meaning. Specifically, s describes the strength of the outflow and δ the fraction of the turbulent energy which directly heats electrons. The radiative processes include synchrotron and bremsstrahlung emission and their Comptonization. After the dynamics of the accretion flow are solved, we can calculate the emitted spectrum and polarization fraction and compare them with the observed ones. The above set of equations holds for any accretion model including, e.g., the standard thin disk, except that the standard thin disk is one-temperature due to the strong coupling between electrons and ions, so the two energy equations are unified into one. The ADAF solution is one set of solutions in the regime Ṁ ≲ α²Ṁ_Edd, where Ṁ_Edd ≡ 10 L_Edd/c² is the Eddington accretion rate. An ADAF has many distinct properties compared to the standard thin disk. The gas temperature is much higher, and the flow is optically thin and geometrically thick. Perhaps most importantly, the radiative efficiency of an ADAF is very low, η_ADAF ∼ 0.1 Ṁ/(α²Ṁ_Edd) (for details, see Narayan & Yi 1994; Narayan, Mahadevan & Quataert 1998). This is the reason why ADAF models are so successful in explaining the dim nature of Sgr A* (e.g., Manmoto et al. 1997).
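As a quick numerical illustration of this efficiency scaling, the Python snippet below plugs in an assumed black hole mass of 4 × 10^6 M_⊙, α = 0.1, and the accretion rate near the horizon quoted later in the text; none of these values is part of this paragraph, and the result is an order-of-magnitude sketch only.

    # Order-of-magnitude check of eta_ADAF ~ 0.1 Mdot / (alpha^2 Mdot_Edd),
    # assuming M_BH ~ 4e6 M_sun for Sgr A* and alpha = 0.1.
    M_sun = 1.989e33            # g
    c = 3.0e10                  # cm/s
    yr = 3.156e7                # s
    L_Edd = 1.26e38 * 4.0e6     # erg/s, Eddington luminosity for 4e6 M_sun
    Mdot_Edd = 10.0 * L_Edd / c**2          # Mdot_Edd = 10 L_Edd / c^2, in g/s

    alpha = 0.1                              # viscosity parameter (assumed)
    Mdot = 4e-8 * M_sun / yr                 # g/s, rate near the horizon (YQN03 value)

    eta = 0.1 * Mdot / (alpha**2 * Mdot_Edd)
    print(f"Mdot/Mdot_Edd ~ {Mdot/Mdot_Edd:.1e}, eta_ADAF ~ {eta:.1e}")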
There has been a significant change in the theoretical understanding of ADAFs over the past few years. First, global, time-dependent numerical simulations reveal that very little of the mass available at large radii actually accretes onto the black hole; most of it is lost to a magnetically driven outflow or circulates in convective motions (e.g., Stone, Pringle, & Begelman 1999; Hawley & Balbus 2002), i.e., s > 0 in eq. (1). The physical reason for the presence of the outflow is that, since very little energy is radiated away, the Bernoulli parameter of an ADAF is almost zero, much larger than that in the standard thin disk; therefore the gas can more easily escape to infinity once it is perturbed (Narayan & Yi 1994; Blandford & Begelman 1999).
In the early version of the ADAF, δ is assumed to be very small, δ ≈ 10^-2-10^-3, i.e., the turbulent energy only heats ions. However, there are large theoretical uncertainties in how electrons are heated in the accretion flow (Quataert & Gruzinov 1999), and a larger δ seems physically more plausible since processes like magnetic reconnection are likely to heat electrons directly. A large δ is also required from the observational side. Quataert & Narayan (1999) show that when outflows are present, to produce the correct amount of emission from Sgr A*, the electrons in the accretion flow should be hotter than in the early version of the ADAF to compensate for the decrease of radiation due to the decrease of density. Stated another way, δ must be larger, δ ≲ 1. Sometimes the "updated" version of the ADAF, with s > 0 and δ ≲ 1, is called a radiatively inefficient accretion flow (RIAF). In this review we simply use the original term: "ADAF".
When δ ≈ 10^-2, almost all of the turbulent heating q+ is stored in the flow as the entropy of ions, and very little is transferred to the electrons through Coulomb collisions q_ie, i.e., q_ie ≪ q+. We generally think this is why the radiative efficiency of an ADAF is so low. If δ ≲ 1, which means that a large amount of turbulent energy heats electrons directly, what is the radiative efficiency of an ADAF? In this case, as shown by Fig. 1, almost all of the turbulent energy heating the electrons is stored in the electrons as entropy, i.e., the electrons are advection-dominated. This ensures the low efficiency of an ADAF even though δ is large. We would like to emphasize that with increasing accretion rate, the flow is no longer advection-dominated and the radiative efficiency rapidly increases (see Fig. 7).

3. Application to Sgr A*
Outer boundary conditions
The outer boundary conditions include the temperature, density, mass accretion rate, and angular momentum at the outer boundary. The outer boundary is determined by the Bondi radius. For uniformly distributed matter with ambient density ρ_0 and sound speed c_s, the Bondi radius of a black hole of mass M is R_Bondi ≈ GM/c_s², and the accretion rate is Ṁ_Bondi ≈ 4πR_Bondi² ρ_0 c_s. Chandra observations of the Galactic center detect extended diffuse emission within 1-10″. The inferred gas density and temperature are ≈100 cm^-3 and ≈2 keV on 1″ scales (Baganoff et al. 2003). The corresponding Bondi radius is R_Bondi ≈ 0.04 pc ≈ 1″ ≈ 10^5 R_S, and the Bondi accretion rate is Ṁ_Bondi ≈ 10^-5 M_⊙ yr^-1. Given that the accreted gas should have some angular momentum (see below), the accretion rate should be somewhat smaller. A recent 3D numerical simulation of the accretion of stellar winds onto Sgr A* by Cuadra et al. (2006) obtains Ṁ ≈ 3 × 10^-6 M_⊙ yr^-1. If gas were accreted at this rate onto the black hole via the standard thin disk, the expected luminosity would be L ≈ 0.1 Ṁ_Bondi c² ≈ 10^41 erg s^-1, five orders of magnitude higher than the observed luminosity. This is the strongest argument against a thin disk in Sgr A*.
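The Bondi estimates quoted here follow directly from the Chandra numbers; the Python sketch below reproduces them to order of magnitude, assuming a black hole mass of 4 × 10^6 M_⊙ and a fully ionized hydrogen gas (both assumptions, since neither is stated in this paragraph).

    import numpy as np

    G = 6.674e-8                 # cgs
    M_sun = 1.989e33             # g
    pc = 3.086e18                # cm
    yr = 3.156e7                 # s
    m_p = 1.673e-24              # g

    M = 4.0e6 * M_sun                        # assumed black hole mass
    kT = 2.0e3 * 1.602e-12                   # 2 keV in erg
    c_s = np.sqrt(5.0/3.0 * kT / m_p)        # adiabatic sound speed, cm/s
    rho = 100.0 * m_p                        # n ~ 100 cm^-3, hydrogen gas

    R_B = G * M / c_s**2                     # Bondi radius
    Mdot_B = 4.0 * np.pi * R_B**2 * rho * c_s  # Bondi accretion rate
    print(f"R_Bondi ~ {R_B/pc:.2f} pc, Mdot_Bondi ~ {Mdot_B*yr/M_sun:.1e} Msun/yr")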
The numerical simulation indicates that the angular momentum is large, with a circularization radius of about 10^4 R_S (Cuadra et al. 2006). This result casts doubt on the "small angular momentum" assumption of spherical accretion models (e.g., Melia 1992). A number of authors have applied ADAF models to explain the observations of Sgr A*. In the early applications (e.g., Mahadevan et al. 1997), outflows and the direct heating of electrons by turbulence are neglected, i.e., s = 0 and δ = 10^-2 are adopted. These works can roughly account for the observed low luminosity and spectrum of Sgr A*. However, such models cannot explain the observed high linear polarization. This is because the predicted density is too high, and the rotation measure is ∼10^10 rad m^-2 in the region where the high-frequency radio emission is produced, much higher than the upper limit of 7 × 10^5 rad m^-2 obtained by Marrone et al. (2006). This problem can be solved by taking into account the presence of outflows, i.e., s > 0. In this case, most of the accreting gas is lost to the outflow, and thus the density in the innermost region is much lower.
Explaining the quiescent state
Another problem with the original ADAF model is that it significantly under-predicts the low-frequency radio emission. This can be solved by introducing another component to the model. One possibility is a jet (Falcke & Markoff 2000; Yuan, Markoff & Falcke 2002). Alternatively, some fraction of the electrons in the ADAF may follow a nonthermal distribution, due to acceleration processes in the accretion flow such as weak shocks and turbulent dissipation. Their synchrotron emission can explain the radio "excess". Fig. 2 shows the modeling result of an ADAF model with outflow and nonthermal electrons, taken from Yuan, Quataert & Narayan (2003, hereafter YQN03). The parameters are Ṁ_out ≈ 10^-6 M_⊙ yr^-1, s = 0.27, and δ = 0.55. The nonthermal electrons are assumed to follow a power-law distribution with spectral index p = 3.5. The injected energy in nonthermal electrons is equal to 1.5% of the energy in thermal electrons. The accretion rate close to the black hole is Ṁ ≈ 4 × 10^-8 M_⊙ yr^-1 ≪ Ṁ_out. As shown in the figure, the submm bump is mainly due to synchrotron emission from thermal electrons, the low-frequency radio and IR emission comes from synchrotron emission from nonthermal electrons, and the extended X-ray emission comes from bremsstrahlung emission from the region around the Bondi radius. Since most of the accreting gas is lost to the outflow, the rotation measure is very small: RM ≈ 10^7 and ≲5 × 10^5 rad m^-2 at 230 GHz if we integrate along the equatorial plane and the rotation axis, respectively. Note that these should be regarded as upper limits since we assume the magnetic field is fully coherent and points along the line of sight, while the magnetic field should be predominantly toroidal.
As noted above, the submm emission is due to thermal electrons. Since the linear polarization of optically thick thermal synchrotron emission from a uniform medium is suppressed by exp[−τ], where τ is the synchrotron optical depth, one also needs to check whether the model can roughly account for the magnitude of the observed linear polarization (≲10%). YQN03 calculated the linear polarization produced by the thermal electrons in their models, including optical depth effects and Faraday rotation. Fig. 3a shows τ as a function of radius for three frequencies, and Fig. 3b shows by open circles the degree of linear polarization as a function of frequency when Faraday rotation is neglected. From the figure we see that thermal electrons can readily account for the observed level of linear polarization from Sgr A*.

Understanding the flares

Markoff et al. (2001) showed that the flares are probably due to enhanced electron heating or acceleration, rather than a change in the accretion rate onto the black hole. The IR and X-ray flares in Sgr A* may be analogous to solar flares, in which magnetic energy is converted into thermal energy, accelerated particles, and bulk kinetic energy due to magnetic reconnection. This speculation is supported by the 3D MHD numerical simulation of Machida et al. (2003). Unfortunately, even for solar flares, some important aspects such as the detailed acceleration mechanism and the energy distribution of the heated/accelerated electrons remain unclear (Miller 1998). Based on this idea, YQN03 and Yuan, Quataert & Narayan (2004) propose that during flare events magnetic reconnection occurs in the innermost region of the ADAF. As a result, some electrons are heated/accelerated into a thermal/power-law distribution. Two models are proposed to explain the IR and X-ray flares in Sgr A*. In the first, the "synchrotron+SSC" model shown in Fig. 4, the electrons are accelerated into a single power-law distribution with a maximum Lorentz factor γ_max of several hundred. The IR flare is due to synchrotron emission, while the X-ray flare is due to up-scattering of the IR photons by these electrons. In the second, the "synchrotron" model shown in Fig. 5, it is assumed that some electrons are heated to a higher temperature while others are accelerated into a (hard) power-law distribution with γ_max ∼ 10^6. The synchrotron emission of these two electron populations is responsible for the IR and X-ray emission, respectively. I would like to emphasize that both models can explain the observations well, and there are no strong arguments against either one given our poor knowledge of particle acceleration.
3.4. Using the size measurement to test the models

Recent radio observations with the VLBA at 7 and 3.5 mm have produced high-resolution images of Sgr A* and detected its wavelength-dependent size (Bower et al. 2004; Shen et al. 2005). The measured size of the image of Sgr A* at 7 mm is 0.712 (+0.004/−0.003) mas (Bower et al. 2004), and 0.724 ± 0.001 mas and 0.21 (+0.02/−0.01) mas at 7 and 3.5 mm, respectively (Shen et al. 2005). This provides an additional test of accretion models of Sgr A*. Yuan, Shen, & Huang (2006) calculate the predicted size of the YQN03 model. The most important point they realize is that, since the intrinsic intensity profile of the ADAF is not a Gaussian, as shown by the dashed lines in the second column of Fig. 6, it is impossible to directly compare the predicted intrinsic size with observation. Rather, they calculate the predicted intrinsic intensity profile of the ADAF models and then take into account the scattering broadening toward the Galactic center to obtain the simulated image. They then compare the simulated image with the observed one. The third column in Fig. 6 shows the simulated image at three wavelengths, and the fourth column shows the fits of the intensity profile by a Gaussian. The obtained sizes of the simulated images are 0.729 (+0.01/−0.009) mas and 0.248 (+0.001/−0.002) mas at 7 mm and 3.5 mm, respectively. The simulated size at 7 mm is in good agreement with the value observed by Shen et al. (2005) within the error bars, but slightly larger than the size observed by Bower et al. (2004); the size at 3.5 mm is a little larger than the observation of Shen et al. (2005). Given that the size of the source may be variable, and given the uncertainties in the calculations, they conclude that the predictions of the YQN03 model are in reasonable agreement with the observations. They predict that GR effects may be detectable at 1.3 mm.
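The essential point, that a non-Gaussian intrinsic profile must be convolved with the scattering kernel before a Gaussian size is quoted, can be illustrated with a toy one-dimensional Python sketch; the intrinsic profile and scattering FWHM below are made-up stand-ins, not the YQN03 model.

    import numpy as np
    from scipy.ndimage import gaussian_filter1d
    from scipy.optimize import curve_fit

    def apparent_fwhm(x, intrinsic, scatter_fwhm):
        """Convolve a (non-Gaussian) intrinsic profile with the scattering
        Gaussian and fit a Gaussian to the result, mimicking how observed
        sizes are quoted. x and FWHM values in mas; illustrative only."""
        sigma_pix = scatter_fwhm / 2.355 / (x[1] - x[0])
        blurred = gaussian_filter1d(intrinsic, sigma_pix)
        gauss = lambda x, a, s: a * np.exp(-x**2 / (2.0 * s**2))
        (a, s), _ = curve_fit(gauss, x, blurred,
                              p0=[blurred.max(), scatter_fwhm / 2.355])
        return 2.355 * abs(s)

    x = np.linspace(-3.0, 3.0, 2001)
    intrinsic = 1.0 / (1.0 + (x / 0.1)**2)**1.5   # toy non-Gaussian profile
    print(apparent_fwhm(x, intrinsic, scatter_fwhm=0.7))  # ~7 mm scattering size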
There are two alternative models of Sgr A*, namely the jet model of Falcke & Markoff (2000) and the coupled jet-ADAF model of Yuan, Markoff & Falcke (2002). One difference between these two models is that in the former the radio emission above ∼86 GHz is produced by the nozzle of the jet, while in the latter the contribution of the ADAF is significant. In the jet-ADAF model, the emission from the ADAF can dominate over that from the jet for suitable parameters. Moreover, if the "old" ADAF in this model is "updated" (i.e., including outflow), the only difference between the jet-ADAF model and the YQN03 model is the origin of the radio emission below ∼86 GHz. In this case, the size of Sgr A* predicted by the jet-ADAF model will be consistent with the observations. On the other hand, the size at 3.5 mm predicted by the jet model is ≳0.04 mas (Falcke & Markoff 2000), much smaller than the observed value, so some modifications of the model are required (Markoff et al., in preparation).
Summary
The supermassive black hole in our Galactic center represents a unique opportunity to probe the physics of accretion, especially at extremely low accretion rates. In this review, I first briefly introduce the dynamics and evolution of the ADAF. I then argue that this model can provide a reasonable explanation for most of the current observations.
Chandra observations tell us the density and temperature of the hot gas at ∼1″ ∼ 0.04 pc. This radius happens to be the Bondi radius, where the gas is captured by the gravity of the central black hole and begins to be accreted. The Bondi accretion rate, Ṁ_Bondi ∼ 10^-5 M_⊙ yr^-1, provides a good estimate of the real accretion rate. As a comparison, the numerical simulation gives Ṁ ∼ 3 × 10^-6 M_⊙ yr^-1. Since the bolometric luminosity of Sgr A* is only 10^36 erg s^-1, the radiative efficiency must be very low, ∼5 × 10^-6. The standard thin disk model (Shakura & Sunyaev 1976) is therefore ruled out immediately.
"Old" ADAF models can naturally explain such a low efficiency (e.g., ). However, these models fail to explain the high linear polarization at submm waveband because the density and further, the rotation measure, are too large. On the other hand, theoretical studies of ADAFs also indicates the presence of outflow (e.g., Stone, Pringle & Begelman 1999; Blandford & Begelman 1999). This feature is taken into account in the "new" ADAF models (or RIAF; YQN03). The YQN03 model can explain the quiescent state spectrum and the polarization, as shown by Figs. (2)&(3). The accretion rate close to the horizon is only 4 × 10 −8 M ⊙ yr −1 . So the low radiative efficiency of the ADAF in Sgr A* is partly because of the outflow, which contributes a factor of ∼ 10 −2 , and partly because of the energy advection, which contribute a factor of ∼ 5 × 10 −4 . The IR and X-ray flares are explained by the synchrotron and inverse-Compton emissions from the heated/accelerated electrons during the magnetic reconnection events in the innermost region of the ADAF (Figs. 4&5). Finally, the YQN03 model satisfactorily pasts the test of recent observations of the size of Sgr A* at 3.5 and 7 mm wavebands (Fig. 6). Moreover, this model predicts that the observation at 1.3 mm should be able to detect GR effects (Fig. 6).
Prevention of chemically induced changes in synaptosomal membrane order by ganglioside GM1 and alpha-tocopherol.
Synaptosomal membrane order has been studied by analysis of light depolarization by fluorescent dyes intercalated within membranes following exposure to various environmental toxicants. Two probes were explored: 1,6-diphenyl-1,3,5-hexatriene (DPH), signaling predominantly from the lipid-rich membrane core, and 1-[4-(trimethylamino)phenyl]-6-phenyl-1,3,5-hexatriene (TMA-DPH), reporting from the more hydrophilic membrane surface. Chlordecone, a neurotoxic insecticide, decreased the anisotropy of either dye, and this change could be prevented by prior treatment of synaptosomes with ganglioside GM1 but not α-tocopherol. Exposure to an iron-ascorbic acid oxidizing mixture enhanced synaptosomal membrane order, and this effect was blocked by preincubation with α-tocopherol but not ganglioside GM1. While these interactions may have partially reflected additive anisotropy changes, the protective agents were also effective at concentrations where they did not in themselves modulate membrane order. Methyl mercuric chloride at concentrations up to 100 µM had no discernible effect upon membrane order. It is suggested that these changes in membrane order may underlie some of the previously reported variations in the content of ionic calcium and in the leakiness of synaptosomes.
Correspondence: S.C. Bondy, Southern Occupational Health Center, Community and Environmental Medicine, University of California at Irvine, Irvine, CA 92717, U.S.A.

α-Tocopherol can wholly or partially protect against the induced changes in [Ca2+]i. These compounds also reduce the induced loss of fura-2 from synaptosomes [12]. Both of these potentially protective agents have some efficacy in mitigating neurotoxic damage effected in the intact animal [13-15]. In the case of α-tocopherol, the protective effect is believed to be due to its antioxidative properties, while the mechanism of ganglioside GM1-induced amelioration of chemically induced damage is not as clearly understood.
The coincidence of elevated [Ca2+]i and increased membrane permeability suggests that elevated synaptosomal 'leakiness' may allow entry of calcium across the synaptosomal limiting membrane, activating the major phospholipid degradative enzymes (phospholipases) and leading to compromised integrity. This study was designed to ascertain the extent to which changes in [Ca2+]i and membrane permeability could be accounted for in terms of altered membrane order. While synaptosomal preparations represent a rather heterogeneous assembly, they have been of great utility in the study of presynaptic events. However, the changes in membrane order reported here may reflect the properties of cerebral plasma membranes in general, since they cannot be precisely attributed to a single component. We have employed two fluorescent probes, one of which is lipophilic and primarily reports from the lipid core of the membrane (1,6-diphenyl-1,3,5-hexatriene, DPH) [16], while the other (1-[4-(trimethylamino)phenyl]-6-phenyl-1,3,5-hexatriene, TMA-DPH) possesses a cationic group which confines the dye to the superficial hydrophilic membrane domain [17,18]. The neurotoxic agents methyl mercuric chloride and chlordecone have been utilized. In addition, a free-radical-inducing iron and ascorbic acid mixture deleterious to synaptosomal integrity has been investigated. This latter condition also induces synaptosomal 'leakiness' but does not elevate [Ca2+]i (unpublished observations).
Preparation of synaptosomes
Adult male Crl:CD rats (Charles River Breeding Laboratories, Inc., Wilmington, MA), 4 to 5 months old and weighing 290 g to 350 g, were used. Rats were decapitated, the brains excised quickly on ice, and the whole brain except the cerebellum and pons-medulla dissected out. Synaptosomes were prepared by the modification of Dodd et al. [19] of the method of Gray and Whittaker [20]. Briefly, after homogenization in 10 volumes of cold 0.32 M sucrose, the homogenate was centrifuged (1900 × g, 10 min, 0-4°C) and the supernatant laid over 1.2 M sucrose (10 ml). After high-speed centrifugation (250,000 × g, 25 min), the layer at the interface was collected, diluted 2.5-fold with 0.32 M sucrose and laid over 8 ml of 0.8 M sucrose. After centrifugation again at high speed, the synaptosomal pellet was suspended in Hepes buffer (pH 7.4) to give a tissue concentration of 0.15 g-equivalent/ml (about 1.6 mg/ml of protein). The composition of the Hepes buffer was (millimolar): NaCl, 120; KCl, 2.5; NaH2PO4, 1.2; MgCl2, 0.1; NaHCO3, 5.0; glucose, 6.0; CaCl2, 1.0; and Hepes, 10.
Labeling with fluorescent probes
Three 1-ml aliquots of the synaptosomal suspension were each diluted with 5 ml of Hepes buffer (pH 7.4) and centrifuged (12,500 × g, 8 min). The pellets were resuspended in 4 ml Hepes buffer (pH 7.4) and then combined. One 2-ml aliquot was incubated with 5 µM DPH for 30 min at 37°C. Another 2-ml aliquot was incubated with 5 µM TMA-DPH for 15 min at 37°C. The DPH and TMA-DPH probes were dissolved in tetrahydrofuran (THF) and THF-water (1:1), respectively. Following incubation, the labeled synaptosome suspensions were kept on ice.
Fluorescence polarization
For fluorescence measurements, 0.5-ml aliquots of each fluorescently labeled synaptosome suspension were centrifuged in a microcentrifuge (16,000 × g, 2 min). The synaptosomal pellet was resuspended in 2 ml of warm (37°C) Hepes buffer and allowed to equilibrate at 37°C for 10 min before fluorescence measurements were taken.
After control levels of fluorescence were determined, the agents were added and incubation was continued; fluorescence was determined 15 min later. Synaptosomes were preincubated for 5 min with or without protective agents before addition of chlordecone or Fe/ascorbate; fluorescence was then determined 10 min later. Chlordecone and α-tocopherol were added in dimethyl sulfoxide; control samples received an equivalent concentration of solvent (maximal final concentration 0.5% v/v). Corrections for light scattering (membrane suspension minus probe) were made. Fluorescence in the ambient medium (after pelleting the membranes) was negligible.
Fluorescence measurements were performed on a Farrand MK1 spectrofluorometer. Fixed excitation and emission polarization filters were used to measure fluorescence intensity both parallel (I∥) and perpendicular (I⊥) to the polarization phase of the exciting light. I_VV corresponds to both vertically polarized excitation and emission, while I_VH corresponds to vertically polarized excitation and horizontally polarized emission. Excitation and emission wavelengths of 360 nm and 430 nm, respectively, were used, with the band width of both monochromators at 10 nm. Cuvette temperature was maintained at 37°C with a circulating water bath. In measuring anisotropy, a correction factor (G) for instrument asymmetry was considered. The G factor is the ratio of the sensitivities of the detection system for vertically and horizontally polarized light: G = S_V/S_H [21], where S_V and S_H are the sensitivities of the emission channel for the vertical and horizontal components, respectively. The G factor was measured using horizontally polarized excitation. Its magnitude was sufficiently low and invariable that it was not taken into account. Fluorescence anisotropy (r) was determined [16] by the formula:

r = (I_VV − G·I_VH)/(I_VV + 2G·I_VH)

Total anisotropy was sometimes divided into static and dynamic elements by use of the equation formulated by Van der Meer et al. [22]:

r∞ = r0·(r_s)² / [r0·r_s + (r0 − r_s)²/m]

where r_s is the measured steady-state anisotropy, r0 is the maximal anisotropy, and r∞ is the static component. The dynamic component r_f is thus equal to r_s − r∞. A membrane order function S, analogous to that obtained from electron spin resonance studies, was also derived [23]: S² = r∞/r0. The maximal anisotropy (r0) of DPH was taken as 0.362 [16], and that of TMA-DPH as 0.39 [17].
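For concreteness, a minimal Python sketch of these anisotropy formulas is given below; the parameter m is carried through exactly as in the formula above, and the default m = 1 is a placeholder assumption, not a value from the paper.

    def anisotropy(I_vv, I_vh, G=1.0, r0=0.362, m=1.0):
        """Steady-state anisotropy and its static/dynamic decomposition.

        r_s   = (I_VV - G*I_VH) / (I_VV + 2*G*I_VH)
        r_inf = r0*r_s**2 / (r0*r_s + (r0 - r_s)**2 / m)   # Van der Meer et al.
        S^2   = r_inf / r0
        r0 defaults to the DPH value; use r0=0.39 for TMA-DPH.
        """
        r_s = (I_vv - G * I_vh) / (I_vv + 2.0 * G * I_vh)
        r_inf = r0 * r_s**2 / (r0 * r_s + (r0 - r_s)**2 / m)
        r_f = r_s - r_inf                 # dynamic component
        S = (r_inf / r0) ** 0.5           # membrane order function
        return r_s, r_inf, r_f, S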
Statistical analysis
Differences between groups were assessed by Fisher's least significant difference test after one-way analysis of variance. The acceptance level of significance was P < 0.05 using a two-tailed distribution.
Chemicals
The probe DPH and α-tocopherol succinate were obtained from Sigma Chemical Co. (St. Louis, MO). TMA-DPH was obtained from Molecular Probes (Junction City, OR). Chlordecone was obtained from Radian Corp. (Austin, TX). Ganglioside GM1 was kindly donated by Fidia Corp.
(a) Chlordecone
Chlordecone treatment of synaptosomes led to a decrease in anisotropy at each of the sites from which the two fluorescent probes were respectively reporting (Figs. 1 and 2). The observed changes were greatest in the DPH signal, reporting primarily from the membrane core.
Pretreatment of synaptosomes with 100 µM ganglioside GM1 was found to block the decrease in DPH and TMA-DPH anisotropy caused by chlordecone (Fig. 1). However, such an effect may have been merely additive, since 100 µM ganglioside GM1 itself increased membrane order. In some cases, synaptosomes were centrifuged down after incubation with ganglioside GM1 and resuspended prior to addition of chlordecone. This did not alter the results obtained, suggesting that these two compounds do not interact in solution. Other studies have similarly reported ganglioside-induced increases in membrane order [24] and increased fluorescence polarization [25,26]. For this reason, a concentration of ganglioside GM1 (10 µM) that did not alter the depolarization signal using either fluorescent probe was also used in the pretreatment of synaptosomes prior to the addition of chlordecone. This concentration of ganglioside GM1 prevented the reduction in TMA-DPH anisotropy caused by chlordecone. However, no similar protective effect was apparent in DPH anisotropy (Fig. 1).
Treatment of synaptosomes with 25 µM α-tocopherol did not significantly alter DPH anisotropy and depressed that of TMA-DPH. However, in neither case did this antioxidant compound change the subsequent synaptosomal response to chlordecone (Fig. 2).
(b) Methyl mercury
Incubation of synaptosomes with methyl mercuric chloride at levels ranging from 20 to 100 µM did not alter anisotropy values derived from the use of either probe (data not shown). The maximal concentrations of methyl mercury tested here were considerably higher than those previously shown to greatly elevate synaptosomal [Ca2+]i and fura-2 leakage under identical incubation conditions [9,27].
Oxidizing conditions
The exposure of synaptosomes to an oxidizing mixture containing iron (5 µM) and ascorbate increased the anisotropy value of both TMA-DPH (Fig. 3) and DPH (Fig. 4). This was especially pronounced for TMA-DPH, implying that the greatest changes were at the membrane surface. α-Tocopherol succinate treatment decreased TMA-DPH anisotropy in a dose-related manner (Fig. 3). When this treatment was followed by the addition of Fe/ascorbate, there was a progressive decrease in the Fe/ascorbate-induced anisotropy changes (Fig. 3). This largely reflected the sum of the changes induced by Fe/ascorbate or α-tocopherol acting separately. α-Tocopherol succinate is not readily oxidized prior to its enzymic de-esterification, and this minimizes the likelihood of a direct interaction between this compound and the Fe/ascorbate mixture in free solution. In the case of DPH, α-tocopherol alone had no influence on anisotropy (Fig. 4). However, an inhibition of oxidatively induced changes in anisotropy was found at all concentrations of α-tocopherol studied, 1-25 µM (Fig. 4). Thus, in this circumstance, α-tocopherol appeared to exert a truly protective effect in an interactive, non-additive manner. Ganglioside GM1 pretreatment did not affect this response to the Fe/ascorbate exposure. Although ganglioside GM1 alone elevated anisotropy, no evidence for interaction between GM1 and Fe/ascorbate was found with either DPH or TMA-DPH (data not shown). Synaptosomes were also incubated in a medium containing no added calcium salts. This low-calcium condition has been stated to increase oxidative stress [28,29] and to increase fura-2 leakage from loaded synaptosomes [27]. However, the anisotropy of DPH and TMA-DPH was not significantly altered by such treatment (data not shown).
The preincubation of synaptosomes with 10 µM deferoxamine completely blocked the effect of subsequent addition of Fe/ascorbate upon TMA-DPH anisotropy. Anisotropy changes were further evaluated in terms of dynamic and static components. This revealed that the major modulation was predominantly in r∞ rather than r_f, implying changes in membrane order rather than fluidity (Table I). Both Fe/ascorbate and chlordecone primarily affected membrane order (r∞), while the dynamic component (r_f) was altered in the opposite direction (Table I). α-Tocopherol treatment, in the presence or absence of Fe/ascorbate, altered r∞ in a manner similar to r_s, the total anisotropy (Fig. 5). However, α-tocopherol effected a concentration-dependent increase in r_f when Fe/ascorbate was present, trending toward the corresponding control value (Fig. 6).
Discussion
In the case of chlordecone, there appears to be a relation between the induction of synaptosomal leakiness, calcium elevation [10] and decreased anisotropy of the probes (Figs. 1 and 2). The suggestion of a relation between these phenomena is strengthened by examination of the effects of potentially protective agents: ganglioside GM1 prevents all of these changes, while α-tocopherol is not effective.
Methyl mercury also elevates synaptosomal leakiness and [Ca2+]i [9], but has no effect on anisotropy. This may be because methyl mercury has a much more specific locus of action than chlordecone, perhaps at an ion channel [2,30].
The protection afforded by α-tocopherol pretreatment against several of the consequences of synaptosomal exposure to oxidizing conditions also suggests that the parameters studied are related. These data imply that the plasma membrane may be a significant target of diverse neurotoxic agents. Increased leakage of the anion fura-2 appears to be correlated with a pronounced change, either negative or positive, in membrane anisotropy. However, transmembrane passage of the calcium cation seems reciprocally associated with the degree of membrane order. Restoration of anisotropy values by potentially protective agents seems to be accompanied by both reinstatement of normal cell permeability and of [Ca2+]i. Decreased membrane order caused by chlordecone (Fig. 1), ethanol and anesthetics [31] or toluene [32] is related to a greater [Ca2+]i; an increase in membrane rigidity caused by peroxidative events or by ganglioside GM1 occurs together with a somewhat diminished [Ca2+]i [27].
Chlordecone caused the greatest perturbation of DPH anisotropy, while Fe/ascorbate was most effective in altering TMA-DPH anisotropy. This may reflect the lipophilic nature of chlordecone and the water solubility of the oxidizing mixture. In the case of the agents potentially capable of preventing such membrane dislocations, no such relation between lipophilicity and the site of maximal effect could be established. Water-soluble GM1 predominantly influenced the DPH signal, while lipophilic α-tocopherol reduced TMA-DPH anisotropy. The critical feature of these 'protective' agents may be their chemical reactivity rather than the geometry of their intercalation into membranes.
Lipid peroxidation has been reported to decrease membrane order as determined by electron spin resonance, which primarily reports static structure [33]. However, using pyrene, which largely signals the dynamic components of membranes [34], peroxidation increases membrane rigidity [35]. This latter change has also been reported using fluorescence polarization [36]. These different types of analyses could account for several contradictions in the literature. The limiting fluorescence anisotropy, r∞, reflects the amplitude of probe oscillation, while r_f primarily signals the rotational state of the probe. An increasingly restricted fluorophore is able to rotate faster in view of its diminishing freedom of angular motion [22]. Limitation of the velocity of rotational movement (spin) may allow increased angular oscillation of the dye within the membrane (i.e., degrees of arc). This accounts for r∞ and r_f changing in opposite directions in response to a perturbing stimulus.
Although statistically significant, the observed changes in anisotropy were relatively small in absolute terms. As is clear from enzyme studies, minor variations in the physical form of a protein can have much greater functional effects. Thus, changes in biological activity often have a greater magnitude than the structural alterations that underlie such changes. A change in membrane microviscosity of around 5% can cause a 50% change in membrane permeability to cations [37]. The transmembrane fluxes of calcium, therefore, may be significantly affected by membrane anisotropy changes of the size reported here.
Several neurotoxic compounds have been reported to modulate the microviscosity of membranes, including those derived from nervous tissue [38]. In general, organic solvents, including ethanol and anesthetics, decrease order [39,40]. Heavy metal cations tend to increase membrane order [40,41], as does calcium [42,43]. Lipid peroxidation has been reported to increase the rigidity of phospholipid bilayers [44] and erythrocyte membranes [33]. The relation of such observed changes to toxic damage in the animal is difficult to establish. However, the amelioration of the disruption of membrane integrity by the protective agents described here supports the likelihood that our data are relevant to manifestations of neurotoxicity in vivo. Ganglioside GM1 and α-tocopherol pretreatment, at concentrations where they produce no detectable changes in membrane order, can block the effects of agents harmful to synaptosomes. This implies some degree of specificity in the protective process, perhaps because both the toxic and protective agents act at the same membrane location. α-Tocopherol is known to protect against several types of oxidatively induced damage to the central nervous system [45,46], and the concentration range of α-tocopherol used in this study is similar to the 14 µM value reported present in rodent brain [47]. Ganglioside GM1 pretreatment has also been reported effective in blocking expression of neurotoxicant-induced damage [48,49]. Exogenous ganglioside GM1 can be taken up by cells in culture and functionally incorporated into the plasma membrane [50], and has been shown to become bound to cell membranes following systemic injection [51]. However, the need for pretreatment with ganglioside prior to neurological insult may limit the therapeutic utility of such compounds.
General changes in membrane order may modulate the properties of specific neurotransmitter receptors [52,53] and the activity of enzymes activated by transmembrane signals [54]. Such apparently selective changes may then result in distinctive neurological dysfunction.
A Five Glutamine-Associated Signature Predicts Prognosis of Prostate Cancer and Links Glutamine Metabolism with Tumor Microenvironment
Glutamine has been recognized as an important amino acid that provides a variety of intermediate products to fuel biosynthesis. Glutamine metabolism participates in the progression of tumors via various mechanisms. However, glutamine-metabolism-associated signatures and their significance in prostate cancer are still unclear. In this current study, we identified five genes associated with glutamine metabolism by univariate and Lasso regression analysis and constructed a model to predict the biochemical recurrence-free survival (BCRFS) of PCa. Further validation of the prognostic risk model demonstrated a good efficacy in predicting the BCRFS in PCa patients. Interestingly, based on the CIBERSORTx, ssGSEA and ESTIMATE algorithm predictions, we noticed a distinct immune cell infiltration and immune pathway pattern between the two risk groups stratified by the risk model. Drug sensitivity prediction revealed that patients in the high-risk group were more suitable for chemotherapy. Last but not least, glutamine deprivation significantly inhibited cell growth in GLUL- or ASNS-knockdown prostate cancer cell lines. Therefore, we proposed a novel prognostic model using glutamine metabolism genes for PCa patients and identified a potential mechanism of PCa progression through glutamine-related tumor microenvironment remodeling.
Introduction
Prostate cancer (PCa) is the second most commonly diagnosed male malignancy and one of the leading causes of male cancer-related death worldwide [1]. Radical prostatectomy is the most effective method of curing localized PCa, and it significantly improves the postoperative survival of patients. However, more than one quarter of patients still experience biochemical recurrence (BCR) after surgery, which subsequently progresses to distant metastasis and PCa-related death [2]. Current predictions of PCa prognosis after curative intent are based on clinical and pathological features whose predictive efficacy remains unsatisfactory [3]. Therefore, it is crucial to predict and confirm the BCR status of patients in time in order to classify and manage BCR patients with individualized treatment plans.
As the most abundant and widely used amino acid in the human body, glutamine plays an important role in maintaining cellular viability. Recent studies reveal that the growth of many types of tumor is highly dependent on glutamine, including non-small cell lung cancer, breast cancer and glioblastoma [4]. Likewise, glutamine is also one of the key nutrients that drive PCa progression; patients with higher glutamine levels often have poorer prognoses [5]. A recent study reported that the survival of radiation-resistant PCa cells and stem cells relies on a large amount of glutamine, and that the cells undergo growth inhibition after glutamine deprivation [6]. Androgen receptor (AR) can promote the utilization of glutamine in PCa and further improve the survival of PCa cells [7]. Moreover, glutamine metabolism also plays a key role in the dynamics of the tumor microenvironment. Glutamine is essential for the immune function of macrophages, lymphocytes, neutrophils and other immune cells [8]. In the tumor microenvironment, both tumor cells and immune infiltrating cells undergo metabolic reprogramming [9]. For instance, tumor-associated macrophages can change the glutamine metabolic state in the tumor microenvironment by secreting IL-23, which promotes the proliferation and activation of regulatory T cells (Tregs) and subsequently inhibits the activity of anti-tumor immune cells [10]. Last but not least, tumor-associated fibroblasts (CAFs) are able to compensate for the large demand for glutamine from tumor cells by upregulating the synthesis of glutamine [11]. Because glutamine plays an important role in tumor metabolism, we speculate that glutamine-metabolism-based risk models may be helpful to predict and explore the prognosis of prostate cancer. Exploring glutamine metabolism may also help elucidate the mechanism of PCa progression and discover potential therapeutic targets and biomarkers.
In this study, we successfully established a glutamine-metabolism-related model that can predict the risk of BCR in patients with PCa. First, we downloaded transcriptomic data of PRAD from the TCGA database to comprehensively screen glutamine-related genes and analyze their prognostic value in PRAD. Subsequently, predictive models were constructed and validated in the GEO cohort. We also analyzed the differences in immune infiltration, drug sensitivity and mutation status among patients in different risk groups. Finally, we validated the expression of key genes in clinical samples by qPCR, and in vitro assays were carried out to evaluate the potential biological function of the key glutamine-related genes in PCa cells.
Collection of PCa Materials
The training-set gene expression profiles of 499 PCa samples and 52 controls were downloaded from the TCGA database (https://portal.gdc.cancer.gov, accessed on 1 April 2022). The validation-set (GSE70769) data of PCa were obtained from the Gene Expression Omnibus database (GEO: https://www.ncbi.nlm.nih.gov/geo/, accessed on 17 May 2022), which was used for the risk model validation. The corresponding clinical information regarding TCGA-PRAD patients was downloaded from the University of California Santa Cruz Xena (UCSC Xena) database (https://xenabrowser.net/datapages/, accessed on 1 April 2022). The corresponding mRNA expression data were analyzed with the "limma" package of R software (version 3.6.3, created by Robert Gentleman and Ross Ihaka, Auckland, New Zealand) to obtain the differentially expressed genes (DEGs). The screening conditions for DEGs were |log2 Fold Change (FC)| > 1 and adjusted (adj.) p < 0.05. The glutamine-related genes were then downloaded from the GSEA database.
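As an illustration, the DEG screen described above could be scripted in R roughly as follows; this is a minimal sketch assuming a normalized log2 expression matrix `expr` (genes × samples) and a `group` label vector, not the authors' actual code.

```r
library(limma)

# design matrix contrasting tumor vs. normal samples
design <- model.matrix(~ 0 + factor(group, levels = c("normal", "tumor")))
colnames(design) <- c("normal", "tumor")

fit <- lmFit(expr, design)                                      # linear model per gene
fit <- contrasts.fit(fit, makeContrasts(tumor - normal, levels = design))
fit <- eBayes(fit)                                              # moderated t-statistics

tab  <- topTable(fit, coef = 1, number = Inf, adjust.method = "BH")  # BH-adjusted p-values
degs <- tab[abs(tab$logFC) > 1 & tab$adj.P.Val < 0.05, ]        # thresholds from the text
```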
Construction of Prognostic Model of Glutamine-Related Gene Characteristics
The differentially expressed glutamine-related genes (DEGRGs) in PCa were obtained by intersecting the PCa differential genes with the glutamine-related genes. Univariate Cox analysis and lasso regression analysis (R packages: "survival" and "glmnet") were used to screen the differential glutamine-related genes associated with the prognosis of PCa patients. First, we performed an independent prognostic analysis on each gene and screened the genes with p < 0.05 that were significant for prognosis; we then performed lasso regression analysis on these significant differential genes and calculated the risk coefficient (coefi) of each one. Finally, we obtained the risk coefficients of five differential glutamine-related genes. The sum of the expression of each gene in each sample multiplied by the corresponding risk coefficient was used as the risk score of each patient. Then, the median value of the patients' risk scores was taken as the cutoff point, and the patients were divided into high-risk and low-risk groups. Kaplan-Meier survival analysis was used to compare the biochemical recurrence survival of patients in the high-risk and low-risk groups, and the area under the curve was calculated using the receiver operating characteristic (ROC) curve (R package: "timeROC") to evaluate the prediction ability of the model.
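A hedged sketch of this two-step selection is given below; the object names (`expr` as a samples × genes matrix of the DEGRGs, `clin` holding BCR-free survival time and status) are placeholders, not the authors' scripts.

```r
library(survival)
library(glmnet)

# step 1: univariate Cox per gene, keep genes with p < 0.05
keep <- sapply(colnames(expr), function(g) {
  fit <- coxph(Surv(clin$bcr_time, clin$bcr_status) ~ expr[, g])
  summary(fit)$coefficients[, "Pr(>|z|)"] < 0.05
})

# step 2: lasso Cox regression with ten-fold cross-validation
y     <- cbind(time = clin$bcr_time, status = clin$bcr_status)
cvfit <- cv.glmnet(as.matrix(expr[, keep]), y,
                   family = "cox", alpha = 1, nfolds = 10)
coef(cvfit, s = "lambda.min")   # non-zero entries give the final genes and their coefi values
```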
Construction of PPI Network
To explore the functional linkages of the DEGRGs with each other, we used the Search Tool for the Retrieval of Interacting Genes (STRING) database (URL: http://string-db.org, accessed on 1 April 2022). Cytoscape software was used to visualize the PPI network.
Validation of External Data Sets
In order to test whether this model has the same prediction ability in other data sets, we verified the risk scoring model using the GEO data set (GSE70769), following the same method as above. The sum of the expression value of each of the above five genes in each sample multiplied by its coefi value was used as the risk score of the patient, and the patients were divided into high-risk and low-risk groups using the median value as the cutoff point. Kaplan-Meier survival analysis was used to compare the biochemical recurrence survival of patients in the high-risk and low-risk groups. The ROC curve was used to calculate the area under the curve to evaluate the prediction ability of the model.
Univariate Cox Analysis and Multivariate Analysis Identified Risk Models
Univariate Cox analysis and multivariate analysis (R package: "survival") were used to identify whether the risk score could serve as an independent prognostic factor for BCRFS in PCa patients.
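A sketch of this independence check is shown below; the clinical covariate column names in `clin` are hypothetical, standing in for the variables reported in the Results.

```r
library(survival)

# multivariate Cox: risk score entered alongside standard clinicopathological covariates
multi <- coxph(Surv(bcr_time, bcr_status) ~ risk_score + gleason + psa + pT + pN,
               data = clin)
summary(multi)  # a significant risk-score term supports independent prognostic value
```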
Analysis of the Immune Infiltrate Landscape
To further analyze the molecular characteristics of the high-risk and low-risk patient groups, immune cell infiltration patterns and immune status were assessed by the CIBERSORTx, ssGSEA and ESTIMATE algorithms. The CIBERSORTx database is an online analytical tool that can estimate the abundance of 22 classes of immune cell infiltrates in a sample based on gene expression data. The ssGSEA algorithm from the "GSVA" package was used to estimate immune infiltration score levels in the tumor microenvironment.
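A minimal sketch of the scoring step is shown below, assuming `immune_sets` is a named list of immune cell/pathway marker genes and `expr` is the expression matrix; the classic GSVA call style and the file-based ESTIMATE workflow are used, and the object and file names are placeholders.

```r
library(GSVA)
# ssGSEA enrichment score per immune signature per sample (older GSVA interface)
ss_scores <- gsva(as.matrix(expr), immune_sets, method = "ssgsea")

library(estimate)
# ESTIMATE reads and writes GCT files on disk
filterCommonGenes(input.f = "expr.txt", output.f = "expr.gct", id = "GeneSymbol")
estimateScore(input.ds = "expr.gct", output.ds = "estimate_scores.gct",
              platform = "illumina")   # yields stromal, immune and ESTIMATE scores
```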
Drug Sensitivity Analysis
IC50 (half maximal inhibitory concentration) is an important indicator for evaluating drug efficacy or sample treatment response. We predicted the chemotherapy response of each sample based on the largest publicly available pharmacogenomics database [Genomics of Drug Sensitivity in Cancer (GDSC), https://www.cancerrxgene.org/, accessed on 21 April 2022]. The prediction process was implemented with the R package "pRRophetic".
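A sketch of the prediction call follows; `pRRopheticPredict` trains a model on GDSC cell-line data and projects it onto the tumor expression matrix `expr` (an assumed object name; the drug name is shown only as an example).

```r
library(pRRophetic)

# predicted log(IC50) per patient sample for one drug
pred_ic50 <- pRRopheticPredict(testMatrix   = as.matrix(expr),
                               drug         = "Docetaxel",
                               tissueType   = "all",
                               batchCorrect = "eb")

# compare predicted sensitivity between the two risk groups
wilcox.test(pred_ic50 ~ risk_group)
```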
CellMiner (https://discover.nci.nih.gov/cellminer/home.do, accessed on 21 April 2022) is a database and query tool designed for the cancer research community to facilitate the integration and study of molecular and pharmacological data. We applied the CellMiner database to determine whether these risk DEGRGs can predict anticancer drug sensitivity.
Only FDA-approved drugs and drugs in clinical trials were included in the analysis. Spearman correlation analysis was performed to determine the correlation between the expression levels of glutamine-related genes and drug sensitivity.
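For illustration, a single gene-drug test against the NCI-60 activity data could be run as follows; the matrix names and drug label are hypothetical.

```r
# Spearman correlation between gene expression and drug activity across NCI-60 lines
ct <- cor.test(nci60_expr["GLUL", ], nci60_drug["drug_x", ], method = "spearman")
c(rho = unname(ct$estimate), p = ct$p.value)
```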
Enrichment Analysis of Pre-Defined Gene Sets Based on the Low and High-Risk Model
Gene Set Enrichment Analysis (GSEA) was implemented using the GSEA software (version 4.2.3). Specifically, we first divided the 422 samples into high-risk and low-risk groups through the median value of the risk model score and matched the expression matrix of each sample. Then, we uploaded the expression matrix to the GSEA software and chose the Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) gene sets as the reference gene sets.
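The analysis here used the GSEA desktop application; purely as an illustration, an equivalent pre-ranked analysis could be scripted with the Bioconductor fgsea package (a substitute tool, not the one used in the paper), assuming a local MSigDB KEGG gene-set file and the `tab` results from the DEG step above.

```r
library(fgsea)

# rank genes, e.g. by the limma log2 fold change between groups
ranks <- sort(setNames(tab$logFC, rownames(tab)), decreasing = TRUE)

kegg <- gmtPathways("c2.cp.kegg.v7.5.1.symbols.gmt")   # assumed local file path
res  <- fgsea(pathways = kegg, stats = ranks, minSize = 15, maxSize = 500)
head(res[order(res$padj), ])   # top enriched pathways by adjusted p-value
```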
Patient Preparation
A total of eight PCa tissues and seven benign prostate hypertrophy (BPH) tissues were collected by surgery or needle biopsy at the First Affiliated Hospital of Kunming Medical University. All participants provided written informed consent prior to the study. The experiment was approved by the Institutional Ethics Committee of the First Affiliated Hospital of Kunming Medical University. No PCa patients received any treatment prior to surgery. Finally, the tissues were frozen in liquid nitrogen and then stored at −80 °C pending further experiments.
Total RNA Extraction and Quantitative Real-Time Polymerase Chain Reaction (qRT-PCR)
The eight PCa and seven normal prostate tissue samples were lysed with TRIzol reagent (yanshunbio, Shenzhen, China), and total RNA was isolated following the manufacturer's instructions. The concentration and purity of the total RNA solution were quantified using the NanoDrop 5000 spectrometer (BioTeKe Corporation, Beijing, China). Before qRT-PCR, the isolated RNA was reverse-transcribed to cDNA. Reverse transcription was performed in a total volume of 20 µL containing 1000 ng of total RNA and 4 µL of 5× PrimeScript RT Master Mix; the RNA volume was calculated from its concentration, and the reaction was brought to 20 µL with enzyme-free water. After gentle mixing, the reverse transcription reaction was carried out under the following conditions: 37 °C for 15 min, 85 °C for 5 s, and finally storage at 4 °C. The qRT-PCR reaction mixtures consisted of 2 µL of cDNA solution, 10 µL of TB Green Premix Ex Taq II (Tli RNaseH Plus) (2×), 0.8 µL each of forward and reverse primer, 0.4 µL of ROX Reference Dye II (50×) and 6 µL of RNase-free dH2O. The PCR reaction was performed under the following conditions: 40 cycles, each involving incubation at 95 °C for 65 s and 60 °C for 94 s. The forward and reverse primers of the 5 key genes and GAPDH are shown in Supplementary Table S1. All primers were synthesized by Servicebio (Servicebio, Wuhan, China). The expression of these genes was normalized to the expression of GAPDH, and the relative expression of the 5 key mRNAs was determined using the 2^−ΔΔCt method.
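The relative-quantification arithmetic can be written out as a short helper; this is a sketch, and the vector names are placeholders.

```r
# 2^-ddCt relative expression: Ct values for the target gene and GAPDH,
# in tumor samples and in the BPH calibrator group
rel_expr_ddct <- function(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl) {
  dct      <- ct_target - ct_gapdh            # normalize target to GAPDH
  dct_ctrl <- ct_target_ctrl - ct_gapdh_ctrl  # same for the calibrator samples
  2^-(dct - mean(dct_ctrl))                   # fold change vs. calibrator mean
}
```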
Cell Culture
DU145 and 22Rv1 human prostate cancer cell lines were obtained from the Kunming Animal Research Institute (Kunming, China). DU145 and 22Rv1 cells were cultured in RPMI-1640 medium (Gibco, New York, NY, USA) with 10% fetal bovine serum (FBS, BI, Haemek, Israel) and 100 U/mL penicillin and streptomycin. The cells were grown in an incubator with 5% CO2 at 37 °C.
Cell Proliferation Assays
Cell proliferation was determined with CCK8 reagent. Cells transfected with the control siRNA, ASNS siRNA (si-ASNS) or GLUL siRNA (si-GLUL) were trypsinized from the 6-well plates at 24 h after transfection, and 2000 cells in a volume of 100 µL of complete medium or glutamine-free medium were seeded into each well of a 96-well plate, in triplicate wells. One plate of cells was cultured with complete medium, and the other was cultured with glutamine-free medium. The day of seeding was treated as the 0 h time point. CCK8 assays were performed at 24 h intervals: 10 µL of CCK8 reagent was added to each well and incubated for 1.5 h under normal cell culture conditions. Absorbance was measured at 450 nm with a microplate reader.
Screening and Identification of Genes Related to Differential Glutamine Metabolism in PCa
First, samples with incomplete clinical information were excluded from further analysis; the clinical characteristics of the 422 PCa samples are listed in Table 1. A total of 5926 DEGs were identified between PCa and normal prostate tissue after differential expression analysis with the R package "limma". To identify genes related to glutamine metabolism, 91 glutamine-related genes were gathered from the GSEA database (Supplementary Table S3). Then, 40 DEGRGs of PCa were obtained by taking the intersection of the TCGA DEGs and the glutamine-related genes (Figure 1A). The correlation heatmap showed the expression differences of the 40 glutamine-related DEGs between prostate cancer tissue and normal prostate tissue (Figure 1B). Finally, we determined the protein interactions among the 40 glutamine-related DEGs through PPI network analysis (Figure 1C).
Screening Glutamine Metabolism-Related Genes and Construction of Risk Scoring Model
First of all, we found nine genes that were significantly related to BCR-free survival (BCRFS) by univariate Cox analysis (Figure 2A). The five key genes (ATCAY, GLUL, ASNS, CAD, FPGS) and their correlation coefficients were identified by lasso regression (ten-fold cross-validation) (Figure 2B,C). The correlation analysis revealed that all five genes were independent risk factors of PCa (Figure 2D,E). Then, based on the correlation coefficients and expression values of the five genes (Table 2), a prognostic model for predicting biochemical recurrence in each patient was constructed as follows: risk score = (−0.204340092 * ATCAY) + (−0.130197387 * GLUL) + (0.275033455 * ASNS) + (0.63782178 * CAD) + (0.679691817 * FPGS). Next, PCa patients in the TCGA cohort were divided into two groups (high-risk and low-risk) according to the median value of the risk model score.
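Applied directly, the published formula amounts to a weighted sum over the five genes; a sketch with the coefficients quoted above is shown below, with `expr` again assumed to be a samples × genes expression matrix.

```r
coefs <- c(ATCAY = -0.204340092, GLUL = -0.130197387, ASNS = 0.275033455,
           CAD   =  0.63782178,  FPGS =  0.679691817)

risk_score <- as.vector(as.matrix(expr[, names(coefs)]) %*% coefs)
risk_group <- ifelse(risk_score > median(risk_score), "high", "low")  # median cutoff
```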
As a result, we found that the proportion of PCa patients with biochemical recurrence was significantly higher in the high-risk group than in the low-risk group. Consistent with our findings, we also found that ATCAY and GLUL were highly expressed in the low-risk group, while ASNS, CAD and FPGS were upregulated in the high-risk group (Figure 2F). Subsequently, the Kaplan-Meier survival curve showed that the BCRFS of patients in the high-risk group was significantly shorter than that of those in the low-risk group (p < 0.05) (Figure 2G). Additionally, the areas under the 1-year and 3-year time-dependent ROC curves were 0.697 and 0.72, indicating the good effectiveness of the prognostic risk score model at the 1-year and 3-year points. However, the AUC of the 5-year time-dependent ROC curve was 0.642, showing a poorer effectiveness of the prognostic risk score model at the 5-year point (Figure 2H).
Clinical Characteristics and Survival Analysis in Different Risk Groups
We conducted a correlation analysis between clinical characteristics and biochemical-recurrence-free survival in PCa patients from the TCGA cohort. We found that patients with Gleason score > 7, N1 or stage ≥ T3 had shorter biochemical-recurrence-free survival (Figure 3A). Additionally, the risk score was significantly higher in patients with biochemical recurrence, Gleason score > 7, N1 and stage ≥ T3 (p < 0.05) (Figure 3B). Moreover, we also demonstrated that the predictive value of the risk score was more efficient in PCa patients with Gleason score > 7, N0 stage, stage ≥ T3 and age < 65, while the BCRFS was not statistically different between risk groups among PCa patients with Gleason score < 7, N1 stage, T2 stage and age ≥ 65 (Figure 3C-F).
Figure 2. Screening of prognosis-associated genes by lasso regression analysis. (C) Variables with non-zero coefficients; from top to bottom, the lines represent FPGS (yellow), CAD (pale purple), ASNS (pale gray), PFAS (pale blue), NIT2 (pale yellow), PYCR3 (pale green), GLUL (pale red) and ATCAY (blue). To ensure the accuracy of the model, lambda.min (0.0120) was chosen as λ to construct the model and screen the genes.
Verification of Risk Scoring Model by External Cohorts
Since we evaluated the predictive efficacy of the risk model using the PRAD cohort in the TCGA database, we further verified the predictive power of this model by downloading the PCa dataset (GSE70769) from the GEO database. Similarly, we divided all patients into high-risk and low-risk groups using the median value according to the risk score of each patient. As in the TCGA training set, a higher proportion of PCa patients with biochemical recurrence was observed in the high-risk group than in the low-risk group. Moreover, the expression pattern of ATCAY, GLUL, ASNS, CAD and FPGS in the high-risk and low-risk groups was the same as in the training set (Figure 4A). Similarly, Kaplan-Meier survival analysis showed that the BCRFS of patients in the high-risk group was significantly shorter than that of those in the low-risk group (p < 0.05) (Figure 4B). The areas under the 1-year, 3-year and 5-year time-dependent ROC curves were 0.643, 0.699 and 0.690, respectively (Figure 4C). Combined with the above results, this model is shown to have good prediction ability.
The Glutamine-Related Risk Score Was an Independent Predictor of Biochemical Recurrence of PCa
First, we applied univariate analysis and found that the risk score, Gleason score, PSA value and pathological T and N stages were significantly associated with biochemical recurrence of PCa (p < 0.05, Figure 5A). Next, multivariate Cox regression analysis found that both the Gleason score and the risk score were independent prognostic factors for biochemical recurrence of PCa (p < 0.05, Figure 5B). These results indicate that the risk score model constructed with the 5 glutamine-related genes was an independent risk factor for the BCR of PCa patients.
Immunosuppressive Tumor Microenvironment and Pathways Were Enriched in High Risk Group
We then analyzed the differentially infiltrated immune cell types by CIBERSORTx and identified that naive B cells, resting mast cells, and neutrophils were more enriched in the low-risk group, whereas Tregs, M2 macrophages, and activated NK cells were significantly enriched in the high-risk group (Figure 6A). We also calculated the stromal score, ESTIMATE score, and immune score with the ESTIMATE algorithm and discovered that these scores were significantly lower in the high-risk group compared with the low-risk group, whereas tumor purity was significantly higher in the high-risk group compared with the low-risk group (Figure 6B). We subsequently evaluated the association between immune activity and risk scores using single sample gene set enrichment analysis (ssGSEA). Interestingly, we also observed that patients with high risk scores had lower type II IFN response scores than those with low risk scores (Figure 6C). Finally, we noticed that the proportion of M2 macrophages was positively correlated with Gleason score, ASNS, CAD, FPGS and risk score. Similarly, the Tregs fraction was also positively correlated with Gleason score, negatively correlated with ATCAY (p < 0.05), and not significantly correlated with risk score (p > 0.05) (Figure 6D). Collectively, these results show that patients with high glutamine metabolic signatures might experience immunosuppressive effects through various mechanisms. The results also indicate that abnormal glutamine metabolism may contribute to the malignant progression of PCa through modification of the tumor microenvironment, especially through altering the infiltration of immune-related cells and pathways.
Mutational Landscape and the Potential Molecular Mechanisms in High-and Low-Risk Groups of PCa
We analyzed tumor mutational burden (TMB) levels in patients from different risk groups and found that patients in the high-risk group had significantly higher TMB levels than those from the low-risk group, and the risk score was positively associated with TMB (Figure 7A). We also determined the level of microsatellite instability (MSI) scores in the high-risk and low-risk groups; similar to TMB, MSI scores in the high-risk group were significantly higher than those in the low-risk group, and the risk score was positively associated with MSI (Figure 7B). Subsequently, the top 20 most frequently mutated genes were analyzed in the high-risk-group patients; mutations were detected in 148 out of 211 patients in the high-risk group, and the three genes with the highest mutation rates were TP53 (22.3%), TTN (20.3%) and SPOP (12.8%) (Figure 7C). By contrast, in the low-risk group, 116 out of 211 patients harbored genetic mutations, including in SPOP (23.3%), TP53 (14.7%) and TTN (12.9%) (Figure 7D). It is worth noting that TP53 is a typical tumor suppressor, and its mutation is one of the common factors in tumorigenesis. TP53 mutations are even more directly associated with advanced prostate cancer and enhance the aggressiveness of prostate cancer [12,13]. Moreover, SPOP mutation is a common mutation pattern in prostate cancer, accounting for about 15% of primary prostate cancers [14]. Interestingly, the frequency of SPOP mutations was significantly higher in primary prostate cancers than in metastatic prostate cancers [15]. This is consistent with our result that patients in the low-risk group had a higher frequency of SPOP mutations. Furthermore, single-sample GSEA was used to analyze the potential molecular mechanisms between the different risk groups. Interestingly, pathways associated with the repair of DNA damage were enriched in the high-risk group, for example, "Base excision repair (BER)", "Nucleotide excision repair (NER)", "Homologous recombination (HR)" and "DNA mismatch repair (MMR)" (Figure 7E-H). Other pathways were mainly related to metabolism, such as "Pyrimidine metabolism", "Glyoxylate and dicarboxylate metabolism" and "Oxidative phosphorylation" (Supplementary Figure S1).
Drug Sensitivity Analysis
In addition to surgical treatment modalities, endocrine therapy and chemotherapy are both systemic treatments for patients with PCa. We calculated individual drug half maximal inhibitory concentrations (IC50) for each sample in the high-risk and low-risk groups using the R package "pRRophetic". We found that the IC50 of the antiandrogen bicalutamide was significantly lower in the low-risk group than in the high-risk group (Figure 8A). In addition, we noted that patients in the high-risk group appeared to derive better survival benefit from chemotherapeutic agents, such as docetaxel, doxorubicin, etoposide and mitomycin C (Figure 8B-E). Next, we evaluated the correlation between the drugs in the CellMiner database and the five DEGRGs by Spearman correlation analysis, and the 16 drugs with the greatest correlation with these 5 DEGRGs were retained (Figure 8F). Collectively, our data demonstrate that the glutamine-related risk signature could be applied as a useful clinical parameter to choose proper treatment for PCa patients.
Figure 8. Drug sensitivity analysis in the different risk groups. (A-E) Differences in the IC50 of bicalutamide, docetaxel, doxorubicin, etoposide and mitomycin C between the high-risk and low-risk groups. (F) Anticancer drug sensitivity analysis of the 5 genes. ** p < 0.01, *** p < 0.001.

Validation of 5 Key Genes Expression Levels in PCa Clinical Samples
We validated the expression levels of the 5 prognostic genes in the TCGA-PRAD cohort (Supplementary Figure S2) and found that both ATCAY and GLUL were expressed at lower levels in the tumor tissues than in normal prostate tissues, while ASNS, CAD and FPGS were overexpressed in the prostate cancer group. We further validated the expression of these 5 genes in clinical samples of PCa (n = 8) and benign prostatic hypertrophy (n = 7) by qPCR. We revealed that the expression of ATCAY and GLUL was significantly downregulated, whereas CAD and FPGS were upregulated in PCa samples. These results were consistent with our bioinformatics results (Figure 9A,B,D,E). However, the expression of ASNS was not statistically different between PCa and normal samples (Figure 9C). This outcome may result from the limited sample size.
Silencing of ASNS and GLUL in PCa Cell Lines Induced Tumor Growth Inhibition under Glutamine Ablation
As ASNS and GLUL are abundantly expressed in prostate tissue, we sought to determine their biological function in PCa cells. We knocked down the expression of ASNS and GLUL in the DU145 and 22Rv1 cell lines by siRNA. To confirm the knockdown efficacy, qPCR assays were conducted (Figure 10A,B). Furthermore, cell growth was evaluated by CCK8 proliferation assay. We observed that knocking down either ASNS or GLUL did not significantly alter proliferation in either the DU145 or 22Rv1 cell line (Figure 10C,D). Surprisingly, we also found that, under the glutamine-deprived condition, PCa cells displayed significant growth inhibition after ASNS or GLUL knockdown (Figure 10E,F). Our results suggest that PCa cells are highly dependent on both exogenous and endogenous glutamine, and that targeting glutamine metabolism may be effective in inhibiting PCa growth.
Figure 10. Cell viability of 22Rv1 and DU145 cells after siRNA knockdown of GLUL or ASNS. (A,B) Knockdown efficiency of siRNA-GLUL and siRNA-ASNS in the DU145 and 22Rv1 cell lines by qPCR. (C,E) CCK8 proliferation assay of the untreated groups. (D,F) CCK8 proliferation assay of the glutamine-deprived groups. * p < 0.05, ** p < 0.01, *** p < 0.001, **** p < 0.0001; ns, no significant difference.
Discussion
Metabolic reprogramming is considered to be one important hallmark of tumorigenesis and tumor development [16]. Unlike normal cells, tumor cells depend on the Warburg effect and a high glutamine metabolic state to meet their high energy requirements [17]. Glutamine plays a critical role in multiple biological activities, including tumor growth and metastasis. For example, glutamine participates in the tricarboxylic acid (TCA) cycle as an intermediate, helps to construct purine and pyrimidine bases, and is used to synthesize glutathione to promote the antioxidant capacity of cancer cells [18]. More importantly, glutamine not only influences the development of tumor cells but also regulates the metabolic state throughout the tumor microenvironment. In the TME, tumor cells take up the maximum amount of glutamine to meet their growth consumption, resulting in a lack of glutamine available to immune cells and a reduced anti-tumor immune response [19]. Moreover, glutamine metabolites can also promote tumor progression by inhibiting glucose-dependent differentiation of macrophages as well as T cells, thereby disrupting the function of immune cells [20]. Therefore, targeting glutamine metabolism has been proposed as a potential approach for cancer treatment.
The occurrence of biochemical recurrence in patients with PCa has been an intractable clinical problem. Biochemical recurrence occurs in approximately 20-40% of patients after radical prostatectomy or radiotherapy and is associated with poor clinical outcomes [21]. Although a variety of treatment options are available to improve the prognostic outcome of PCa patients [22], patients with biochemical recurrence are likely to develop distant metastasis, which eventually leads to cancer-related death [3]. In clinical practice, surgical margins, Gleason score, PSA and seminal vesicle invasion are useful prognostic factors of BCR in PCa patients [23]. Therefore, accurate prediction of BCR is of great importance for the long-term survival of PCa patients. In this study, we identified a glutamine metabolic signature consisting of 5 genes that could predict the prognosis, especially BCRFS, of PCa patients. The univariate and multivariate Cox regression analyses also showed that the signature was an independent risk factor of BCRFS in PCa. However, multivariate Cox regression analysis showed that the HR for the prediction-model-generated risk score is relatively low (close to 1), which could be a flaw in this study. Further training and validation of this model with a larger sample size may be a solution. In the present study, the model demonstrated good efficacy and accuracy in both the TCGA and GEO cohorts. As a result, early intervention and individualized treatment strategies may be developed according to the risk stratification. Currently, BCRFS prediction models based on other characteristics have also been established. For example, a BCRFS model of PCa was constructed from differential glycolysis-related gene characteristics [24]. A ferroptosis-related prognostic model for BCRFS was also reported, and differences in immune infiltration in the tumor microenvironment under different molecular clusters were observed [25]. In addition, a recent study reported a long non-coding RNA-based molecular feature model to predict BCRFS [26]. Compared with these molecular classifiers, our glutamine-metabolism-based genetic signature demonstrated comparable predictive efficacy in predicting the BCRFS of PCa patients. Collectively, this glutamine-metabolism-related risk signature may serve as a useful parameter for stratifying patients during clinical decision making.
In this present study, the prognostic model was constructed with five genetic features: Kinesin Light Chain Interacting Caytaxin (ATCAY), Glutamate-Ammonia Ligase or Glutamine Synthetase (GLUL), Asparagine Synthetase (ASNS), Carbamoyl-Phosphate Synthetase 2, Aspartate Transcarbamylase, and Dihydroorotase (CAD) and Folylpolyglutamate Synthase (FPGS). Some of these genes are important regulators of cancer progression. For instance, GLUL catalyzes the synthesis of glutamine from glutamic acid and ammonia and plays an important role in maintaining acid-base homeostasis, cell signaling and growth, ammonia detoxification and pathological angiogenesis [27,28]. Its overexpression can be used as an early marker of hepatocellular carcinoma [29]. Enhanced expression of GLUL is also related to the malignant progression of pancreatic cancer, so targeting GLUL can effectively reduce the survival of pancreatic cancer cells [30]. Moreover, ASNS is highly expressed in PCa, especially in castration-resistant prostate cancer (CRPC) patients, and inhibition of ASNS can lead to decreased viability of PCa cells. As a result, targeting ASNS has been proposed as a potential strategy to treat CRPC [31]. Importantly, CAD upregulation in PCa promotes androgen receptor (AR) nuclear translocation and transcriptional activity, thereby enhancing the metastatic capacity and recurrence risk of PCa [32]. In our current model, ATCAY and GLUL are highly expressed in low-risk patients and patients without biochemical recurrence. On the contrary, the expression of ASNS, CAD and FPGS was upregulated in high-risk patients and patients with biochemical recurrence. Combined with the previous data, we believe that ATCAY and GLUL may be protective factors in PCa, while ASNS, CAD and FPGS act as unfavorable factors that promote the occurrence and development of PCa. Unfortunately, ATCAY, GLUL and FPGS have not been studied in depth in PCa, so evidence of their mechanisms remains limited.
The tumor microenvironment (TME) is one of the most important factors promoting the malignant progression of solid tumors. A large number of immune cells and cytokines in the microenvironment constitute an immunosuppressive environment that promotes tumorigenesis, metastasis, and therapeutic resistance of malignancies [33]. Normally, M2 macrophages have been considered tumor-associated macrophages (TAMs) that support the development of numerous tumors [34]. As an important immune component of the prostate cancer tumor microenvironment, the total amount of TAM infiltration and its subtype are closely related to the clinical outcome and pathological features of prostate cancer patients, and prostate cancer patients with a larger amount of infiltrating TAMs often have a worse prognosis [35][36][37]. Tregs can inhibit anti-tumor immune responses and promote the formation of an immunosuppressive microenvironment, which, in turn, promotes cancer progression. Tregs tend to be highly infiltrative in tumor tissues, including hepatocellular carcinoma, lung cancer, pancreatic cancer, breast cancer and prostate cancer, and are directly related to poor prognosis [38,39]. It has been reported that Tregs can modulate the direct development of post-atrophic hyperplasia (PAH) into prostate cancer and suppress the anti-tumor immune response before the primary tumor is formed [40]. In addition, glutamine metabolism can also participate in the remodeling of the TME through multiple mechanisms. For example, tumor cells can compete with immune cells for glutamine uptake and reduce the viability of immune cells to promote tumor progression. Glutamine also alters the metabolism of immune cells in the tumor microenvironment and regulates the differentiation and function of immune cells in many ways. For instance, M2 macrophages consume more glutamine intracellularly than M1 macrophages, and glutamine also promotes macrophage polarization to the M2 type by modifying the corresponding gene expression [19]. Additionally, PCa cells can alter the phenotype of recruited macrophages by releasing chemokines to polarize them toward a pro-tumor macrophage mixed state [41]. Notably, in the current study, we found a significantly higher infiltration of M2-type macrophages and Tregs in the high-risk group. Available data suggest that Tregs and M2 macrophages are important regulators in driving the malignant progression of PCa and are associated with increased risk of biochemical recurrence of PCa [37]. Finally, we also found that patients in the high-risk group had higher tumor purity scores, but lower stromal, immune and ESTIMATE scores, indicating a more intense immunosuppressive effect. As a result, we proposed a potential mechanism in which aberrant glutamine metabolism may contribute to the progression of PCa by modifying the TME, especially immune cell infiltration. However, future mechanistic study is warranted.
Pembrolizumab, an anti-programmed cell death protein 1 (PD-1) antibody, is approved by the FDA for the treatment of microsatellite instability-high (MSI-H) and TMB-high (TMB-H) tumors [42,43]. Available research data indicate that patients with TMB-H and MSI-H scores may benefit from immunotherapy [44,45]. A growing number of studies have shown that TMB can be used as a predictive marker of response to immune checkpoint inhibitors (ICIs) in a variety of tumors, including non-small cell lung cancer and melanoma [43]. Deficient mismatch repair (dMMR)/microsatellite instability (MSI) usually occurs in colorectal cancer and can be detected in about 15% of colorectal cancer patients [46]. Although MSI is uncommon in prostate cancer, with an incidence of about 3.1%, it is important in guiding the clinical treatment of prostate cancer [47]. In our study, many pathways related to the repair of DNA damage were enriched in the high-risk group by GSEA analysis, for example, "Base excision repair (BER)", "Nucleotide excision repair (NER)", "Homologous recombination (HR)" and "DNA mismatch repair (MMR)", which may result in mutations in high-risk-group patients. We observed that patients with high risk scores tended to have higher TMB and MSI scores, which indicates that patients with high risk scores would be more sensitive to immunotherapy. Additionally, we observed that patients in the high-risk group appeared to derive better survival benefits from chemotherapeutic agents such as docetaxel, doxorubicin, etoposide and mitomycin C. Therefore, immunotherapy plus chemotherapy may be a new option for PCa patients with a high glutamine profile.
Although our study achieved some interesting results, there are still some limitations. First, the amount of PCa patient data was small; although the TCGA database is a large public database, there were only 499 prostate cancer patients, and only 422 cases with information on biochemical recurrence passed our careful screening. The number of prostate cancer patients in the GSE70769 dataset was also low, and the clinical information of some PCa patients in the GSE70769 dataset was incomplete. Second, this was a retrospective analysis, and selection bias may exist in this study. Third, although we initially demonstrated the important role of glutamine in prostate cancer cells through qPCR and siRNA silencing of target genes, extensive in vivo and in vitro experiments are still needed to further explore the potential mechanisms behind the regulation of glutamine metabolism in PCa risk score and BCR.
Conclusions
Our study identified the key glutamine-metabolism-related genes that describe the glutamine-metabolism background in PCa. We constructed a risk signature with high accuracy in predicting BCRFS and treatment response in PCa patients. Genetic mutations, the landscape of the TME and drug sensitivity were compared according to our risk stratification. Moreover, the study provides new insights into the mechanisms by which altered glutamine metabolism regulates the progression of PCa by modulating the dynamics of the TME. Further in vitro and in vivo verification is warranted to deeply explore the mechanisms involved in the metabolic regulation of glutamine synthesis in PCa.

Supplementary Materials: Table S1: The forward and reverse primers of the 5 key genes and GAPDH; Table S2: siRNA sequences for knockdown of GLUL and ASNS; Table S3: 91 glutamine-related genes from the GSEA database.
Author Contributions: P.G. and X.L. conceptualized the study, acquired funding support, and revised the final manuscript; H.W., W.Z. and Y.C. performed the design, acquisition, analysis and interpretation of data; H.L., H.T. and Z.X. performed the qPCR experimental validation; R.W. and J.T. collected clinical samples and profiles; C.Z. and R.L. performed the cell culture and CCK8 proliferation assay. All authors have read and agreed to the published version of the manuscript.
Informed Consent Statement:
The studies involving human participants were reviewed and approved by the Ethical Review Committee of The First Affiliated Hospital of Kunming Medical University. The patients/participants provided their written informed consent to participate in this study.
Data Availability Statement:
Publicly available datasets were analyzed in this study: The Cancer Genome Atlas (https://portal.gdc.cancer.gov/, accessed on 1 April 2022) and UCSC database (https: //xenabrowser.net/datapages/, accessed on 1 April 2022). The GSE70769 dataset was downloaded from Gene Expression Omnibus (GEO: https://www.ncbi.nlm.nih.gov/geo/, accessed on 17 May 2022). The original contributions presented in the study are included in the article/Supplementary Material. The rest of the data used and analyzed during the current study are available from the corresponding author on reasonable request.
Interaction between plasticized polyvinyl chloride waterproofing membrane and extruded polystyrene board, in the inverted flat roof
The inverted flat roof is a constructive system widely used in flat roof construction. In this constructive solution, the insulation is placed over the waterproofing material as a protection. It is believed that this solution provides a longer life cycle, given that it limits the thermal variation that the waterproofing material bears up to the end of its life cycle. Consequently, the result is a longer life for the waterproofing membrane. This constructive solution always incorporates polymers or other materials with a thermoplastic addition in their composition. Some polymers show interactions between them that can affect their integrity, and, at the same time, the bulk of polymeric materials are mutually incompatible. The extruded polystyrene board is always present in the inverted flat roof, and although it is an unbeatable product for this use, it presents incompatibilities and interactions with other materials, which can affect their properties and therefore their durability.
INTRODUCTION
Nowadays, building for long-term sustainability is essential, and therefore, establishing mechanisms for preserving the materials included in this constructive solution has become crucial.
In this paper, the inverted flat roof will be analyzed, focusing on and studying the possible interactions between extruded polystyrene board (XPS) and plasticized polyvinyl chloride (PVC-P) waterproofing membranes. Figure 1 shows a detail drawing (cross section) of the materials that make up this constructive solution. This is not, however, a chemical study; this paper intends to analyze some interactions that can affect the materials involved in this constructive solution, and the consequences that placing them improperly in a flat roof might have. PVC-P waterproofing membranes have been used in flat roofs for more than 35 years; this material has undergone great improvement over this time, adding better internal reinforcements to stabilize the dimensions of the membrane, and including new additives to withstand weathering, the ultraviolet radiation of the sun, and other adverse factors, in order to make a gradually more reliable and durable material. Thus, nowadays PVC-P waterproofing sheets are materials with a great deal of additives in their composition, and are composed of different layers in cross section. Carrying out a general study of the behavior of these materials in specific circumstances is a complex task, even more so when the many different manufacturers, who obviously vary the composition of their waterproofing single plies, are considered.
Several tests were carried out in the laboratory to obtain a clearer picture of the performance of these materials under pre-established circumstances, and a research work was carried out on PVC-P waterproofing membranes installed in flat roofs, on the condition of being part of an inverted flat roof.
OBJECTIVE
The main goal of this study is to detect interactions and incompatibilities between PVC-P waterproofing sheets and XPS boards, to research whether these processes can happen in a common inverted flat roof in use for several years, and to find out how they can affect the integrity of the materials involved in this constructive system. In addition, a wide variety of commonly used PVC-P waterproofing membrane brands were tested in the laboratory to check whether this can occur as a general rule. On the other hand, the effectiveness of some frequent auxiliary separating sheets for safeguarding the integrity of these materials was analyzed.
STATE OF THE ART
Most polymers are incompatible with each other; in fact, the interactions and incompatibilities between them have been widely studied. One of the fields of study in this matter is the compatibilization of incompatible polymer blends. This research line tends to create stable polymer combinations, in order to combine the properties of theoretically incompatible polymers (1,2) and to take advantage of the better qualities of each one.
PVC-P can contain an ample variety of additives, for example plasticizers, which increase the plasticity of a material. The main application of plasticizers is as additives for plastics. The ASTM D883 standard defines a plasticizer as: a substance incorporated into a plastic or elastomer to increase its flexibility, workability or distensibility (3). A great many plasticizers can be incorporated into a plastic; however, those with a greater molecular weight are more appropriate for increasing the durability of PVC-P waterproofing membranes (4). Phthalates are broadly used in PVC-P laminas; the physical characteristics of these plasticizers are rather diverse. The boiling point can fluctuate from 160 °C for Diallyl Phthalate to 384 °C for Di-sec-Octyl Phthalate (5). Other physical data, such as the melting point, can oscillate from 5.5 °C for Dimethyl Phthalate to −58 °C for Diisobutyl Phthalate (6).
The interactions between PVC-P and XPS might be summarized as a transfer process of plasticizers, commonly known as plasticizer migration, a widely studied phenomenon in other industries, such as the health and food industries (7). Plasticizers can migrate from PVC-P to any adjacent absorbent material if the bond at the interface between these materials is not too strong and if the plasticizer is compatible with the receiving material (8,9). Plasticizers of plastic materials can migrate to another substance or material, such as food products, liquids, or even another plastic (10). Nevertheless, plasticizer migration from plasticized PVC into other polymeric materials has not been studied as extensively as plasticizer migration into air (i.e., volatile loss) and liquids.
There is a wide variety of plasticizers, and each of them presents a similar behavior when a PVC-P containing some of them is exposed to heat. Time and temperature influence plasticizer migration (11); furthermore, the rate of plasticizer loss increases when the temperature rises (12), and this occurs even with bio-based plasticizers (13).
In addition, heat is an important factor for studying this process in the short term, and for making predictions about the behavior of a polymer in specific circumstances. On the other hand, polystyrene (in non-foamed form) is frequently used in chemical studies or tests as an absorbent material, and as a vehicle for accelerating the rate of plasticizer loss from a PVC-P sheet (14). Thus, in favorable conditions, crude polystyrene is capable of taking up plasticizers contained in other plastics. XPS is basically manufactured by extruding molten polystyrene containing a blowing agent (nitrogen gas or a chemical blowing agent) under elevated temperature and pressure into an atmosphere where the mass expands and solidifies into rigid foam (15). There is no evidence that XPS cannot absorb plasticizers. Additionally, once polystyrene has been transformed into plastic foam (XPS), its thermal conductivity lies in a range between 0.03 and 0.04 W/mK (16).
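As a rough illustration of what this conductivity range implies for the roof assembly, the following minimal Python sketch applies Fourier's law of steady-state conduction (q = k * dT / d) to XPS boards of the thicknesses considered later in this study; the 40 °C surface-to-membrane temperature difference is an illustrative assumption, not a value measured in this work.

```python
# Steady-state heat flux through an XPS slab (Fourier's law, q = k * dT / d).
# The conductivity range (0.03-0.04 W/mK) is the one cited above; the 40 K
# temperature difference is an illustrative assumption.

def heat_flux(k, delta_t, thickness):
    """Heat flux in W/m2 through a slab of conductivity k (W/mK),
    temperature difference delta_t (K) and thickness (m)."""
    return k * delta_t / thickness

for thickness_mm in (30, 50):          # board thicknesses used in this study
    for k in (0.03, 0.04):
        q = heat_flux(k, delta_t=40.0, thickness=thickness_mm / 1000.0)
        print(f"XPS {thickness_mm} mm, k = {k} W/mK -> q = {q:.1f} W/m2")
```

The sketch only serves to show that a thicker board halves the heat flux reaching the membrane in proportion to its thickness, which is consistent with the observation below that greater insulation thickness reduces, but does not eliminate, the possibility of interaction.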
Plasticizer loss has important consequences for waterproofing membranes. The amount of plasticizers in a PVC-P might reach 50% of the total mass, and in the case of some PVC geomembranes already studied, the initial plasticizer content was between 31% and 40% (17). The mass loss of a waterproofing sheet implies a loss of volume, which brings a variation of the membrane dimensions (18), and moreover a gradual reduction of the flexible properties of the sheet. This process will end up marking the end of the life cycle of the waterproofing sheet, eventually causing moisture problems and leaks.
Plasticizer loss in synthetic waterproofing membranes has been studied in the civil engineering industry. There are works based on the analysis of materials placed in reservoirs, but not on the interactions previously described. The amount of plasticizers contained in some synthetic membranes was studied, the additives and plasticizers were identified, and their behavior through time was analyzed (19). The phthalates used as additives in PVC-P waterproofing sheets were examined in a geomembrane placed in another reservoir (20). In contrast, another study examined in depth the performance of a PVC geomembrane after five years of service, used as a final cover system for the Dyer Boulevard Landfill in West Palm Beach, Florida; in this case, the sheet lost 13% of its initial plasticizer content (21). The conditions a synthetic waterproofing sheet has to bear in the construction field are different, and usually involve more variable factors and singularities, such as roof shape (whether it has a great number of corners or right angles), services (air conditioning, gas pipes, antennas, solar hot water and photovoltaic panels), sheet areas permanently exposed to the open air, etc. And the most important factor to be considered is contact or proximity with XPS.
When these waterproofing sheets began to be used in the building industry, problems appeared after several years in place, as was the case with some unreinforced PVC roof membranes (22). Concern about the effect of plasticizer loss on the durability of PVC-P membranes generated some studies as well (23). Regarding the evolution of the properties of these sheets installed on roofs, their performance through time has also been researched, and several samples were removed for laboratory characterization of selected mechanical and physical properties (24).
PVC-P waterproofing membranes have been studied from various interesting points of view, but not in the specific context of the inverted flat roof in the building industry.
MATERIALS AND METHODS
The methodology of this study is divided into three parts. The first part is devoted to analyzing an inverted flat roof in use for 10 years with a PVC-P membrane placed on it, looking for possible interactions between polymeric materials. The following two parts of this research article are the experimental tests performed in the laboratory, analyzing the possible interactions between XPS board and PVC-P.
An established standard test was partially used to check the response of these materials in the laboratory: the ISO 177:1998 standard, Plastics - Determination of migration of plasticizers. It describes an experiment based on exposing plastic materials to heat through time. Following this, it is possible to determine the tendency of plasticizers to migrate from the plastic materials into which they have been incorporated towards other absorbent materials or plastics placed in contact with them. This is done by analyzing the mass loss of the samples after several days in the draft furnace. The standard indicates that the samples have to be placed in close contact with two sheets capable of absorbing plasticizers, under defined conditions of pressure, temperature, and time. The mass loss of the test specimen becomes a measure of the plasticizer migration. In order to adapt this experiment to the materials placed in an inverted flat roof with a PVC-P waterproofing membrane, the absorbent material was the XPS board, which would take the plasticizers from the PVC-P waterproofing sheet.
The interactions between these materials, if they finally occur, can be appreciated after a week in the draft furnace. These interactions show perfectly visible degeneration in at least one of the materials involved in the test. Figure 2 shows the interaction between XPS and a sample of PVC-P waterproofing lamina after one of the tests carried out in the draft furnace. Two experiments were performed simulating small inverted flat roofs in different situations and formed by different configurations. Figure 3 shows an image of the samples (in the left part of the image, "A") and a detail drawing of their composition (in the right part of the image, "B"). The samples were placed on a tray specially made to carry and handle the specimens. Every single case (configuration) was tested three times, as the standard advises.
Testing an inverted flat roof in use
An inverted flat roof in use for 10 years was tested by removing the protection layers of the roof in different positions, and the state of the PVC-P sheet was analyzed, looking for interactions between materials. The studied areas were chosen depending on the orientation of the place: one with an important amount of solar radiation, especially in summer, and another shady area (with no direct sun radiation). In every position at least two parts of the sheet were analyzed: one of them on the vertical edge of the sheet, and another completely covered by the protection layers, with and without contact with XPS if possible, in order to study any kind of degradation of the materials.
Test in a draft furnace at 70 °C
The second part of this paper analyzes the results of a test in the draft furnace working at 70 °C. The test was run for seven and for fifteen days. Seventy-two samples of three different commercial brands were tested, thirty-six samples for each part. This experiment intends to detect interactions between XPS and PVC-P waterproofing laminas, and to test the behavior of the most common auxiliary separating barriers used in the inverted flat roof. These separating sheets are within the requirements of some current, non-mandatory standards, with a weight greater than 250 g/m2 (25). The temperature used for the test was the reference one considered by the ISO 177 standard; however, temperatures between 50 °C and 85 °C are allowed by the standard (26).
The waterproofing PVC-P materials tested were: Novanol 1.5 mm polyester fiber - Basf; Danopol FV 1.2 - Danosa; Sikaplan®-SGMA 1.2 (Trocal SGMA 1.2). Every brand was tested in four different configurations, and each one was tested three times. The configurations of the samples were: first, direct contact between PVC-P and XPS; and then, separating PVC-P and XPS with the following materials: polyester geo-textile 300 g/m2, polypropylene geo-textile 300 g/m2, and aluminum foil, a metallic barrier used to guarantee the absence of chemical interactions. All waterproofing samples were weighed before placing them in the furnace and after removing them. However, before being weighed, every sample underwent the conditioning process described in the standard ISO 291:2008, Plastics - Standard atmospheres for conditioning and testing.
The brands and types of the rest of the materials involved in this experiment were: XPS Type IV ROOFMATE SL, 30 mm; polyester geo-textile Sika® Geotex PES 300; polypropylene geo-textile Tex Delta 300 g/m2; and a common food-grade aluminum foil, 0.013 mm thick.
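As a quick cross-check of the sample counts stated above, the following sketch enumerates the 70 °C test matrix (three brands, four configurations, two durations, three replicates); brand and configuration labels are shortened from the text.

```python
# Enumeration of the 70 °C test matrix described above, checking that the
# stated totals (36 samples per phase, 72 overall) follow from the design.
from itertools import product

brands = ["Novanol 1.5", "Danopol FV 1.2", "Sikaplan-SGMA 1.2"]
configs = ["direct contact", "polyester geotextile 300 g/m2",
           "polypropylene geotextile 300 g/m2", "aluminum foil"]
durations_days = [7, 15]
replicates = 3

samples = [(b, c, d, r) for b, c, d, r in
           product(brands, configs, durations_days, range(1, replicates + 1))]
per_phase = len(samples) // len(durations_days)
print(f"{per_phase} samples per phase, {len(samples)} samples in total")  # 36, 72
```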
Test in a draft furnace at 50 °C
A further test was performed in the draft furnace, in this case running at 50 °C, and only for fifteen days (the analysis of the evolution of plasticizer loss was not the objective of this test, which is why the seven-day test was discarded). The purpose of this experiment was to obtain a wide view of the results of plasticizer migration; for this reason, a high number of brands used to waterproof roofs were tested. In fact, fifty-four samples of nine different commercial brands were tested. However, in this case, the samples were placed in only two configurations, and again, as in point 4.2, every sample was analyzed three times. The PVC-P materials tested in point 4.2 were also checked in this part. The waterproofing PVC-P materials tested were: Novanol 1.5 mm polyester fiber - Basf; Danopol FV 1.2 - Danosa; Sikaplan®-SGMA 1.2 (Trocal SGMA 1.2) - Sika; Sikaplan®-1.2G - Sika; Danopol HS 1.2 - Danosa; Sikaplan® W P 5160 -12H (light gray) - Sika; Sarnafil® G410 -12 - Sarnafil; Rhenofol CV 1.2 - Braas Gmbh; Flagon SP -1.2 - Flag. The configurations of the samples were: direct contact between PVC-P and XPS, and aluminum foil between both. The conditioning process was followed in this case as well, in the same way as in point 4.2. The other materials involved in this part were: XPS Type IV ROOFMATE SL, 30 mm; and a common food-grade aluminum foil, 0.013 mm thick.
THEORY -CALCULATION
The procedure described in section 4.1 of this study is based on visual inspection. Following the criterion previously developed, XPS and plasticized polyvinyl chloride are expected to interact when placed in specific conditions and circumstances. Therefore, the roof finally chosen was reviewed looking for traces of interactions.
The results of the two following parts were estimated by calculating the average of all mass loss values of the three samples tested for every configuration, brand and time placed in the draft furnace. The mean of the results had to follow certain rules; for instance, in the case of the three samples of Sikaplan®-SGMA 1.2 (Trocal SGMA 1.2) tested for 7 days in the draft furnace working at 70 °C and placed in direct contact with the XPS board, no single weight could deviate from the mean of the three weights by more than 10%. All the samples had to comply with this rule with respect to their corresponding mean, and if this requirement was not fulfilled because the weights were too dissimilar, the test had to be repeated until a correct result was reached.
To weigh all the samples, a balance with a precision of 0.001 g was used; the difference between the initial and the final mass is the mass loss of the material in these conditions and with this configuration.
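The weighing and averaging rule just described can be summarized in a short sketch; the replicate masses below are made-up illustrative values, not data from this study.

```python
# Minimal sketch of the evaluation rule described above: the mass loss of each
# of the three replicates is computed from the masses before and after the
# furnace, their mean is taken, and the test is flagged for repetition if any
# single value deviates more than 10% from the mean. The sample masses are
# illustrative numbers only.

def mass_losses(initial, final):
    return [m0 - m1 for m0, m1 in zip(initial, final)]

def mean_with_10pct_rule(losses, tolerance=0.10):
    mean = sum(losses) / len(losses)
    ok = all(abs(loss - mean) <= tolerance * mean for loss in losses)
    return mean, ok

initial = [25.312, 25.540, 25.128]   # g, before the furnace (illustrative)
final = [24.901, 25.150, 24.700]     # g, after conditioning and weighing
losses = mass_losses(initial, final)
mean, ok = mean_with_10pct_rule(losses)
print(f"losses = {losses}, mean = {mean:.3f} g, within 10% rule: {ok}")
```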
The final mass loss results are shown later in graphs to simplify and clarify the large amount of data obtained.
RESULTS AND DISCUSSION

Testing an inverted flat roof in use
After finding an inverted flat roof meeting all the parameters needed to carry out this part of the study, located in Madrid, Spain, it was found that the roof did not have any geo-textile or auxiliary separating layer between the XPS board and the PVC-P sheet. Consequently, in the case of this inverted flat roof, only two areas of the sheet were analyzed: one on the vertical, exposed zone of the sheet, and another under the XPS board. In the shady area of the roof (with no direct solar radiation), neither PVC-P nor XPS presented any trace of interactions or incompatibilities; however, in the area of the roof with a great deal of direct solar radiation, evident traces of tension were observed on the vertical, exposed area of the sheet. The movement of the horizontal area of the sheet can be appreciated in Figure 4, where red arrows indicate the shrinkage of the membrane.
The PVC-P waterproofing sheet of this roof was identified as RHENOFOL CG - FDT, and the entire roof is formed by this sheet. The XPS boards of the roof were Type IV ROOFMATE SL, 50 mm; over them, a 300 g/m2 polyester geo-textile; and finally a layer of gravel, five centimeters thick.
After removing the XPS board, possible evidence of chemical interactions between both materials could be observed. As a matter of fact, the surface of the XPS board had evident traces of the effect that close contact with PVC-P, temperature, and pressure had had on the material. Furthermore, this effect had consequences for the dimensions and mass of the PVC-P sheet, and moreover for the amount of plasticizers contained in it. In Figure 5 the marks of these possible chemical interactions can be seen: there is a lack of material on the XPS board (visible in the lower part of the image, "B"), and the resultant material of these interactions is adhered to the surface of the PVC-P sheet (in the upper part of the image, "A"). As can be seen in Figure 5, arrows show interactions in the vertical finishing of the sheet, and numbers show zones where degradation took place on the horizontal area of the membrane, which coincides perfectly with the lack of material pointed out in the lower part of Figure 5 (B). In other words, there is a match between the backside of the XPS material and the front side of the PVC-P waterproofing membrane. The numbers are references to make it easier to find the connection between the two parts of the figure. Owing to the area from which the photographs were taken, these traces were especially located where the waterproofing membrane had more relief, such as the overlap of the vertical finishing of the membrane over its horizontal part. That is why, on the surface of the XPS board, numbers 3, 4 and 5 point out traces with some straight lines, which match perfectly with the ends of the overlaps of the PVC-P waterproofing membrane (letter "c" in the figure) visible in the upper part of the image "A". Furthermore, in the corner of the roof, there were other traces (numbers 1 and 2), which were the consequence of higher reliefs on the waterproofing membrane produced by consecutive overlaps.
Number 2 points out an area where two overlaps coming from two directions coincide (one over the other; letter "c" in Figure 5). Number 1 additionally has a special reinforcement piece over the two previous overlaps (letter "d" shows the edge of this piece). Furthermore, other traces can be appreciated, especially on the surface of the XPS board.
As was predictable, the greater the insulation thickness, the lower the possibility of interaction, because of the higher thermal protection; in other words, the heat that would reach the PVC-P waterproofing membrane would be minimized. However, even five centimeters of insulation may not be enough to prevent these interactions. On the other hand, there is a troublesome area, which is the edge of the XPS board all around the perimeter of the roof, and every area susceptible to contact with the vertical finishing of the sheet; this area has a high risk of interactions due to the absence of thermal protection.
It is important to place a proper auxiliary separating sheet to cut down these interactions. Moreover, it is essential to protect the contact between the vertical finishing of the sheet and the edge of the XPS board. Figure 6 shows a close-up of these areas, with a detail of the possible interactions in a sample extracted from the roof (sunny area). The dotted line marks the right angle between the horizontal and the vertical areas of the roof; the region between broken lines points out the area in which XPS and PVC-P were in contact. As a matter of fact, in this part, remains of interactions stuck on the sheet can be seen (number five). In the area between the broken line and the dotted line, indicated with number three, there was no contact with the XPS due to the rabbet of the board edge.
Test in a draft furnace at 70 °C
This test was carried out in two different phases, by testing samples for seven and for fifteen days, in order to quantify the evolution of the mass loss of the samples. The results of the experiment show similar behavior among the different brands of PVC-P sheets tested. Every sheet and every configuration of the samples presented a mass loss in the draft furnace after the experiment. Big differences in mass loss can be observed between the direct-contact configurations and the samples formed with auxiliary separating sheets or a metallic barrier.
The samples formed with direct contact between XPS and PVC-P showed, after the first week in the draft furnace, perfectly visible degradation of the XPS board. Additionally, the polystyrene foam suffered a transformation in contact with the PVC-P. Figure 7 (in the left part of the image, A) shows an image of transformed polystyrene (with the appearance of a blue gel), which appeared partially stuck on the surface of the sheet in some of the samples tested (part B of Figure 7), as had occurred in the case of the inverted flat roof analyzed in point 6.1. XPS can be altered by contact with PVC-P under certain conditions.
After the experiment, every sample had a similar reaction; however, some of them did not have the remains of the interaction stuck on the PVC-P sheet. The behavior of each brand of waterproofing membrane is obviously particular, and it depends on the composition of each sheet.
On the other hand, it is necessary to control the stability of the XPS samples when the experiment is carried out at 70 °C. This temperature produces thermal degradation in the XPS samples; nevertheless, this is completely different from the deterioration of this material when it is tested in direct contact with PVC-P laminas. Thermal degradation varies the shape, the dimensions and consequently the density of the foam material (Figure 8). Despite 70 °C being the reference temperature in the ISO 177 standard, lower temperatures are more convenient for performing these experiments with foam materials.
The auxiliary separating sheets checked as plasticizer barriers offered a reasonable response to the test; both geo-textiles tested had the same weight, 300 g/m2. Figure 9 shows the mean results of mass loss in percentage, offering a comparison between the geo-textiles and the metallic barrier, which can guarantee the absence of chemical interactions, and shows the effectiveness of the auxiliary separating sheets tested as plasticizer barriers.
Volatile substances of a PVC-P waterproofing membrane can be approximately quantified by analyzing the mass loss of the samples formed with the metallic barrier in between.
Test in a draft furnace at 50 °C
After testing the PVC-P waterproofing membranes in the draft furnace, the mass losses of the samples were lower than those shown in section 6.2, as was predictable due to the lower temperature. Plasticizer migration also took place at this temperature with every sample. Indeed, the interactions between XPS and PVC-P in the direct-contact configurations also occurred at this temperature. The polystyrene foam suffered a transformation in contact with the PVC-P, as happened in section 6.2, but clearly to a lesser extent.
Figure 10 shows a photograph of different materials from distinct specimens tested. The effect of each particular configuration on the surface of the XPS boards is visible in this image. Notice the absence of interactions in the XPS sample formed with a metallic barrier (4').
The mass loss of the different brands of PVC-P sheets tested was similar for every configuration. Thus, Figure 11 shows a reliable and simplified way of presenting the mass loss results independently of the brand of PVC-P membrane. This figure also shows findings from the test at 70 °C, to better appreciate the mass losses obtained in the entire study. Results are presented by comparing mass-loss data between the two configurations.
Although the PVC-P membranes tested show a similar behavior for every configuration, as shown in Figures 9 and 11, the results vary across brands. The response is slightly different depending on the brand. There is a wide variety of plasticizers that can be chosen to make these sheets, and the same applies to additives; every combination of plasticizers and additives will give the sheet a slightly dissimilar behavior in specific conditions.
PVC-P waterproofing membranes are polymers made with a great number of additives such as antioxidants, fillers, plasticizers, heat and light stabilizers, pigments, etc. The composition of every PVC-P waterproofing brand is different, and as a consequence, the responses of the different PVC-P sheets are only broadly similar within each configuration tested in this research. There are materials that show better behavior in direct contact with the XPS but, for instance, lose more volatile substances, or react better or worse with a specific geo-textile. However, it can be stated as a general rule that PVC-P waterproofing sheets show interactions or incompatibilities with XPS board, and that the response of each brand is close to the results presented in Figures 9 and 11.
Summary of the results
This section shows the results of the tests performed at 70 °C; moreover, mean values, standard deviation and RSD% of the results are presented. On the other hand, the range of response of the sheets tested in sections 6.2 and 6.3 of this study can be seen in Figure 12, showing maximum and minimum values of mass loss (mean percentage) for every configuration.
Table 1 shows the results of the 7-day test in the draft furnace at 70 °C; additionally, Table 2 shows the results of the 15-day test in the draft furnace at 70 °C.
The mean values show a different behavior for each brand after the experiment in the draft furnace. This becomes evident when comparing the mean values of mass loss of the brands tested. The second brand of PVC-P waterproofing membrane obtained the lowest values of mass loss for every configuration and temperature. Nevertheless, this does not mean that this brand has better quality than the others; to state that, other factors would also have to be taken into consideration.
The values of standard deviation obtained are not related to the brand tested; in other words, there is no relation between the characteristics of the waterproofing lamina and the standard deviation values finally obtained. However, this is not the case for the configuration of the samples.
The configuration with the greatest values of standard deviation is direct contact between XPS and PVC-P, followed by the polypropylene geo-textile, the polyester geo-textile, and finally, the lowest values are offered by the metallic barrier. This is also the case for the mass-loss results, which follow the same rule. The effectiveness of the auxiliary separating lamina used produces a significant decrease in the mass loss of the samples tested, and consequently, the result values are closer to each other. The mean values offered by the configuration with the metallic barrier are more precise than the others.
Analyzing the RSD% values shown in Tables 1 and 2, it can be said that the precision of the results is highest for the direct-contact configuration (it has the lowest RSD% values). The other configurations offered higher, but quite similar, RSD% values (between 5% and 9% approximately). Nevertheless, from an experimental point of view, the precision of the results in the configuration with the metallic barrier has to be much higher in order to achieve data within the requirements of the ISO 177 standard. A balance with a precision of 0.0001 g is more appropriate, or even higher precision if possible.
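For reference, the RSD% figures discussed here are simply the sample standard deviation expressed as a percentage of the mean; a minimal sketch follows, with illustrative mass-loss values rather than the actual data of Tables 1 and 2.

```python
# RSD% (relative standard deviation) as used in Tables 1 and 2: the sample
# standard deviation of the replicates as a percentage of their mean.
import statistics

def rsd_percent(values):
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

direct_contact = [8.1, 8.3, 8.0]        # % mass loss, three replicates (illustrative)
metallic_barrier = [1.10, 1.22, 1.05]   # illustrative values only
print(f"direct contact RSD% = {rsd_percent(direct_contact):.1f}")
print(f"metallic barrier RSD% = {rsd_percent(metallic_barrier):.1f}")
```

Note how, even with a similar absolute spread, the much smaller mean mass loss of the barrier configurations inflates their RSD%, which is why a higher-precision balance is advisable for those samples.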
CONCLUSIONS
Heat is an important factor in the mass loss of a PVC-P waterproofing membrane; even without any possibility of interactions, heat produces mass loss (volatile substances are lost in this process).
XPS can be altered by contact with PVC-P waterproofing membranes under certain conditions. Interactions between these materials can occur in a common inverted flat roof. This phenomenon might be especially significant in latitudes with hot summers.
The amount of heat that can reach the PVC-P membrane in a conventional inverted flat roof is clearly minimized by the thermal protection that the XPS board offers to the constructive solution. However, it is not enough, even with the five-centimeter-thick insulation board of the roof studied.
In the long term, the mass loss observed implies a dimension loss in the waterproofing sheet. This process makes the waterproofing membrane become brittle. The shrinkage of the membrane brings strong internal stresses that will end up producing leaks by tearing the sheet, or by producing failures in the weakest points of the welding.
Every PVC-P waterproofing sheet used in an inverted flat roof must have an internal reinforcement to minimize this shrinkage effect; even so, the dimension loss occurs, as in the case of the sheet studied in sections 4.1 and 6.1 of this study.
PVC-P waterproofing membranes and XPS are remarkable materials, with excellent waterproofing and insulating properties for roofs, but they have to be placed with an appropriate auxiliary separating sheet, and the edges of the XPS board must be covered everywhere the vertical finishing of the membrane can be in contact with them, i.e. in the perimeters of the roof.
Direct contact between both materials (tested in the draft furnace) produces five to nine times more mass loss than the configurations with a proper barrier placed in between. This mass loss produces, in turn, a loss of membrane dimension, which brings internal stress and increases the pressure between the PVC-P waterproofing membrane and the XPS on the perimeters of the sheet (which raises the plasticizer migration).
The geo-textile with the best behavior as an auxiliary separating layer was the polyester geo-textile of 300 g/m2. On the other hand, the polypropylene geo-textile of 300 g/m2 can offer a good response in reducing the transfer of plasticizers from a PVC-P waterproofing sheet. Nevertheless, plasticizer migration does occur, even with these materials placed as auxiliary separating sheets.
There are many factors in assessing the quality of a PVC-P waterproofing sheet, and the stability of the plasticizers after the test in the draft furnace cannot be the only one taken into consideration. The quality of the internal reinforcement of the membrane, among other factors, can also be important.
For the study of interactions and incompatibilities between XPS and PVC-P in a draft furnace, a temperature of 50 °C is more appropriate. The plastic foam material remains stable throughout these experiments at this temperature.
The behavior of the materials placed in an inverted flat roof depends on many factors, such as heat, pressure, the composition of the PVC-P waterproofing membrane, the auxiliary separating layer placed in between, etc. Additionally, another important factor is the composition and the manufacturing system of the XPS board. Thus, it is really difficult to predict the global behavior of the materials involved in this constructive system.
Figure 2. Degradation of XPS in direct contact with a PVC-P waterproofing membrane after 15 days in the draft furnace (50 °C). Nomenclature of the image: A, XPS sample; B, PVC-P waterproofing membrane sample; c, edge of the XPS sample with no contact with the PVC-P lamina during the experiment; d, degraded area of the XPS in contact with the PVC-P sheet during the test.

Figure 3. Image of the samples for the experiment in the draft furnace (left part of the image, "A"), and a detail drawing of the composition of the specimens (right part of the image, "B"). "A": image before the experiment; the samples were placed on the tray, with 32 and 28 mm between samples to ease ventilation in the draft furnace. Afterwards, another plate of galvanized steel (same dimensions as the lower one) was placed on the samples, and the system was compressed with a stainless steel frame using threaded bars before introduction into the furnace. "B": detail of the composition of the samples. Nomenclature: 1, metallic tray to hold the samples; 2, auxiliary separating layer of polyester geo-textile of 150 g/m2 (same for all samples); 3, PVC-P waterproofing membrane (variable); 4, auxiliary separating layer (variable; no layer in the direct-contact configuration); 5, extruded polystyrene board (XPS) Type IV; 6, plate of galvanized steel.

Figure 4. Arrows show the waves produced by the movement (shrinkage) of the horizontal area of the PVC-P waterproofing membrane.

Figure 5. Detail of some possible evidence of chemical interactions between XPS and the PVC-P membrane in the sunny area of the roof (high levels of solar radiation). In the upper part of the image (A), the front side of the PVC-P waterproofing sheet; numbers and arrows identify some possible interactions and their remains. The lower part of the image (B) shows the backside of the XPS board (previously in direct contact with the PVC-P membrane); numbers identify interactions with lack of material. Letter "c" points out the edges of the two overlaps of the vertical finishing of the waterproofing membrane over the horizontal area of the lamina. Letter "d" points out the edge of a specific reinforcement piece for corners (extra overlap).

Figure 6. Sample of PVC-P waterproofing membrane extracted from the roof (sunny area) including traces of possible interactions with the XPS board. Dotted line: right-angle line between the horizontal and vertical areas of the waterproofing sheet; 1, horizontal area of the sample (the remains of the interactions in this area were removed); 2, vertical area of the sample; 3, vertical area with no contact with the XPS board, due to the rabbet of the board edge; 4, vertical area in contact with the XPS board; 5, remains of interactions stuck on the sheet; 6, exposed vertical area (open to the air and in contact with the gravel).

Figure 7. Image of samples of XPS and PVC-P waterproofing membrane after the 7-day experiment in the draft furnace at 70 °C (direct-contact configuration). In the left part of the image (A), an XPS sample with the resultant material of possible chemical interactions (similar to a blue gel) on its surface. In the right part of the image (B), a PVC-P sample with the resultant material of the interaction stuck on the surface of the sheet.

Figure 8. Image of samples of XPS after the 15-day experiment in the draft furnace at 70 °C. In the left part of the image (A), an XPS sample with thermal degradation and no interaction (formed with an auxiliary separating lamina). In the right part of the image (B), an XPS sample with thermal degradation and interaction. Letter "c" shows the edge of the XPS sample deformed by thermal degradation (slight frustoconical shape); letter "d" shows the perimeter of the tested face of the XPS, showing thermal degradation (variation of dimension, but no interaction).

Figure 12. Range of response of the sheets tested; maximum and minimum values of mass loss (mean percentage) for every configuration. The RSD% values are the same as those presented in Figures 9 and 11.
Status of SARS-CoV-2 infection in patients on renal replacement therapy. Report of the COVID-19 Registry of the Spanish Society of Nephrology (SEN)
Introduction: The recent appearance of the SARS-CoV-2 coronavirus pandemic has had a significant impact on the general population. Patients on renal replacement therapy (RRT) have not been unaware of this situation and, due to their characteristics, they are especially vulnerable. We present the results of the analysis of the COVID-19 Registry of the Spanish Society of Nephrology. Material and methods: The Registry began operating on March 18th, 2020. It collects epidemiological variables, contagion and diagnosis data, signs and symptoms, treatments and outcomes. It is an online registry. Patients were diagnosed with SARS-CoV-2 infection based on the results of the PCR of the virus, carried out both in patients who had manifested compatible symptoms or had suspicious signs, and in those who had undergone screening after some known contact with another patient. Results: As of April 11, the Registry had data on 868 patients, from all the Autonomous Communities. The most represented form of RRT is in-center hemodialysis (ICH), followed by transplant patients. Symptoms are similar to those of the general population. A very high percentage (85%) required hospital admission, 8% in intensive care units. The most used treatments were hydroxychloroquine, lopinavir-ritonavir, and steroids. Mortality is high, reaching 23%; deceased patients were more frequently on ICH, developed pneumonia more frequently, and received lopinavir-ritonavir and steroids less frequently. Age and pneumonia were independently associated with the risk of death. Conclusions: SARS-CoV-2 infection already affects a significant number of Spanish patients on RRT, mainly those on ICH; hospitalization rates are very high and mortality is high; age and the development of pneumonia are factors associated with mortality.
Introduction
In late 2019, the authorities of the People's Republic of China reported to the World Health Organization several cases of pneumonia of unknown etiology in Wuhan, a city located in the Chinese province of Hubei. Later, it was found that the infection was caused by a new coronavirus called SARS-CoV-2. This virus causes various clinical manifestations included under the term COVID-19, including respiratory symptoms of varying severity, from a common cold to severe pneumonia with respiratory distress syndrome, septic shock and multiple organ failure.
Since then, the virus has been spreading throughout the world, and to date, almost 2 million people have been infected. 1 As of April 11, 2020, the United States is the country with the most confirmed cases, followed by Spain, Italy, Germany and France. 1 The emergence of this pandemic has demanded special care for the more vulnerable groups of people. Among these are chronic kidney disease (CKD) patients, especially those who require renal replacement therapy (RRT).
Since the beginning of this pandemic, the Spanish Society of Nephrology (SEN) started to work together with the Ministry of Health, Nephrology services across the country, patients' associations and other scientific societies. The idea was to develop contingency plans and specific protocols to gain information and knowledge on this new and very serious disease.
One of the first projects developed during the first weeks of the SARS-CoV-2 pandemic affecting Spain was the generation of a specific Registry of patients in some form of RRT in Spain. This collective effort resulted in the "SEN Registry of COVID-19". The objective of this manuscript is to present the results of the first analysis of this Registry at week 3 of data collection.
Material and methods
The COVID-19 Registry began operating on March 18, 2020. The week before, a committee of experts was created to decide which variables should be included in the registry. The selection of variables for a Registry is neither immediate nor simple. It is desirable to incorporate as many variables as possible to acquire a large amount of information. However, complexity is a drawback: the greater the number of variables, the lower the degree of implementation. Furthermore, Nephrologists and Nephrology Services are currently under high demand, and a significant number of professionals are themselves affected by the coronavirus. For this reason, it was decided to choose the minimum set of variables that would give a perspective of the impact of the SARS-CoV-2 pandemic on patients on RRT in Spain. Furthermore, the information to be collected should be in line with the Registry of the European Renal Association - European Dialysis and Transplant Association (ERA-EDTA), so that data could eventually be unified at the European level.
After satisfying all these demands, epidemiological variables, modalities of renal replacement therapy, contagion information, diagnosis data, accompanying symptoms, treatments and outcome were included in the Registry. This is an anonymous Registry that meets the requirements imposed by legislation. Authorization for its operation was requested from the Regional Ethical Committee of the Principality of Asturias.
The COVID-19 Registry of the SEN is an online registry, with access through the website of the Society (www.senefro.org) after prior identification of the person accessing the site. Each user of the Registry has access to the patient data they have entered, but not to the rest of the information. The complete database can only be managed by the coordinator of the Registry or any other member of the SEN upon written request and prior authorization from the experts committee. The patients were diagnosed with SARS-CoV-2 infection based on PCR results, carried out on patients with symptoms suggestive of infection, or as screening after contact with another infected patient.
The Registry will remain operational as long as the current coronavirus pandemic situation is maintained, and periodic analyses of the collected data are planned.
Statistical analysis
Continuous variables were expressed as mean and standard deviation, and categorical variables as percentages. Baseline values were compared using the t-test and the chi-square test as appropriate. The Kolmogorov-Smirnov test was used to determine whether the data were normally distributed. Linear or logistic regression models were used to identify the factors associated with mortality. A P value of less than 0.05 was considered significant. The statistical package SPSS 20® for Windows (SPSS Inc, Chicago, IL) was used to analyze the results.
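The study used SPSS 20; purely as a hedged illustration, the same comparisons and models could be reproduced with open-source tools as sketched below. The variable names and toy data are assumptions for demonstration; only the methods (t-test, chi-square, logistic regression) mirror those named above.

```python
# Illustrative reproduction of the analyses described above; the data are
# synthetic (age mean/SD taken from the Results), not the Registry data.
import numpy as np
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = rng.normal(67, 15, 200)                 # years, mean/SD as reported
pneumonia = rng.integers(0, 2, 200)           # 0/1 indicator (toy)
died = (rng.random(200) < 0.2 + 0.002 * (age - 67) + 0.1 * pneumonia).astype(int)

# Baseline comparisons: t-test for a continuous variable,
# chi-square for a categorical one
t, p_t = stats.ttest_ind(age[died == 1], age[died == 0])
table = np.array([
    [np.sum((pneumonia == 1) & (died == 1)), np.sum((pneumonia == 1) & (died == 0))],
    [np.sum((pneumonia == 0) & (died == 1)), np.sum((pneumonia == 0) & (died == 0))],
])
chi2, p_chi, _, _ = stats.chi2_contingency(table)

# Logistic regression for factors independently associated with mortality
X = sm.add_constant(np.column_stack([age, pneumonia]))
model = sm.Logit(died, X).fit(disp=False)
print(p_t, p_chi)
print(np.exp(model.params[1:]))   # odds ratios for age and pneumonia
```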
Results
As of April 11, data from 868 patients on RRT with documented SARS-CoV-2 coronavirus infection had been entered into the Registry. Patients were from 103 health centers scattered throughout Spain. All regions, the so-called Autonomous Communities, have reported cases (Table 1); Madrid presented the largest number of cases (36%), followed by Catalonia (18%), Castilla La Mancha (12%) and Andalusia (9%). The average age of the infected patients is 67 ± 15 years and two thirds are male.
Three out of ten infected patients had had known prior contact with someone else infected. This percentage rose slightly to 34% in the case of patients on in-center hemodialysis (HDC), being 24% on peritoneal dialysis (PD) and 22% in the case of transplant (TX) patients. The average incubation period, in those patients with known prior contact, was 7 ± 4 days.
Regarding clinical manifestations (Table 2), 3 out of 4 patients had fever, two-thirds had symptoms of upper respiratory infection and 43% had dyspnea. Almost a quarter had gastrointestinal symptoms. Only 8% were asymptomatic. The most frequent complication was pneumonia, developed by 72% of patients, and 80% also had lymphopenia.
A very high percentage of registered patients (85%) required hospital admission, and 8% had to be admitted to Intensive Care Units (ICU); of these, almost two thirds required mechanical ventilation. The average length of hospital admission (considering only cured patients) was 10 ± 4 days.
The most commonly used treatments (Table 3) were hydroxychloroquine (85%) and the combination of lopinavir-ritonavir (40%). A third of the patients received the 3 drugs together. Steroids, interferon and tocilizumab were used less frequently.
To date, 198 patients have died (23% of the registered patients). The characteristics of these patients are reflected in Table 4. Compared with cured patients, the deceased were older, were more often on HDC, developed pneumonia more frequently, were more frequently on lopinavir-ritonavir and steroids, and had less frequently been prescribed renin-angiotensin-aldosterone system (RAAS) inhibitors before infection. The characteristics of SARS-CoV-2 patients on dialysis (including HDC, HHD, and PD) and transplant patients were different (Table 5). The transplanted patients were younger, with more frequent hospital admissions both in the ward and in the ICU; they developed pneumonia more often, and a greater number of transplant patients were treated with lopinavir-ritonavir, hydroxychloroquine, steroids, and tocilizumab; they had also received RAAS inhibitors more frequently before being infected. Finally, the percentage of deaths was lower in transplant than in dialysis patients.
We analyzed the factors associated with mortality in transplant and dialysis patients. In transplant patients, age and development of pneumonia were independently associated with mortality (Table 6). As for dialysis patients, again age and pneumonia were factors associated with mortality, but a beneficial effect of hydroxychloroquine was also found (Table 7).
Finally, cure of the infection has been reported in 20% of patients; the average time elapsed before cure was 12 ± 5 days. A total of 22.7% of patients died, and the rest remained in a situation of active infection (Table 8).
Discussion
The analysis of data collected during the first three weeks of the COVID-19 Registry of the SEN shows that SARS-CoV-2 infection affects a significant number of Spanish patients on RRT, mainly those on HDC. The rate of hospital admissions is very high and mortality is elevated; age and the development of pneumonia are risk factors for mortality, while the use of hydroxychloroquine could have had a protective effect, at least in dialysis patients.
In Spain, SARS-CoV-2 infection has spread throughout all the Autonomous Communities. According to data supplied by the Ministry of Health, 2 the incidence has been particularly high in Madrid, Catalonia, Castilla La Mancha and Castilla y León. Similarly, and according to the data from the SEN Registry, infected patients on RRT are mainly from these regions, although Castilla y León has reported fewer cases than would be expected given its overall number of infections.
Communities with the lowest overall number of infections, such as Cantabria, Murcia, Extremadura, the Canary Islands and the Principality of Asturias, have also reported a lower number of SARS-CoV-2 patients on RRT.
The mean age of the infected patients matches the mean age of the patients on RRT. As in the entire population on RRT, infected HD patients were also significantly older than transplant or PD patients. In the general population, it appears that the coronavirus affects males more than females. In the case of patients on RRT, there are also more men infected, but this could reflect the greater number of men in RRT programs.
Despite the fact that more than half of the Spanish patients on RRT are transplanted, 3 the spread of SARS-CoV-2 is more frequent in patients from HD centers. This fact may not be surprising because HD patients cannot fulfill the regulatory confinement, since they have to travel to hospitals and dialysis centers 3 or more times per week, and not infrequently they have to use public transportation. Moreover, a number of patients undergoing HDC live in nursing homes, a place of frequent infections. Despite immunosuppression, transplant patients represent only one third of the registered infected patients. Finally, it should be noted that patients on peritoneal dialysis, and especially those on HHD, represent a very low rate of infected patients, although their representation in the total of RRT patients in Spain is relatively low. 3 As far as clinical manifestations are concerned, there are no differences from what has been reported in the general population. The most frequent manifestations were fever and symptoms of upper respiratory infection. These same symptoms, with a similar frequency, were observed in a retrospective study including almost 1100 patients with SARS-CoV-2 infection not on RRT. 4 A quarter of our patients also reported gastrointestinal symptoms, which is more frequent than in the general population, as shown in the previously mentioned study. 4 By contrast, other authors report a rate of gastrointestinal symptoms that is even higher than in our patients. 5 Moreover, the first case of SARS-CoV-2 infection in a hemodialysis patient in the United States presented with diarrhea as the first symptom of the infection. 6 The different forms of screening or diagnosis in the population may be the cause of these nonuniform results.
Hospitalization rates in our patients are very high, exceeding 80% of cases. Published studies in the general population reveal considerably lower rates of hospitalization; 7 however, it should be taken into consideration that patients on RRT are older and have more comorbidities, which undoubtedly results in a more deteriorated clinical situation. Just three weeks ago, the results of a meta-analysis including 4 published studies and 1400 patients were published, concluding that chronic kidney disease was a risk factor for developing a more severe SARS-CoV-2 infection. 8 One of the possible explanations why CKD patients have a worse prognosis lies in the role that T lymphocytes play in recovery from the infection. 9 It is known that in uremia there is a deterioration in lymphocyte and granulocyte function, which may alter the defense mechanisms against the virus. 10 Hospital admissions were more frequent among transplanted patients than among those on dialysis, and they also had more admissions to ICUs. It seems that immunosuppression could play a role. 11 Mortality is high, above 20%. Dialysis patients with SARS-CoV-2 infection have a higher risk of dying than transplant patients; this circumstance is probably related to older age and comorbidity (a variable not recorded in the Registry). In the last report from the Ministry of Health, of April 13th, 2 mortality in the age group of 70-79 years was 13.9%, half of what we found in the dialysis patients of our Registry. The high risk of complications in patients on RRT must be taken into consideration. The analysis of factors independently associated with the risk of death shows that age and the development of pneumonia determine a worse prognosis. Furthermore, in the group of patients on dialysis, the use of hydroxychloroquine is associated with a lower rate of deaths; however, the significance of this last finding requires studies in a larger number of patients. Presently there is controversy about the beneficial effect of hydroxychloroquine. In in vitro studies, this drug has shown activity against various viruses, including coronaviruses and influenza. 12 A French study found some benefit from its use, but the number of patients was limited; 13 by contrast, another study from China did not find that patients treated with hydroxychloroquine had better recovery rates. 14 In our study, the beneficial effect of hydroxychloroquine is found in dialysis, but not in transplant patients. Nevertheless, our registry shows that hydroxychloroquine and other drugs commonly used in SARS-CoV-2 infection are used more frequently in transplant than in dialysis patients. Ongoing controlled studies will show whether the use of these drugs is beneficial.
The benefit of RAAS inhibitors is controversial. Some of the initial publications warned about the possibility that the use of these drugs (indicated in the treatment of hypertension, heart failure, ischemic heart disease, etc.) may increase the risk of infection by SARS-CoV-2. The rationale is that, to infect cells, the virus binds to angiotensin-converting enzyme 2 (ACE2), and this enzyme appears to be overexpressed in subjects on ACE inhibitors or angiotensin II receptor antagonists (ARA 2). 15 However, experiments in animal models have suggested that the use of ARA 2 can mitigate the infection by attenuating angiotensin II-mediated acute lung injury through blockade of the angiotensin II type 1 receptor. 16 A recent meta-analysis suggests a beneficial effect of ARA 2 on the severity of SARS-CoV-2 pneumonia in elderly patients. 17 For all these reasons, the health authorities recommend maintaining the indication of these drugs. 18 At the time of the present report there was only a 3-week period of data collection in this Registry; 20% of cases are cured and 60% are shown as active infection. In the coming weeks we will know the final outcome of all these patients. This information will provide more knowledge on the effects of SARS-CoV-2 in RRT patients.
Conflicts of interest
The authors have no conflicts of interest to declare.
2 The Library of John Webster
INTRODUCTION
The manuscript of the catalogue of the library of Dr John Webster of Clitheroe is to be found today in the archives of Chetham's library, Manchester (Chetham MS A.6.47). It was previously in the ownership of the celebrated Lancashire antiquarian, James Crossley, who was himself given the manuscript on 12 June 1876 by a friend, the Reverend Thomas Corser of Stand.46 The manuscript consists of twenty-two foliated leaves bound with marbled boards and leather spine, and is almost certainly a copy of an original draft, composed by Webster himself, probably for the purpose of evaluating his estate.47 In Webster's will, dated 3 January 1680 (see Appendix 1), the contents of the library were valued at £400, a figure roughly approximating to the more accurate catalogue evaluation of £402 6s. 10d.
The catalogue is systematically arranged according to subject and book-size and would appear to reflect the original plan of the books as they actually appeared on the shelves of Webster's library. It comprises fifteen sections (which, for the sake of convenience, I have labelled A to O) with 1501 entries in total. This figure, however, is not an accurate assessment of the number of volumes once possessed by Webster. Although it is impossible to give a precise figure,48 a conservative estimate would indicate a total number of volumes in the region of 1662 (a figure which includes works from Section M which were not in Webster's possession at the time the catalogue was produced).
Clearly, the sheer size of Webster's library is one of its most striking features, but what else, apart from the broadest generalizations, can it tell us about the owner of this collection? The limited use of such evidence is all too obvious. For example, even if it were possible to read every work, how would this help us to understand how a seventeenth-century reader such as Webster would interpret the same information and knowledge? How can we know whether or not Webster even read some or all of these works? Many of the volumes, particularly the older ones, were almost certainly the fruits of inheritance. Others may constitute unread gifts or volumes acquired merely for the sake of collection. To make matters worse, the library itself has not survived 46 Corser had originally intended to publish the catalogue, along with a "fuller life" of Webster, in the Proceedings ofthe Chetham Society; see Pots's discovery of witches ... reprintedfrom the original edition of 1613, Publications of the Chetham Society, new series, no. 6, Manchester, 1845, Appendix ('Works sugiested for publication'). 7 For evidence of Webster's authorship, see below, item 1176: "2 my owne Sermon bookes". This is presumably a reference to Webster's The judgement set [also item 451. 48 Difficulties encountered in this respect are largely due to the vague descriptions that the cataloguer frequently employed in the composition of the manuscript. Item 1197, for example, is simply described as "H:N: Workes". Likewise, the use of "Opera" to describe an unknown quantity of books (e.g., item 1390: "Opera Jo: Wigandi") makes it impossible to arrive at an accurate figure for the total number of volumes in the collection. For the sake of statistical analysis, I have counted such items as comprising one volume only.
intact, nor, as far as I am aware, have individual volumes come to light, which might have provided some clues to Webster's reading habits. In the absence of annotated books in Webster's handwriting, one is bound to accept that the catalogue as it stands represents for the historian only a limited record of one man's intellectual and literary tastes.49 Notwithstanding the very real methodological problems and consequent limitations that beset a study such as this, it is possible to elicit much useful information which might add to our overall picture of Webster. To a very large extent, I am encouraged in my optimism by the invaluable evidence of the breadth of Webster's reading to be found in his published writings. In all but his earliest theological writings, Webster included comprehensive and often detailed citations of references which I have included in the main body of the catalogue.50 An analysis of these suggests that in the writing of just three works (Academiarum examen, Metallographia, and The displaying of supposed witchcraft), Webster cited over two hundred authorities, or roughly fourteen per cent of the total number of items in his possession in 1682. Such a high figure (given the low sample) certainly suggests that Webster was no mere dilettante collector of books, a view confirmed by Webster himself in 1677, when he confessed to having led "a solitary and sedentary life ... having had more converse with the dead than the living, that is, more with Books than with Men".51 All other arguments to one side, the sheer size and value of Webster's library is surely testament enough to his voracious appetite for the printed word. Such a collection could only have been amassed at considerable personal expense, and Webster was, after all, a man of only moderate income and wealth. Moreover, geographical isolation, particularly from the specialist book markets in the new medical and scientific literature that features so prominently in Webster's library, must have placed real constraints on Webster's ability to purchase books. Some instances of individual items will serve to illustrate the point. The most expensive item in the library, Robert Fludd's five-volume Opera (Oppenheim & Frankfurt, 1617-26: item 1), was valued in 1680 at the staggering figure of £9 10s. 0d. A colour edition of Gerard Mercator's Atlas (Amsterdam, 1636-38; 1641: item 77) was priced at £8, and a three-volume edition of Conrad Gesner's Historia animalium (Zurich, 1551-58: item 83) at £6 10s. 0d.
In the case of less expensive items, the greatest problem for Webster was probably inaccessibility to booksellers. This fact alone probably accounts for one of the more curious features of Webster's library, namely the virtual absence of books dating from the period after 1658-59. Although the problem of identification of editions makes it ... apologized for the publication of this translation because "there are too many books already, and the multitude of them is the greatest cause of our ignorance". In religion, similar claims were made by the spokesmen of the radical sects, yet Webster, who shared the view that book learning and other aids were irrelevant to the acquisition of grace, was simultaneously establishing a vast library that included a large percentage of orthodox theological writings.55 In Webster's case at least, the charge of "ignorance" which was levelled against him in the 1650s, and has been repeated since, was clearly unfounded.56 Webster, in fact, never condemned book learning per se, even in his most vitriolic attacks upon the scholastic teachers of Oxford and Cambridge. The thrust of his argument was against what he perceived as the total reliance of the schools upon "poring continually upon a few paper Idols and unexperienced Authors: as though we could fathome the Universe by our shallow imaginations, or ... weak brains". It was for this reason that Webster advised the adoption of Baconian inductivism in natural philosophy and other branches of learning in order to redress the balance of university studies. Henceforth students should proceed to learn about the secrets of nature through the use of "manual operation, and ocular experiment", an eventuality which would "never come to pass, unless they have laboratories as well as Libraries, and work in the fire, [rather] than build Castles in the Air."57

Of Webster's breadth of learning and love of books there can be little serious doubt. But what of the actual contents of his library? Classification of each entry by subject (Table 1) and language (Table 2) is an obvious starting-point, notwithstanding certain important reservations as to the reliability of the statistical evidence used to determine these categories. In the case of Table 1, the most serious caveat concerns the quite often insuperable problem of definitional status. Certain works obviously fall under more than one broad subject classification ...

Webster's assault on the universities in the 1650s was not simply the work of an "ignorant fanatic" hell-bent on destroying the educational status quo. It was rather the product of extensive reading in the literature of the new science, in all its various forms, which Webster continued to cultivate throughout the 1650s.59 Another equally important element of Webster's eclecticism was his solid grounding in the learning of the "ancients", and in particular Aristotle, whom he regarded as the root of scientific error. Webster not only possessed a large edition of Aristotle's Works in Greek and Latin [79], but also a whole series of learned commentaries and textbooks on Aristotelian natural philosophy. The key to Webster's rejection of the Aristotelian system is clear. As a pagan, he was not fit to carry the title of "father of Christian Philosophy", a view consistently repeated by the radical sectaries in the 1650s. Furthermore, the high proportion of Jesuit scholars in Webster's library, many of them orthodox Aristotelians, must ...
Such thinking, which lay at the root of much of Webster's own reform proposals, was shared by Comenius's English patron, Robert Greville (1608-43), whose The nature of truth [1365] was also in Webster's possession. An earlier example of this concern for a truly Christian natural philosophy can be found in the work of the French Huguenot, Lambert Daneau. The guiding spirit behind the work of these men was the rediscovery of the original unity between man, nature, and God, an enterprise which, if successful, would lead to an end to religious and intellectual division in Europe. One aspect of the pansophist vision was the revival of Renaissance hermeticism. For Webster and others, here lay the answer to the stale and corrupting paganism that continued to contaminate the learning of the academies in mid-seventeenth-century England. The various branches of hermetic learning are all well-represented in Webster's library, with particular emphasis upon the art of alchemy as taught by Paracelsus (1493-1541) and his followers. According to Webster in Academiarum examen, "the secrets of nature" were more likely to be uncovered by the study of this one subject than "by all the Peripatetick Philosophy in the world". Such faith is clearly reflected in Webster ... [729]. An exposition of apocalyptic cabbalism, it has been described as an attempt by the author to envisage "a return to the earthly paradise of Genesis" where "mankind will be united in a common speech [Hebrew], a common government, and a common religion based on cabala".71 The concordancy with Webster's own concerns in the 1650s is strikingly apparent (cf. above pp. 3-6). Works on various aspects of occult philosophy occupy almost a third of Webster's collection of natural science books. Apart from the large number of works which fit no specific heading (sixty-eight; e.g. encyclopaedic compilations of natural curiosities such as Olaus Worm's Museum [4]), the next largest category of works in this section comprises the study of astronomy (twenty-eight volumes). Since Webster was an unqualified supporter of the heliocentric system, one would expect to find a preponderance of pro-Copernican works in his library. This, however, is not the case. For every astronomical text advocating the Copernican system, Webster ... In addition, a large collection of lesser authorities, many of them favourable to astronomical innovation, was available to Webster.73 Two further aspects of Webster's library of natural philosophy demand our attention: mathematics and medicine. From Webster's ownership of seventy-nine volumes of mathematical and related writings (e.g., works on mechanics, surveying, architecture), it is possible to gain some idea of the relative importance that he attached to this particular branch of learning. In Academiarum examen, Webster had reprimanded the universities for their neglect of the mathematical sciences, "the superlative excellency of which transcends the most of all other sciences".

73 In 1674, Webster informed Lister that he had prepared "a peice written of the Philosophers Universall Dissolvent, that hath laid by me above this five yeares, being unwilling to make it publike, untill I had by assured practise verified the virtues, and effects of the same"; Bodleian Library, Oxford, Lister MS. 34, f. 157. In the same letter, Webster referred to the Helmontian William Simpson, whom he claimed to have known in York "a good many yeares ago".
In particular, Webster lamented the failure of the universities to appreciate the academic and utilitarian value of such studies (a fault shared, of course, by Francis Bacon). Arithmetic and geometry were both worthy of serious attention, Webster claimed, echoing in the process a similar plea made by John Dee in his preface to Billingsley's edition of Euclid's Geometry [200].74 If the evidence of his library is a faithful guide, Webster's own knowledge of, and commitment to, mathematical investigation was substantial. A solid core of ancient texts75 was supplemented by more recent works of sixteenth-76 and seventeenth-century77 authorities. To ... [1187]. The latter's Arithmetica infinitorum, published at Oxford in 1655, was the most important new work of its kind in the field of mathematics, and contained within it the roots of the differential calculus. Oughtred's Clavis mathematica [506], on the other hand, was probably the more influential work given its widespread popularity as a comprehensive and comprehensible guide to arithmetic and algebra. As Richard Greaves has noted, the fact that two of these three English mathematicians were unattached to the universities when they completed their mathematical researches tends to lend substance to Webster's view that mathematics was largely ignored in the contemporary curriculum.78

Webster's possession of 242 volumes concerned solely with medical practice and theory is a sufficient indication of the significance that he attached to this specialized field of learning. Webster was, of course, a medical practitioner of long experience, who, despite the lack of any formal qualification (he gained official licence to practise in 1669), was clearly well-versed in all aspects of his chosen profession. As with Webster's interest in natural science generally, so too in the field of medicine one cannot fail to admire the astonishing breadth of medical learning covered by his library. Once again, one is struck by the familiar emphasis upon works of a modernist hue, but not without due consideration to older authorities. The former trend is most noticeable in the case of works relating to recent anatomical research. Webster ... In addition to these treatises of orthodox medical practice and theory, Webster owned twenty-eight volumes concerned solely with the art of surgery, an area of medicine which he almost certainly performed in the course of his own practice in Clitheroe. The most striking feature here is the high incidence of valuable folio editions, which include the works of such renowned authors as Joannes Scultetus ... Not surprisingly, however, the largest category of works in Webster's medical library falls under the heading of occult and iatrochemical medicine (sixty-eight volumes or 28.1 per cent). The vast majority of these are by authors of the Paracelsian school of medicine. In Webster's eyes, the key to medical reform lay in the study and application of Paracelsian medical methods, which would, he believed, ultimately overthrow the rotten edifice of the Galenic system. Webster was no half-hearted reformer who wished, like some, to amalgamate elements of Galenism (usually its theoretical base) with elements of Paracelsianism (chemical remedies) ... It would be a mistake, however, to view these works in isolation from those other works of occult philosophy which he owned (above pp. 22-23) and which together constitute 170 volumes or almost ten per cent of the complete contents of the library.
Throughout his own published works, he continued to cite favourably authors of the occult and hermetic school of natural philosophy, in full awareness of the damaging criticisms that were consistently levelled against this form of scientific enquiry.86 And yet this view of Webster as occult philosopher and hostile critic of traditional science is, I believe, only partially reflected in his library. In medicine, astronomy, mathematics, and various other branches of natural science, Webster was ... appreciative of the contribution which the "ancients" had made in all these fields of learning. Webster's library thus reveals a man of truly eclectic tastes in science who, despite his obscure background and apparent lack of academic credentials, was probably as well-qualified as anyone in England to criticize the deficiencies of contemporary scientific education.87

THEOLOGY

After natural science, the largest category of books in Webster's possession was devoted to theology (397 volumes or 24.4 per cent). In the light of Webster's career as non-conformist and radical critic of the national church (above pp. 1-6), one cannot fail to be impressed and somewhat surprised by the number and range of works contained in this part of the library. For the best part of twenty years, Webster's theological outlook was based upon a fundamental rejection of the premise that other men, through their writings, might act as authoritative guides in religious matters. And yet, from the evidence of his library, it is apparent that Webster was a man steeped in the accumulated wisdom of theologians, ancient and modern, particularly those of the Calvinist school. Works by learned Calvinist professors and preachers on a variety of topics dominate the collection (135 volumes or 34 per cent) and far exceed the relatively small number of works representative of what one might term the "mystical-radical" tradition (only thirty-three volumes or 8.3 per cent).88 Indeed, works by Catholic theologians and apologists are almost as numerous as works in the latter category (thirty-one volumes or 7.8 per cent), so that one can only assume that works of this kind possessed a significance for Webster out of all proportion to their size and number. What conclusions, then, can one draw from Webster's library of theological works?
First, it would seem wholly reasonable to suppose, howsoever Webster acquired or read these works, that one of the major early theological influences upon his religious beliefs was orthodox Calvinism. The most significant aspect of this large collection of Calvinist writings is its emphasis upon works of Calvinist exegesis (thirty-four volumes, or 25 per cent of the total for Calvinist authors). Two commentators in particular seem to have excited Webster's interest: the Scottish cleric Robert Rollock (1555?-1599) (eight volumes) and the German Ramist Johann Piscator of Herborn (1546-1625) (six volumes). Both were widely cited in Webster's later published works, alongside another leading German Calvinist and prolific biblical commentator, Amandus Polanus of Basle (three volumes). Webster clearly approved of works of this kind, the Ramist nature of many (Polanus was also a Ramist) undoubtedly providing much of their appeal (cf. below p. 41). It is worth noting, however, that Webster's liking for learned scriptural analysis was not restricted to Calvinist exegetes. In all, he owned sixty-two volumes (or 15.6 per cent of the total number of theological works) of biblical commentary covering a wide spectrum of theological views from ancient patristic and scholastic sources to modern Catholic and Lutheran commentators.90 Leaving to one side the question of confessional allegiance and influence posed by these works, it is difficult to avoid the conclusion that writings of this kind possessed distinct appeal for Webster. Predictably perhaps, there is a discernible emphasis upon ... The image that emerges from this litany of learned sources and aids to biblical study is one of a highly educated man, immersed in the knowledge of the scriptures and fully conversant with orthodox Protestant, especially Calvinist, interpretations of the word of God. It would appear to follow, therefore, that when Webster opted to reject the orthodox Calvinism of the English church, he did so in complete understanding of the scriptural and theological foundations upon which the reformed faith in England was based. This much is evident from a cursory reading of the contents of Webster's library. What is more difficult to ascertain is the extent to which Webster's semi-mystical faith of the spirit was itself derived from conventional and orthodox sources (see below pp. 33-34). A further clue to Webster's theological development may be found in the frequency with which certain books of the Bible recur as the object of learned discussion and commentary. Genesis, Psalms, and Revelation appear most often (five volumes apiece), with commentaries on the books of Daniel and Romans almost as popular (four volumes each). Paul's epistle to the Romans, of course, possessed a political as well as a religious significance, with the Pauline injunction to "honour the powers that be" (Romans, 13), the subject of much debate in Protestant circles throughout this period.92 Genesis and Psalms, on the other hand, were naturally well-favoured by all literate Christians, both for study and religious edification. In the context of Webster's proven attraction to millennial ideas in the 1650s (above pp. 2-3), undoubtedly the most intriguing entries in the catalogue are the nine commentaries upon the two prophetic books, Daniel and Revelation. Strangely enough, however, there is a general paucity of works of any kind that deal with millennial themes in the period of most acute crisis in Webster's life-time, the 1640s and 1650s.
Only a single work, a commentary on Revelation by the puritan divine, Francis Woodcock [1216], is to be found in this category. Published in 1643, it was an attempt to explain the current turmoil in England by placing the events of the time within the framework of the eleventh chapter of Revelation. Of far greater interest (though not strictly speaking a work concerned with the biblical millennium) was Paul Felgenhauer's Postilion (London, 1655) [375]. Ostensibly a work of astrological prophecy, the existence of this radical tract on the shelves of Webster's library points to a continuing interest on Webster's part in the theme of millennial reform in interregnum England. Its message of impending, fundamental change in all areas of human activity, coupled with Felgenhauer's prediction that the earth and the heavens would be made anew, was wholly in keeping with Webster's earlier pronouncements in 1653.95 In particular, Felgenhauer's sweeping indictment of conventional wisdom and scholastic education must have elicited a sympathetic response from the disillusioned radical who, only a few years earlier, had voiced much the same opinions with respect to the English universities. Felgenhauer, for example, chastised the Aristotelians, who "thinke not that any Physicks can be learned in the Bible", and he went on to admonish such men for holding the view that "that which is true in Theology does not hold in Phylosophy". According to Felgenhauer, these men were responsible for the destruction of that essential unity that once reigned between man, nature, and God, a common theme of radical literature in the 1650s. And Webster would surely have agreed with Felgenhauer's concluding remark that: "yee Book-men ... have filled the world full of Bookes, which are endlesse and numberlesse, nothing else but that you thereby are more and more scattered, confused and intricated".96 Felgenhauer's comprehensive assault upon established learning, religious hypocrisy, and traditional forms of government was just one of a number of radical tracts in Webster's library which together point to the most obvious source for Webster's own disaffection in the 1650s. In all, works of this nature total thirty-three volumes (8.3 per cent of the theological works) and cover a broad spectrum of unorthodox religious ideas and beliefs. Many obviously have a bearing on the development of Webster's own ideas. Others may help to shed light upon his subsequent disillusionment with the radical cause. It would, of course, be wrong to place too much emphasis on single works, but the fact that Webster possessed so few dating from this period, and that the majority were distinctly radical in tone, would ... ("Behmen"), then Webster's public recantation from the radical cause in about 1657-58 did not appear to dampen his enthusiasm for Behmenist literature. Whatever the case, Boehme's simple message that all men were united in the brotherhood of the holy spirit, and his belief in the superfluous nature of doctrinal and liturgical controversy, were echoed throughout Webster's early theological works.
It is not surprising, therefore, that one of Webster's few known associates during the 1650s was the Welsh radical, William Erbery (1604-54), who was himself deeply imbued with Behmenist beliefs, and whose posthumous writings [452], edited by Webster in 1658, are to be found in the catalogue.98 It is not inconceivable that Webster may also have shared the company of another Behmenist (and ranter sympathizer) John Pordage (1607-81), who knew Erbery and whose apologetic narration Innocence appearing (London, 1655) [168] was also to be found on Webster's shelves.99 Another source for Webster's aversion to doctrinal orthodoxy and religious uniformity was the German mystic Valentine Weigel (1533-88). Webster owned two curious, but very influential, tracts by Weigel [373; 1370], both of which were translated into English in the late 1640s. In Of the life of Christ [1370], Weigel had stressed the idea that salvation was not tied to the observation of the sacraments or other human inventions in religion, but was rather the gift of pure faith. All men, Weigel claimed, possessed access to the transforming power of faith, which was acquired through belief in the inner spiritual Christ. Most men chose to reject this free gift and opted instead to follow the path of base "Adamic man". Those, however, who opened their hearts to the principle of the "Christ Life" within them were automatically received into membership of the true church, an invisible congregation of believers united by their common faith in the inner Christ. The result was a religion devoid of doctrine and ceremonies, tolerant and unworldly, which, like that professed by Webster:
Good books, outward verbal ministry have their place, they testify to the real Treasure, they are witnesses to the inner Word within us, but Faith is not tied to books, it is a new nativity which cannot be found in a book. He who hath the inward Schoolmaster loseth nothing of his Salvation although all preachers should be dead and all books burned.100

One of the chief features of both Boehme's and Weigel's mystical reasoning was its tendency to interpret the message of the scriptures by depicting the biblical struggle between good and evil as one large allegory for the battle that took place within the hearts and souls of each and every believer. These ideas gained wide currency in England during the 1640s, and it is likely that Webster encountered them at this time and then adapted them to his own religious ends. Webster's insistence, for example, that Satan and Antichrist existed in man only as metaphors for sin and evil (cf. above pp. 2-3) may well have derived from Joseph Salmon's Antichrist in man (London, 1647) [1367a]. Similar ideas are to be found in the works of William Erbery and the American Familist, Samuel Gorton (d.1677). In the latter's Incorruptible key (London, 1647) [464], the author interpreted the biblical references to witchcraft to mean "those spiritual juglings" that the learned ministers employed "by art, and humane learning ... in and about the word of God".101 Webster's interest in Gorton's work was maintained until the late 1650s, since he also possessed his Antidote against the common plague (London, 1657) [458]. In this work (addressed to Oliver Cromwell), Gorton defended the notion that Antichrist was "not to be confined to any one particular man, or devil", and he repeated his earlier suggestion that the clergy were little better than the witches they persecuted.102 ... such may well indicate a further sign of Webster's growing disillusionment with the cause of radicalism. In this work, Owen professed to hold a strong desire for religious unity and peace but concluded, in much the same way as Webster in 1658 (above pp. 6-7), that the time was not yet right. Owen therefore cautioned that it was the duty of all men to yield to the present system of church government in England, a policy born out of practical necessity and one that would sorely test the consciences of many puritan ministers in 1660. Webster, as we have seen (above pp. 6-7), was probably reconciled to this way of thinking by the late 1650s, and so had little difficulty in accepting the restored church in 1660 (note his possession of a copy of the special prayer devised to be read on the anniversary of the death of Charles I, and published in ...). On a superficial level at least, there seems little reason to doubt the fact of Webster's public acknowledgement of the restoration church. It is just possible, however, that Webster combined outward conformity to the church of England after 1660 with a continuing private appreciation of the merits of radical religious beliefs. This insinuation was suggested in the case of certain elements of Webster's reasoning with regard to witchcraft (above pp. 11-12). It may also be inferred from the admittedly ambiguous evidence to be found in that section of Webster's library entitled "Bookes lent & omitted in ye formr Catologue" (Section M).
Since the catalogue was probably compiled shortly before Webster's death in 1682, it is evident from the references to his own published sermons [1176], two volumes of Socinian writings [1189a-b], and the "Works" of the Familist Hendrik Niclas [1197] that radical literature was still circulating in the Clitheroe region some twenty years after the restoration. It is not beyond the bounds of possibility, therefore, that Webster continued privately to disseminate the radical message whilst at the same time maintaining a public image of conformity. After all, such expediency was a key element in the survival of Familist groups like the Grindletonians.104 In the last resort, the whole issue of influence, and the extent to which it can be inferred from documents such as library catalogues, is one which defies precise analysis and evaluation. In Webster's case, I have merely attempted to suggest certain lines of speculation linking his known religious views with the volumes in his library. Naturally therefore, the emphasis of my comments has focused upon those works that espoused radical theological beliefs. One should not infer from this, however, that the other works of theology in his library were entirely unrelated to the formulation of Webster's own religious outlook. It is highly probable, for example, that the large number of orthodox Calvinist and Lutheran authorities in Webster's library may have provided him with an alternative (or original) source for the view that human reason and learning were immaterial to the acquisition of grace. Uncertainty in learned Protestant circles as to the exact function and place of reason was a common theme of much Calvinist and Lutheran literature.105 Inevitably, the doubt that this created in some minds as to the relevance of learned human authority in spiritual matters was a double-edged sword and produced a popular obscurantism that was never the object of the original authors. An example of this kind of work was Jean Daille's Traicte de l'employ des sainct peres [1088] in which Daille censured excessive reliance upon patristic and scholastic sources. Yet Webster reminds us that even sources such as these, frequently the object of Protestant scorn, were not entirely without merit. In Academiarum examen, for example, he repeatedly cited Saint John Chrysostom [136] as one source for the view that human learning was antithetical to true religion. One is reminded yet again of the fact that Webster was at heart a committed eclectic, a statement which applies as much to his theology as to his views on natural science.106 One should not exaggerate, however, the reaction against learned values and human reason in Protestant discourse, which was generally outweighed by works in favour of such aids to salvation. Webster himself owned a number of the latter, including an exhaustive defence of learning and reason by the Englishman, Egeon Askew [appended to 1234], and the German Calvinist, Nicolaus Vedelius [774]. In addition to these, Webster also possessed numerous manuals for preachers, which were designed to illustrate the various uses of learning, reason, and logic in the construction of sermons (e.g., works by Bartholomew Keckermann [1410] and Niels Hemmingsen [1469]).107 Despite protestations to the contrary, Webster almost certainly imbibed elements of much of this literature, and was indebted, to some extent, to scholastic methods of theological argument and discourse.
Indeed, his inconsistency in this respect was used by his adversaries to expose the flaws in his radical arguments. George Wither pointed out in 1653 that much of Webster's phraseology in theological matters was clothed in the language and logic of those scholastic conventions that he purported to despise.108 In theology too, then, one is faced with the paradox of a would-be reformer whose commitment to change was shaped as much by traditional, orthodox sources as it was by new ideas and beliefs.
HISTORY
The most conspicuous feature of Webster's large collection of historical works (169 volumes or 10.4 per cent) is its astonishing range. Although works of ancient history predominate (sixty-four volumes), Webster owned an impressive set of annals and histories of medieval and Renaissance Europe, as well as a substantial number of works relating to the history of the church. Most of the works relating to ancient Greece (eleven volumes) and Rome (thirty-seven volumes) can probably be accounted for by the fact that texts such as these were frequently used for instruction in the grammar schools of early modern England. Works ... [127], and Stow [125; 933], Webster owned a series of lesser works which together set out to emphasize the glorious antecedents of the English people, and in some cases that of their monarchy. Two works in particular stand out for comment, not so much for any intrinsic merit that they might possess, but rather for the prominence which they gave to the ancient historical myth that the kings of England were directly descended, through Arthur, from the Trojan founder of Britain, Brutus. The first, Sir John Price's Historiae britannicae defensio [897], has been described as "the major scholarly affirmation of the pro-Brutus-Arthur faction".109 The other, Thomas Heywood's Life of Merlin [468], reiterates the same theme, and extends the myth to include Charles I and the Stuarts (a strange work, one might think, to find in the hands of a parliamentarian). Webster's interest in such matters, however, was probably unrelated to either historical or political concerns. A much more likely explanation lies in Webster's extraordinary fondness for tales of Arthurian-style romance, which quite often took as their starting-point the Brutus-Arthur legend (see below pp. 36-37).
Despite the concentration of interest in British history, there are surprisingly few works to be found relating to recent events in Webster's own life-time [107]. Much supplementary historical information relating to non-European subjects was also available to Webster in the form of early travel journals, which proved highly popular in this period as general works of scholarly reference (for statistical purposes I have included these under the category of Natural Philosophy). Webster himself was greatly indebted to one such work, by the Jesuit, Joseph de Acosta ... French, Italian, and English, many of them not published or translated until the 1640s and 1650s. The major source for these chivalric romances was the Brutus-Arthur legend (see above p. 35), which underwent dramatic literary embellishment in the late-fifteenth and early-sixteenth centuries at the hands of unknown Spanish authors. The most famous product of this school of writing were the stories of Amadis de Gaule ...

Works of this kind can be interpreted and understood at various levels of meaning. Many clearly read them for pure enjoyment and little else, since they were certainly amongst the most popular forms of literature in sixteenth- and early-seventeenth-century Europe.112 A more sophisticated audience, however, was meant to perceive a didactic purpose in these stories, ranging from princely advice and chivalric instruction to the conveyance of specific moral points. Such was the purpose of John Lyly's Euphues [914] and Sidney's Arcadia [180], the latter displaying another common feature of this genre, namely its idealization of rusticity (cf., for example, the Diana of Jorge de Montemayor [1096]). Of course, it is impossible to say what aspects of these novels attracted Webster's attention. But the mere fact that Webster, given his religious background, collected such works is itself quite extraordinary. Books of this kind, so one is generally led to believe, were anathema to most puritans and sectaries, who felt that they tended to corrupt the minds of their readers and distract them from their godly duties. Webster, however, far from rejecting literature of this kind, was actively engaged in the collection of the latest romances and fictions in the 1640s and 1650s, in the period, that is, of his own greatest commitment to the reform of learning and religion.113 Webster, in fact, had little to say about popular literature, ancient or modern, in Academiarum examen, apart from the vaguest indictment of excessive reliance upon pagan authors, poets, and dramatists. Poetry, rhetoric, and other branches of classical literature were not condemned outright by Webster, who, unlike many of his radical colleagues, was content to allow moderate use of such studies. One of the principal benefits of this learning was its contribution to the perfection of style and eloquence which Webster almost certainly employed in his capacity as grammar school master. On the whole, the preponderance of Hebrew (fourteen volumes), Latin (twelve volumes), and Greek (seven volumes) is entirely predictable. Not only was knowledge of all three languages considered an essential accoutrement for serious biblical scholarship, but skill in these languages was also established as a staple element in the curriculum of the grammar schools.

112 The popularity of such works is discussed in Margaret Spufford, Small books and pleasant histories: popular fiction and its readership in seventeenth-century England, London, 1981.
Webster must therefore have used many of these works in his teaching duties and almost certainly applied them to his interest in scriptural exegesis (cf. above pp. 28-29).120 On the whole, there is little that is remarkable or noteworthy about these works. The numerous Latin grammars and manuals are primarily humanistic in tone and include a number of popular educational manuals designed to ... who, in Webster's words, had attempted "to lay down a platform ... that youth might as well in their tender years receive the impression of the knowing of matter, and things, as of words, and that with as much ease, brevity and facility".123 The appeal of Comenius's linguistic theories lay in the practical benefits to learning in general which such methods were purported to produce. Science, in particular, stood to gain from the implementation of the Comenian system, which attempted to substitute an emphasis on things for an emphasis on words. In England, ideas of this kind were extremely influential among the Baconian reformers, who, in some cases, extended the scope of their linguistic research to the quest for a universal language. Webster had, of course, shown a great deal of interest in such schemes in Academiarum examen, citing Jacob Boehme as the source of his belief that one day the universal language of nature would be rediscovered (above p. 3). According to Webster, all learning, not just science, stood to benefit from this discovery, a view which no doubt accounts for his possession of other works concerned with this subject.

One of the smallest categories of works in Webster's library consists of the three branches of philosophy: ethics, logic, and metaphysics (fifty-six volumes). All three, to some extent, were considered in need of reform by Webster in the 1650s, with metaphysics singled out for special criticism. Metaphysics was, in fact, one of the few subjects under-represented in Webster's library (nine volumes), and most of these were wholly traditional in their adherence to Aristotelian method and form.125 Webster seems to have attached far greater significance to the study of logic (twenty-two volumes) and ethics (twenty volumes), subjects which he considered worthy of university study once they too had been purified from the corruptions of Aristotle and the schoolmen. In both cases, however, Webster possessed a solid grounding in the works of traditional authors, which formed the basis of instruction at Oxford and Cambridge.
In ... [413], which, in its combination of poetry, art, and philosophy, possessed obvious pedagogical uses.
In the case of logic, there already existed in Webster's time a framework for wholesale reform of the Aristotelian system of logic in the shape of Ramism. Not surprisingly, Webster owned a number of Ramist texts. In addition to two copies of the Dialectica of La Ramee (Ramus) [801; 802], Webster ... He was also well versed in the works of a number of Protestant theologians who applied Ramist logic to their religious studies (e.g., in England, Downame, Ames, Perkins, and Fenner; in Europe, Alsted, Piscator, and Polanus). The chief end of Ramism was to simplify the actual process of learning and to make knowledge more accessible and factual, rather than conjectural. One of the main criticisms levelled against scholastic logic (by Webster amongst others) was that it was incapable of producing new knowledge or understanding. Ramism clearly could, and one area to which it was commonly applied was the puritan sermon. Granger's Syntagma logicum (London, 1620) [474], for example, was a comprehensive guide to the new logic which dealt exhaustively with the various religious applications of Ramism. Interestingly, Webster had little to say about Ramus's methods in Academiarum examen. If this is indicative of an equivocal response on Webster's part to Ramism, then it may well have arisen because of its associations with orthodox Calvinism.129 In logic, as in so many other areas of learning, Webster was fully acquainted with traditional Aristotelian method. The logical works of Aristotle [234; 807] were supplemented by numerous manuals and expositions of peripatetic logic, many of which were commonly employed as standard texts in the schools and universities of seventeenth-century England (... [1399]). The works of Aristotle were presumably to be relegated to a minor position in the curriculum, but not abolished altogether, since they were felt to possess some valuable insights into this particular branch of learning.134

CONCLUSIONS

Hugh Trevor-Roper has described John Webster as "a learned and dogmatic auto-didact ... [and] a compulsive name dropper" who "uncritically sang the praises of all writers, who, from whatever position, had attacked ... Aristotle ... and Galen".135 In the light of the preceding analysis of Webster's library, I am inclined to the view that this represents a fair assessment of Webster the man, except perhaps in one respect. If Webster did name-drop in his writings, we now know that he did so from a position of complete familiarity with the sources that he cited. Likewise, his assault on the "ancients" in general, and Aristotle in particular, was not simply the typical knee-jerk response of all radicals in this period to the traditional university curriculum. It was rather the product of a man who in all probability had received a traditional university education and who opposed the wisdom of the "ancients" from a vantage-point of knowledge rather than ignorance (cf. William Dell whose works Webster owned [450]). The extent of Webster's knowledge, and the range of his scholarly interests, was remarkable for a man so far removed for much of his life from the mainstream of intellectual activity in England. His library, dominated by works of medicine, natural philosophy, and divinity, must have been one of the largest private collections in the north of England. Unfortunately, however, it has not survived intact, nor as yet have any of the volumes that once formed part of the library come to light.
Until that time, any attempt, my own included, to evaluate the meaning of these books for their original owner must remain speculative. I am nonetheless convinced that for those more expert than myself, the catalogue of Webster's library provides a unique insight into the mind of one of the leading proponents of radical educational, religious, and scientific change in seventeenth-century England.

134 Webster owned a single copy of Aristotle's Politica
Hera: A Heterogeneity-Aware Multi-Tenant Inference Server for Personalized Recommendations
While providing low latency is a fundamental requirement in deploying recommendation services, achieving high resource utility is also crucial in cost-effectively maintaining the datacenter. Co-locating multiple workers of a model is an effective way to maximize query-level parallelism and server throughput, but the interference caused by concurrent workers at shared resources can prevent server queries from meeting their SLA. Hera utilizes the heterogeneous memory requirements of multi-tenant recommendation models to intelligently determine a productive set of co-located models and its resource allocation, providing fast response time while achieving high throughput. We show that Hera achieves an average 37.3% improvement in effective machine utilization, enabling a 26% reduction in required servers, significantly improving upon the baseline recommendation inference server.
I. INTRODUCTION
Deep neural network (DNN) based personalized recommendation models play a vital role in today's consumer facing internet services (e.g., e-commerce, news feed, Ads). Facebook, for instance, reports that recommendation models account for more than 75% of all the machine learning (ML) inference cycles in their datacenters [1]. A major challenge facing this emerging ML workload is the need to effectively balance low latency and high throughput. More concretely, unlike training scenarios where throughput is the primary figure-of-merit, ensuring low latency responsiveness is a fundamental requirement for inference services, especially for these user-facing recommendation models. Nonetheless, achieving high server utility and system throughput is still vital for hyperscalers as cost-effectively maintaining the consolidated datacenters directly translates into low total cost of ownership (TCO).
Given this landscape, "co-locating" multiple workers from a single or multiple recommendation models is an effective solution to improve system throughput. As the inference server is constantly being fed with numerous service queries, the scheduler can utilize such query-level parallelism to have multiple inference queries be concurrently processed using these multiple workers. Recommendations are typically deployed using CPUs because of their high availability at datacenters as well as their latency-optimized design. A key challenge in co-locating recommendation models over a multi-core CPU is determining which models to co-locate together, how many workers to deploy per model, and how to gracefully handle the interference between co-located workers at shared resources, i.e., caches and the memory system. Latency-critical ML tasks operate with strict service level agreement (SLA) goals on tail latency, so even a small amount of disturbance at shared resources can cause deteriorating effects on "latency-bounded" throughput (i.e., the number of queries processed per second that meets SLA targets, aka QPS).
To this end, an important motivation and contribution of this work is a detailed characterization of the effect of co-locating multiple recommendation model workers on tail latency as well as latency-bounded throughput. We make several observations unique to multi-tenant recommendation inference. Conventional convolutional and recurrent neural networks (CNNs and RNNs) are primarily based on highly regular DNN algorithms. These "dense" DNN models enjoy high QPS improvements by utilizing query-level parallelism to scale up the number of multi-tenant workers [2], a property this paper henceforth refers to as worker scalability. However, DNN-based recommendations employ "sparse" embedding layers in addition to dense DNNs, exhibiting a highly irregular memory access pattern over a large embedding table. Depending on which application domain the recommendation model is being deployed for (e.g., ranking, filtering), the configuration of different models can vary significantly in terms of its 1) embedding table size, 2) the number of embedding table lookups per each table, and 3) the depth/width of the dense DNN layers. All these factors determine the memory capacity and bandwidth demands of a model, affecting its worker scalability (Section V). For example, recommendations with a modest model size generally exhibit a compute-limited, cache-sensitive behavior with high worker scalability. On the other hand, models with high memory (capacity and/or bandwidth) requirements are substantially limited in their worker scalability, rendering co-location a suboptimal, or worse, an impossible design point. As a result, blindly choosing the recommendation models to co-locate, without accounting for each model's worker scalability, leads to aggravated tail latency and QPS, leaving significant performance left on the table.
Fig. 1: Model architecture of DNN-based recommendations

In this paper, we present Hera, a "Heterogeneous Memory Requirement Aware Co-location Algorithm" for multi-tenant recommendation inference. The innovation of Hera lies in its ability to accurately estimate co-location affinity among a given pair of recommendation models. Models with high (or low) co-location affinity are defined as those that can (or fail to) sustain high QPS while sharing the compute/memory resources with each other. Our key observation is that memory capacity and/or bandwidth limited recommendations with low worker scalability generally exhibit high co-location affinity with models with high worker scalability. This is because memory-limited models fail to fully utilize on-chip cores and their local caches, allowing compute-limited/cache-sensitive models to leverage such opportunity to spawn more workers while causing less disturbance to tail latency. Based on such key observation, we design Hera with two key components: 1) a "cluster-level" model selection unit (Section VI-B) and 2) a "node-level" shared resource management unit (Section VI-C). At the cluster level, Hera utilizes an analytical model that systematically evaluates co-location affinity among any given pair of recommendation models. Hera's model selection unit then utilizes this information to determine what models are most appropriate to be co-located together across all the nodes within the cluster. Once the models to co-locate are determined, Hera's node-level resource management unit examines how many workers as well as how much shared cache capacity each model should be allocated within each node to effectively utilize query-level parallelism for high sustained QPS. Overall, Hera achieves an average 37.3% improvement in effective machine utilization, which enables a 26% reduction in the number of required inference servers, significantly improving state-of-the-art.
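As a toy illustration of this pairing intuition only (not Hera's actual analytical model, which Section VI-B develops), one could greedily match the most memory-bound model with the most compute-bound one, so that the cores and cache left idle by the former are absorbed by extra workers of the latter. The memory-boundedness scores below are hypothetical placeholders.

```python
def pair_models(models: dict[str, float]) -> list[tuple[str, str]]:
    """Greedily pair the most memory-bound model with the most
    compute-bound one; `models` maps a model name to a hypothetical
    memory-boundedness score in [0, 1] (higher = more memory-limited)."""
    ranked = sorted(models, key=models.get)  # compute-bound ... memory-bound
    pairs = []
    while len(ranked) >= 2:
        # Memory-bound models under-utilize cores/caches, which a
        # co-located compute-bound model can exploit with more workers.
        pairs.append((ranked.pop(), ranked.pop(0)))
    return pairs

print(pair_models({"DLRM(B)": 0.9, "NCF": 0.2, "DLRM(D)": 0.8, "DIN": 0.3}))
# -> [('DLRM(B)', 'NCF'), ('DLRM(D)', 'DIN')]
```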
II. BACKGROUND

A. Neural Recommendation Models
Recommendation models aim to identify contents/items to recommend to a user based on prior interactions as well as the user's preferences. A well-known challenge with content recommendations is that users interact only with a subset of available contents and items. Take YouTube as an example, where any given user only watches a tiny subset of available video clips. Consequently, state-of-the-art DNN-based recommendation models combine both dense and sparse features for high accuracy. Here, dense features represent continuous inputs (e.g., user's age) whereas sparse features represent categorical inputs (e.g., a collection of movies a user has previously watched). Figure 1 shows a high-level overview of DNN-based recommendation models, which we detail below.
Model architecture overview. The dense features in recommendations are processed with a stack of dense (bottom) DNN layers (e.g., convolutions, recurrent layers, and MLPs). Categorical features, on the other hand, are encoded as multi-hot vectors where a "1" represents a positive interaction among the available contents/items. As the number of positive interactions among all possible items is extremely small (i.e., a small number of "1"s within the multi-hot vector), the multi-hot vectors are transformed into real-valued, dense vectors (called embeddings) by an embedding layer. Specifically, an array of embedding vectors is stored contiguously as a table, and a sparse index ID (designating the location of "1"s within the multi-hot vector) is used to read out a unique row from this table. Because the multi-hot vector is extremely sparse, reading out the embedding vectors (corresponding to each sparse index) from the table is equivalent to a sparse vector gather operation. The embedding vectors gathered from a given embedding table are reduced down into a single vector using element-wise additions. In general, embedding vector gather operations exhibit a highly memory-limited behavior as they perform highly sparse and irregular memory accesses. Another distinguishing aspect of embedding layers is their high memory capacity demands: the embedding tables can contain several millions of entries, amounting to tens to hundreds of GBs of memory usage.
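To make the embedding operation concrete, the following minimal NumPy sketch gathers one table row per sparse index and sum-pools the result; the table size, vector width, and lookup count are illustrative values only, not those of any production model.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_ROWS, EMB_DIM = 1_000_000, 64  # illustrative; real tables can be far larger
table = rng.standard_normal((NUM_ROWS, EMB_DIM), dtype=np.float32)

def embedding_lookup(table: np.ndarray, sparse_ids: np.ndarray) -> np.ndarray:
    """Gather one row per sparse index, then sum-pool into a single vector.

    The gather is a sparse, irregular access pattern over a large table,
    which is why this operator tends to be memory-bandwidth bound."""
    gathered = table[sparse_ids]   # (num_lookups, EMB_DIM) vector gather
    return gathered.sum(axis=0)    # element-wise reduction

# A multi-hot categorical feature with, e.g., 80 positive interactions.
pooled = embedding_lookup(table, rng.integers(0, NUM_ROWS, size=80))
```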
As there are multiple embedding tables, multiple reduced embeddings are generated by the embedding layer, the result of which goes through a feature interaction stage. One popular mechanism for feature interactions is a dot-product operation [3] between all input vectors (i.e., implemented as a batched GEMM). The feature interaction output is concatenated with the output of the bottom DNN layer, which is subsequently processed by the top DNN layers to calculate an event probability (e.g., the click-through-rates in advertisement banners) for recommending contents/items.
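The dot-product interaction above can be sketched as follows; this mirrors the pairwise-dot-product (batched GEMM) formulation of [3], though the exact interaction operator differs across models. Shapes are illustrative.

```python
import numpy as np

def dot_interaction(dense_out: np.ndarray, embeddings: np.ndarray) -> np.ndarray:
    """All pairwise dot products among feature vectors, as one GEMM.

    dense_out:  (EMB_DIM,) output of the bottom DNN stack
    embeddings: (num_tables, EMB_DIM), one pooled vector per table"""
    feats = np.vstack([dense_out[None, :], embeddings])  # (F, EMB_DIM)
    pairwise = feats @ feats.T                           # (F, F) dot products
    iu = np.triu_indices(len(feats), k=1)                # keep unique pairs
    # Concatenate with the dense output to form the top-DNN input.
    return np.concatenate([dense_out, pairwise[iu]])

top_input = dot_interaction(np.random.randn(64), np.random.randn(8, 64))
```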
State-of-the-art DNN-based recommendation models. While Figure 1 broadly captures the high-level architecture of DNN-based recommendations, state-of-the-art model architectures deployed in industry settings exhibit notable differences in terms of their key design parameters (colored in red in Figure 1). Table I summarizes our studied, industry-scale DNN-based recommendation models published by Google, Facebook, and Alibaba [1], [4], [5], [6], [7]. We further detail our evaluation methodology later in Section IV.
Multi-tenant inference server architecture. While providing fast response to end-users is vital for inference, achieving high server utility is also crucial for cost-effectively maintaining the consolidated datacenters. Co-locating multiple workers of an ML model on a single machine is an effective solution to improve server utility and throughput at the cost of aggravated latency. In our baseline CPU-based, multi-tenant inference server, a single worker (implemented as a Caffe2 worker) is allocated with a dedicated CPU core and its local caches, multiples of which share the last level cache (LLC) and the memory subsystem (Figure 2). Because the inference server is constantly receiving multiple service queries, having multiple workers helps leverage query-level parallelism to simultaneously execute multiple inference services, improving throughput. As inference is a highly latency-sensitive operation, existing ML frameworks are designed with an in-memory processing model assuming the entire working set of a given worker process is all captured inside DRAM (i.e., paging data in and out of disk swap space is a non-option). Therefore, having multiple workers be co-located within a single machine requires the CPU memory capacity to be large enough to fully accommodate the aggregate memory usage of all the concurrent workers [13], [16], [17].
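The one-worker-per-core arrangement of Figure 2 can be approximated with standard OS facilities. The sketch below is illustrative (it is not DeepRecInfra's actual implementation), and run_inference is a hypothetical stand-in for the model-specific inference call.

```python
import multiprocessing as mp
import os

def run_inference(query):
    """Hypothetical placeholder for the model-specific inference call."""
    pass

def worker_loop(core_id: int, queries: mp.Queue) -> None:
    # Pin this worker to a dedicated core: its local caches become
    # effectively private, while the LLC and DRAM remain shared.
    os.sched_setaffinity(0, {core_id})
    while (query := queries.get()) is not None:  # None is the stop sentinel
        run_inference(query)

def spawn_workers(num_workers: int, queries: mp.Queue) -> list:
    procs = [mp.Process(target=worker_loop, args=(core, queries))
             for core in range(num_workers)]
    for p in procs:
        p.start()
    return procs
```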
III. RELATED WORK
Improving server cost-efficiency via multi-tenancy has been studied extensively in prior literature [18], [19], [20], [21], [22], [23]. As co-located multi-tenant tasks contend for shared resources, prior work has focused on how to minimize interference and performance unpredictability. A common approach is to co-locate a user-facing, latency-critical task with best-effort workloads (e.g., batch jobs), prioritizing the latency-critical task with higher QoS to meet SLA. While effective in guaranteeing QoS for latency-critical tasks, such a solution suffers from low server utility in terms of the number of latency-critical tasks scheduled per server. Consequently, recent work [24], [25], [26] explored QoS-aware co-location of multiple latency-critical tasks. Unlike these prior works focusing on generic, latency-critical cloud services (e.g., Memcached, Sphinx, MongoDB), our work focuses on ML-based recommendation inference servers, presenting our unique, application-aware Hera architecture. More importantly, Hera develops a novel analytical model that quantifies co-location affinity among a given pair of recommendation models for intelligently selecting models to co-locate. More relevant to Hera is recent work by Gupta et al. [1], [13], which conducts a workload characterization on the effect of co-locating multiple workers from a single, homogeneous recommendation model (Figure 2). As we detail in Section V-B, co-locating workers from a single model severely limits the worker scalability for memory capacity or bandwidth limited recommendations. To the best of our knowledge, Hera is the first to quantitatively evaluate the effect of co-locating workers from both homogeneous as well as heterogeneous recommendation models, developing a cluster-wide heterogeneous model selection algorithm as well as a node-level QoS-aware resource partitioning algorithm for recommendation inference servers. While not directly related to recommendation inference, Choi et al. [27] and Ghodrati et al. [28] studied NPU-based multi-tenant inference for CNNs/RNNs/Attentions using temporal [27] and spatial [28] multi-tasking, respectively. There are also several recent works proposing near-data processing for accelerating recommendations [29], [30], [31], [32], [33]. In general, the key contributions of Hera are orthogonal to these prior works.
IV. METHODOLOGY
Hardware platform. We utilize a multi-node cluster containing one master node and five compute nodes, allowing a total of ten inference servers to be deployed (Table II). The intra-node resource partitioning and isolation are done using Linux's cpuset cgroups for allocating specific core IDs to a given model worker, and Intel's Cache Allocation Technology (CAT) for LLC partitioning. Software architecture. Hera's runtime manager is implemented using Facebook's open-sourced DeepRecInfra [13], a software framework for designing ML inference servers for at-scale neural recommendation models.
Query arrival rates. Prior work [13] reports that the query arrival rates of recommendation services in a production datacenter follow a Poisson distribution. Similarly, MLPerf's cloud inference suite [34] also employs a Poisson distribution in its inference query traffic generator. As such, our evaluation utilizes DeepRecInfra's inference query traffic generator, which issues requests to the inference server based on a Poisson distribution, the query arrival rate of which is configured as appropriate per our evaluation goals (detailed in Section VII).
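To make the traffic model concrete, the sketch below shows how such a Poisson query generator can be written. This is a minimal illustration of the sampling scheme described above, not DeepRecInfra's actual implementation, and the `enqueue_inference_request` hook in the example is hypothetical.

```python
import random

def poisson_arrivals(rate_qps: float, duration_s: float, seed: int = 0):
    """Yield query arrival timestamps from a Poisson process.

    Inter-arrival gaps of a Poisson process with rate `rate_qps` are
    exponentially distributed with mean 1/rate_qps seconds.
    """
    rng = random.Random(seed)
    t = 0.0
    while True:
        t += rng.expovariate(rate_qps)  # sample the next inter-arrival gap
        if t >= duration_s:
            return
        yield t

# Example: a 10-second trace at 500 QPS; each timestamp would be used to
# schedule one inference request against the server under test.
for arrival in poisson_arrivals(rate_qps=500, duration_s=10):
    pass  # enqueue_inference_request(arrival)  (hypothetical hook)
```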
Query working set size. The size of queries for recommendation inference decides the number of items to be ranked for a given user, which determines the request batch size. Prior work [13] observes that the working set sizes of recommendation queries follow a unique distribution with a heavy-tail effect. We utilize DeepRecInfra to properly reflect such a distribution, having the batch size of an inference range from 1 to 1024 [13].

Benchmarks. We study the eight recommendation models summarized in Table I: DLRM(A), DLRM(B), DLRM(C), DLRM(D), NCF, DIEN, DIN, and WnD.
A. Analysis on "Single" Model Worker
We start by characterizing a single worker's inference behavior without co-location, breaking down latency per major compute operator. As shown in Figure 3, the apparent operator diversity leads to different performance bottlenecks. For example, models such as DLRM(A,B,D) are significantly bottlenecked on the memory-intensive embedding layers, experiencing high cache miss rates and high memory bandwidth usage (Figure 4). Because embedding vector gather operations are conducted over a large embedding table, their memory access streams are extremely sparse and irregular with low data locality. Consequently, models with a large number of embedding tables and embedding lookups (DLRM(A,B)) or wide embedding vectors (DLRM(D)) show much higher memory intensity, as the majority of execution time is spent gathering embedding vectors. Such a property is in stark contrast to DLRM(C), NCF, DIEN, DIN, and WnD, which spend a significant amount of time on the computationally intensive FC and/or recurrent layers. Thanks to their (relatively) higher compute intensity and smaller working sets, these compute-intensive models exhibit better caching efficiency and lower memory bandwidth consumption.
B. Serving with "Multi"-Tenant Workers
Building on top of the characterization of single worker inference, we now study the efficacy of co-locating multiple workers from a single recommendation model. We assume a single multi-core CPU machine is utilized for servicing inference queries for the recommendation model, but the number of concurrently executing workers is scaled "up" (i.e., one worker per core, Figure 2) to characterize its effect on shared memory resources (Figure 5) as well as latency-bounded throughput, i.e., QPS (Figure 6). QPS is measured by quantifying the maximum input load the concurrent workers can process without violating SLA. Specifically, we start from a low input query arrival rate (i.e., queries arrived per second) and gradually inject higher request rates until the observed (95th percentile) tail latency starts violating the SLA target. The max load of a recommendation model is therefore defined as the maximum input query arrival rate its workers are able to sustain without SLA violation. As the number of workers is increased, we generally observe a gradual increase in LLC miss rate with a corresponding increase in memory bandwidth usage. This is expected, as more workers proportionally demand larger compute and memory usage. However, there are noticeable differences in the way different models react under a multi-tenant inference scenario.

First and foremost, memory "capacity" hungry models such as DLRM(B) are severely limited in their ability to spawn a large number of concurrent workers, as the memory usage of a single worker alone amounts to 25 GB (Table I). Consequently, the inference server suffers from an out-of-memory error beyond 8 workers (recall that ML inference servers employ a software stack implemented using an in-memory processing model, Section II-B), leaving an average 50% of CPU cores and on-chip caches idle. Such behavior is likely to be a significant concern to hyperscalers because recent literature frequently points to neural recommendation models having several tens to hundreds of GBs of memory usage [1], [36], [37], [40]. As such, co-locating a large number of workers for these memory capacity limited models is challenging under current ML serving architectures (Figure 6).

Second, embedding limited models with high memory "bandwidth" usage (DLRM(A,D)) exhibit an almost linear increase in memory bandwidth utilization with a large number of workers. This is because the LLC fails to capture the already meager data locality of embedding layers, frequently missing at the LLC and consuming high memory bandwidth. The performance of DLRM(D) in particular is completely bottlenecked on memory bandwidth, whose aggregate bandwidth usage saturates beyond 12 workers (Figure 5). As a result, the QPS improvement of DLRM(D) levels off around 12 workers, achieving only a further 4% throughput enhancement going from 12 to 16 workers. Therefore, scaling up the number of multi-tenant workers beyond 12 for DLRM(D) is highly sub-optimal from a throughput cost-efficiency perspective.

Lastly, the remaining five recommendation models with high compute intensity and a modest model size (Table I) leave plenty of memory bandwidth available even when the number of parallel workers is maximized. As shown in Figure 6, such headroom in memory bandwidth allows these compute-intensive recommendation models to enjoy a scalable increase in QPS with a large number of parallel workers.
Fig. 7: Results are normalized to the right-most configuration where we allow the workers to fully utilize the entire (11 ways) LLC. Each experiment assumes the maximum possible number of workers is spawned for execution (8 for DLRM(B) and 16 for the other models). Note that Intel's CAT prevents the allocation of zero LLC ways to any given process (i.e., bypassing the LLC is impossible), so at least a single LLC way must be allocated per model. We observed similar trends when measuring the sensitivity of QPS to LLC capacity over different numbers of workers, so we omit them for brevity.
Overall, we conclude that recommendation models with high memory capacity and/or bandwidth demands are severely limited in their worker scalability, i.e., the ability to leverage query-level parallelism in ML inference servers to scalably increase QPS via multi-tenant workers. In contrast, models that are relatively more compute-intensive with high cache sensitivity exhibit much better worker scalability. Such high worker scalability, however, is only guaranteed when each worker is provided with a "large enough" LLC capacity to sufficiently capture locality, as we further discuss below.
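As a concrete illustration of the max-load measurement used throughout this section, the following sketch sweeps the offered load until the p95 tail latency violates the SLA target. Here `run_trial` is a hypothetical harness callback that offers Poisson traffic at a given rate and returns the observed per-query latencies, and the 1.1x step size is an assumed knob rather than a value from the paper.

```python
import numpy as np

def measure_max_load(run_trial, sla_ms: float,
                     start_qps: float = 50.0, step: float = 1.1) -> float:
    """Latency-bounded throughput: the highest offered load (QPS) whose
    95th-percentile tail latency still meets the SLA target."""
    qps, max_load = start_qps, 0.0
    while True:
        latencies_ms = run_trial(qps)          # offer Poisson traffic at `qps`
        if np.percentile(latencies_ms, 95) > sla_ms:
            return max_load                    # previous rate was the max load
        max_load = qps
        qps *= step                            # gradually inject higher rates
```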
C. Sensitivity to LLC Capacity
To analyze the sensitivity of each model's worker scalability to LLC size, we utilize Intel's Cache Allocation Technology (CAT) [41] to limit the number of LLC ways allocated to the multi-tenant workers. Figure 7 plots the sustained QPS of each model architecture (y-axis) as we gradually limit the number of ways allocated to the executing workers (from right to left on the x-axis). Several important observations can be made from this experiment. First, the memory-limited DLRM(A,B,D) show high robustness to the available LLC capacity. For instance, DLRM(D) is able to achieve 90% of its maximum QPS despite having only a single LLC way allocated. These memory-limited models spend a significant fraction of their execution time on highly irregular embedding gather operations with low locality (Figure 3). Therefore, leveraging memory parallelism rather than locality (i.e., memory bandwidth rather than LLC capacity) is more crucial for these memory-limited models in achieving high sustained QPS. Second, models with high compute intensity (NCF, DIEN, DIN, WnD) exhibit high sensitivity to LLC capacity, underscoring the importance of sufficiently provisioning shared cache space to the multi-tenant workers. Interestingly, many of these cache-sensitive workloads are able to sustain reasonably high QPS despite being allocated a small LLC space. For instance, DIEN is capable of achieving more than 80% of its maximum QPS with only 2 of the 11 LLC ways (and similarly 2 ways and 5 ways for WnD and DIN, respectively).
VI. HERA DESIGN

A. Design Principles
Overview. Figure 8 provides an overview of Hera's two key components: 1) the cluster-level model selection unit (Section VI-B) and 2) the node-level shared resource management unit (Section VI-C). At the cluster level, Hera employs our analytical model that utilizes the heterogeneous memory needs of recommendation models to select the optimal pair of models to co-locate across all the server nodes. Once the models to co-locate are determined, Hera's node-level resource management unit sets up the proper compute/memory resource allocation strategy for the multi-tenant workers, which is dynamically tuned for maximum efficiency by closely monitoring each model's tail latency, QPS, and query arrival rates.
Key approach. Section V-B revealed that models with high memory capacity or bandwidth usage suffer from low worker scalability, leaving a significant amount of CPU cores and shared LLC underutilized. To address such inefficiency, Hera aims to deploy workers from a pair of low and high worker scalability models, as their complementary compute and memory usage characteristics (i.e., memory-intensive vs. compute-intensive execution for low vs. high worker scalability models, respectively) help maximize server utility while minimizing interference at shared resources. Figure 9 provides examples that highlight the importance of Hera's worker scalability aware, multi-tenant model selection algorithm, where the cache-sensitive NCF is co-located with (a) another cache-sensitive DIEN and (b) the memory capacity limited DLRM(B). As depicted, co-locating two cache-sensitive, high worker scalability models with similar resource requirements (NCF and DIEN) results in severe interference at the LLC, causing an average 20% throughput loss compared to each model's isolated execution. In contrast, consider the example in Figure 9(b) where workers with complementary compute/memory access patterns are co-located. Because of DLRM(B)'s memory capacity constraints, the model is not able to fully utilize the 16 on-chip cores, rendering the remaining CPU cores and LLC underutilized. Co-locating NCF with DLRM(B) therefore helps better utilize server resources, significantly improving aggregate throughput. Of course, the interference between DLRM(B) and NCF cannot be eliminated completely, so both models experience some throughput loss compared to their respective isolated executions. Nonetheless, the net benefit of enhanced server utility via intelligently co-locating low and high worker scalability models outweighs the deterioration in each model's throughput, leading to a significant system-wide QPS improvement.

(Fig. 9 legend: isolated execution denotes max load with 8 cores and the entire LLC; co-located execution denotes max load with the optimal LLC partition.)
As such, we seek to address the key research challenge of how to reliably estimate a given model's worker scalability and utilize that information to decide the optimal set of models to co-locate across the cluster. Once the models to co-locate are determined, another research question remains regarding how to efficiently allocate shared resources among multi-tenant workers to minimize interference and maximize QPS. We first elaborate on Hera's cluster-level model selection algorithm in Section VI-B, followed by a discussion of our node-level resource management policy in Section VI-C.
B. Cluster-level Multi-tenant Model Selection Unit
Profiling-based worker scalability estimation. Through our characterization in Section V-B, we observe that a given recommendation model's performance scalability (as a function of the number of concurrent workers) can be estimated reliably through profiling. Hera utilizes the slope of the performance scalability curve in Figure 6 to make a binary decision on whether the subject model has high worker scalability or not. For example, the memory capacity limited DLRM(B) and the memory bandwidth limited DLRM(D) are categorized as having low worker scalability because employing a large number of workers is either impossible (DLRM(B)) or simply unproductive beyond a certain threshold from a QPS perspective (DLRM(D)). The remaining recommendation models, on the other hand, are categorized as having high worker scalability, as they can fully utilize the CPU cores and on-chip caches with sustained high QPS. The profiled result in Figure 6 only needs to be collected once for a target server architecture, and the derivation of whether a model has high worker scalability is done entirely offline, having negligible impact on Hera's performance or memory usage, i.e., a static boolean variable per model designates its worker scalability.
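A minimal sketch of this binary classification is shown below, assuming the profiled scalability curve is available as parallel lists of worker counts and sustained QPS; the 5% tail-slope threshold is an assumed tuning knob, not a value given in the paper.

```python
def has_high_worker_scalability(workers, qps, oom_worker_limit=None,
                                slope_threshold=0.05) -> bool:
    """Classify a model's worker scalability from its profiled QPS curve.

    workers, qps     : profiled points of the scalability curve (Figure 6)
    oom_worker_limit : worker count at which the model runs out of memory
    """
    # Memory capacity limited models (e.g., DLRM(B), OOM beyond 8 workers)
    # cannot spawn workers on all cores: low worker scalability.
    if oom_worker_limit is not None and oom_worker_limit < max(workers):
        return False
    # Relative QPS gain per extra worker at the tail of the curve; a flat
    # tail (e.g., DLRM(D)'s ~4% gain going 12 -> 16) means bandwidth limited.
    gain = (qps[-1] - qps[-2]) / qps[-2]
    tail_slope = gain / (workers[-1] - workers[-2])
    return tail_slope >= slope_threshold
```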
Determining key sources of resource contention. By utilizing the recommendation model's (high/low) worker scalability information, Hera tries to co-locate a pair of (high, low) worker scalability models. This helps significantly reduce the model selection search space, as it avoids the unfruitful co-location of (high, high) worker scalability models. Nonetheless, different recommendation models naturally exhibit different compute and shared resource usage, so understanding a model's unique resource requirements helps better estimate the magnitude of shared resource contention, thereby narrowing down the model selection search space even further.

(Algorithm 1, Co-location Affinity. Step C, lines 14-16: estimation of system-level co-location affinity, CoAff_system = min(CoAff_LLC, CoAff_DRAM).)

Prior work [20], [24], [25] considers the shared LLC, memory, storage, and network bandwidth as the most important sources of resource contention under multi-tenancy. However, our key finding is that multi-tenant workers for recommendation inference rarely compete for storage and network bandwidth. As previously discussed in Section II-B, state-of-the-art ML frameworks for inference servers employ an in-memory processing model. Consequently, once the ML inference server is bootstrapped for deployment (e.g., initializing inference server-client processes, provisioning each worker's memory allocation needs inside DRAM, ...), the multi-tenant workers have little to no interaction with the storage system at inference time. This is not surprising given the latency-critical nature of inference and its need for high performance predictability. When it comes to the network stack, the ML inference server receives the client's service queries through the NIC over an HTTP/REST protocol [16], [17]. However, we observe less than an average 1.9 Gbps of network bandwidth usage across all our evaluations, far less than the available system-wide network bandwidth (typically on the order of several tens to hundreds of Gbps in at-scale datacenters). Consequently, Hera only considers the interference at the shared LLC and memory bandwidth, both for evaluating the candidate models for co-location and for the efficient resource allocation mechanism for those models. Compared to previous, generic QoS-aware resource partitioning mechanisms [20], [24] that consider interference at all of cache/memory/storage/network, Hera's application-awareness helps our proposal more agilely adapt to the dynamics of inference query arrival patterns, significantly improving upon the state-of-the-art (Section VII-A2).

Identifying co-location affinity for model selection. We develop an analytical model that estimates co-location affinity among a given pair of models. Hera's model selection unit utilizes co-location affinity to determine the best set of models to co-locate, choosing those with high co-location affinity, i.e., ones that can sustain high QPS while sharing the LLC/memory.

(Fig. 10: In (a), we show the estimated co-location affinity among all possible pairs of co-located models (the higher the better). To demonstrate how well our co-location affinity models the effect of shared resource interference, we show in (b) the measured aggregate QPS of co-located models normalized to the summation of the QPS achieved when each model is executed in isolation.)
Co-location affinity is derived by estimating how much QPS loss is expected for the co-located Model A and Model B due to interference at the shared LLC and memory bandwidth: the two most important sources of resource contention under multi-tenant ML inference (Algorithm 1). Similar to our definition of worker scalability, we employ a profiling-based approach to model the effect of shared resource contention on QPS. First, the profiled LLC sensitivity study in Figure 7 is utilized to derive the expected QPS when Model A and Model B each get an equal partition of the CPU cores for worker allocation. Each model is then given a partitioned slice of the shared LLC ways (CacheWay_A and CacheWay_B in lines 4-5). The profiled QPS of each model is then normalized to the QPS achieved when that model is given the entire LLC for execution, the result of which is averaged over the two co-located models to quantify the effect of LLC interference (line 6). By examining all possible combinations of LLC partitioning, we are able to derive the optimal LLC partitioning point that gives the highest aggregate QPS (lines 7-8). As Hera's resource management unit is capable of utilizing such information for optimal LLC partitioning (detailed in Section VI-C), we utilize this LLC partitioning point as the reference to quantify co-location affinity at the LLC. As such, the closer the CoAff_LLC value is to 1, the less the co-located models interfere with each other, and thus the higher their co-location affinity at the LLC.
Estimating co-location affinity for memory bandwidth sharing (CoAff_DRAM) by following the same measure as in deriving CoAff_LLC is challenging. This is because there is currently no practical way to manually partition and isolate each model's memory bandwidth usage (unlike the LLC, where Intel's Cache Allocation Technology [41] provides means to fine-tune LLC partitioning). We therefore employ the analytical model in line 13, which utilizes our profiled result in Figure 5(b) to measure co-location affinity with respect to memory bandwidth sharing. Here, MemBW_A and MemBW_B are the amounts of memory bandwidth consumed when each model is given half of the CPU cores and the entire LLC for isolated execution without co-location. By normalizing the sum of MemBW_A and MemBW_B to the available socket-level memory bandwidth (MemBW_system), we get an estimate of how much effective bandwidth each model will be able to utilize vs. an idealistic scenario without bandwidth interference due to co-location (line 13). The evaluated CoAff_DRAM can therefore provide guidance on how intrusive memory bandwidth sharing will be for the co-located models, quantifying co-location affinity at memory. For a conservative evaluation of co-location affinity that considers interference at both LLC and memory, we choose the lower value among CoAff_LLC and CoAff_DRAM (line 16).

Figure 10(a) plots the derived co-location affinity for all possible model pairs we study. To visualize how well our co-location affinity captures the interference and its effect on QPS, we also show in Figure 10(b) the measured aggregate QPS of co-located models normalized to the theoretical maximum QPS achievable when each model is executed in isolation. As shown, Figure 10 clearly demonstrates the strong correlation between our estimated co-location affinity and the measured QPS (Pearson correlation coefficient: 0.95). It is worth pointing out that the parameters used to derive Algorithm 1 are all statically determined. Therefore, the derivation of co-location affinity for all possible model pairs is done offline, and the results are stored as a lookup table inside a two-dimensional array (indexed using Model A and Model B's identifiers, Figure 10(a)). This table is then utilized by the central master node with global cluster visibility to determine model pairs to co-locate in inference servers to achieve a cluster-wide target QPS. Algorithm 2 is the pseudo-code of Hera's cluster scheduling algorithm, where we start by examining the low worker scalability models for deployment, checking the co-location affinity table to find the best candidate for co-location (line 6). Specifically, the scheduler prioritizes models with high worker scalability as prime candidates for co-location with low worker scalability models. When the low worker scalability models are all deployed, the remaining models are each allocated a dedicated server in isolation but with the maximum number of workers spawned to maximize QPS (line 17).
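The following sketch puts Algorithm 1 into code, under the assumption that `qps_a[w]`/`qps_b[w]` hold each model's profiled QPS with `w` LLC ways (and half the cores), already normalized to its full-LLC QPS, and that the `membw_*` values are the profiled bandwidth numbers from Figure 5(b).

```python
def colocation_affinity(qps_a, qps_b, membw_a, membw_b, membw_sys,
                        total_ways=11):
    """Sketch of Algorithm 1 (co-location affinity) for one model pair."""
    # Step A: best achievable average normalized QPS over all LLC splits.
    coaff_llc, best_split = 0.0, None
    for ways_a in range(1, total_ways):        # CAT requires >= 1 way each
        ways_b = total_ways - ways_a
        avg = (qps_a[ways_a] + qps_b[ways_b]) / 2.0
        if avg > coaff_llc:
            coaff_llc, best_split = avg, (ways_a, ways_b)

    # Step B: fraction of the demanded bandwidth the socket can supply.
    coaff_dram = min(1.0, membw_sys / (membw_a + membw_b))

    # Step C: conservative system-level affinity.
    return min(coaff_llc, coaff_dram), best_split
```

The returned affinity value and the corresponding LLC split can be stored directly in the offline lookup table indexed by the two models' identifiers.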
(Algorithm 3, Step A (Monitor phase), lines 3-5: monitor tail latency, QPS, and query traffic rate for T_monitor.)

C. Node-level Resource Management Unit

Once the model pair to co-locate is chosen for an inference server, Hera goes through the server initialization procedure for bootstrapping, followed by an iterative monitor-and-adjust process that periodically fine-tunes the resource allocation policy per inference query traffic (Figure 8).
Initialization. Server bootstrapping is done by evenly partitioning the CPU cores and shared LLC among the co-located workers (i.e., one core per worker, and all workers of a single model allocated half the LLC), properly allocating the required data structures in memory. If one of the co-located models does not have enough worker scalability to fully utilize its allocated cores (e.g., the memory capacity limited DLRM(B)), the other model utilizes those idle cores to spawn more workers.
Monitor. After server initialization, Hera's resource management unit (RMU) periodically monitors tail latency, QPS, and the service query traffic rate for a period of T_monitor to evaluate the effectiveness of the current resource allocations (line 4 in Algorithm 3). The RMU checks whether the current core/LLC allocation is appropriate by calculating the SLA slack, i.e., the ratio between the measured latency and the model's SLA target (line 7). If the tail latency is larger than the SLA, the RMU deems the current resource allocation under-provisioned and the likelihood of violating SLA too high. Conversely, if the tail latency is smaller than the SLA, the possibility of SLA violation is relatively low; however, too large an SLA slack beyond a predefined threshold (80% of SLA in our default setting) implies that the current resource allocation is unnecessarily over-provisioned. Therefore, when either of these conditions is met for a given model (line 8), the RMU calls the functions adjust_workers() and adjust_LLC_partition() to properly upsize/downsize the allocated cores and LLC, making sure each model is given a sufficient amount of resources to meet SLA (lines 9 and 13).
Adjusting model workers. At-scale datacenters receive dynamically fluctuating query arrival patterns following a Poisson distribution (Section IV). SLA violations occur when the rate at which inference queries arrive at the server is overwhelmingly high for the currently available workers. Therefore, the ability to dynamically provision a proportional number of workers and LLC per query arrival rate is crucial for appropriately handling query-level parallelism. Now recall that our characterization of worker scalability (the left axis in Figure 6) provides a proxy for how much sustained QPS can be achieved with a given number of workers. The RMU therefore utilizes the performance scalability curve in Figure 6 (the same data structure used in Section VI-B for deriving models' worker scalability) to find the minimum number of workers that can achieve sustainable QPS for the current input query arrival rate, traffic_query (find_number_of_workers() in line 24), the result of which is used to adjust the number of workers for the next monitoring phase. Given the reactive nature of Algorithm 3 (i.e., allocating additional workers occurs after an SLA violation is observed), we additionally define the urgency of a model's queries, as defined in lines 19-21, to make sure Hera can adequately handle high spikes in query arrival rates. We define urgency as the ratio between the measured tail latency and the SLA target, so the higher the urgency, the more likely the inference server has delinquent service queries yet to be serviced inside the server request queue, leading to an unusually high tail latency. By artificially scaling up the observed query traffic rate with its urgency, the RMU helps provision enough workers for urgent models, enabling agile responsiveness. Of course, such over-provisioning might lead to an unnecessarily large SLA slack after adjustment, but such a case is gracefully handled by the RMU, which properly downsizes the workers during the next monitor-and-adjust phase.
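A sketch of this monitor-and-adjust step for one co-located model is shown below; `qps_curve` is the profiled worker-count-to-QPS table from Figure 6, and the 0.8 slack threshold matches the paper's default over-provisioning setting. This is an illustration of the described logic, not the paper's actual implementation.

```python
def adjust_workers(current_workers: int, tail_ms: float, sla_ms: float,
                   traffic_qps: float, qps_curve: dict,
                   overprovision_slack: float = 0.8) -> int:
    """One RMU adjustment for a model (sketch).

    qps_curve maps worker count -> sustainable QPS (profiled offline).
    """
    urgency = tail_ms / sla_ms                 # >1 means SLA is being violated
    if overprovision_slack * sla_ms <= tail_ms <= sla_ms:
        return current_workers                 # neither under- nor over-provisioned
    # Scale observed traffic by urgency so delinquent queued requests get
    # over-provisioned workers; the next phase gracefully downsizes them.
    target_qps = traffic_qps * max(1.0, urgency)
    for workers, sustainable_qps in sorted(qps_curve.items()):
        if sustainable_qps >= target_qps:      # find_number_of_workers()
            return workers
    return max(qps_curve)                      # saturate at profiled maximum
```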
Adjusting LLC partitions. When the number of workers for each co-located model has changed, the RMU also adjusts the partitioned LLC ways as appropriate using adjust_LLC_partition(). Similar to how the optimal LLC way partitioning design point in Algorithm 1 was derived, we employ a profiling-based strategy. Specifically, a one-time, offline profiling of QPS for all the models over all possible combinations of (number of workers, number of LLC ways) is conducted, which is used to populate the (three-dimensional) lookup table utilized in line 33 of Algorithm 3. Whenever adjust_LLC_partition() is called, the RMU utilizes this lookup table to re-evaluate the number of LLC ways to allocate under the renewed (upsized/downsized) number of workers allocated to each model such that the aggregate QPS is maximized. The overhead of generating this lookup table is amortized over all future deployments on a target server architecture, and the memory required to store this data structure is less than 2 KB, having negligible impact on performance and memory usage.
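A matching sketch of adjust_LLC_partition() is given below, assuming `qps_table[(model, workers, ways)]` is the one-time offline profile described above.

```python
def adjust_llc_partition(qps_table, workers_a: int, workers_b: int,
                         total_ways: int = 11):
    """Pick the LLC split maximizing aggregate QPS for the renewed worker
    counts (sketch). Intel CAT requires at least one way per model."""
    best = (1, total_ways - 1)
    best_qps = -1.0
    for ways_a in range(1, total_ways):
        agg = (qps_table[("A", workers_a, ways_a)] +
               qps_table[("B", workers_b, total_ways - ways_a)])
        if agg > best_qps:
            best_qps, best = agg, (ways_a, total_ways - ways_a)
    return best
```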
VII. EVALUATION
Our evaluation takes a bottom-up approach, first focusing on intra-node evaluation (Section VII-A, Section VII-B), followed by our cluster-wide analysis (Section VII-C). Following prior work [24], we first evaluate scenarios where the multi-tenant workers run at constant loads (Section VII-A) and later explore fluctuating loads (Section VII-B).
A. Constant Load

1) Effectiveness of Hera Model Selection Unit: We establish two baseline model selection algorithms and two Hera-based design points as follows. The baseline designs are: 1) the state-of-the-art DeepRecSys [13], which co-locates multiple workers from a single, homogeneous model (Section III/Section V-B), and 2) randomly choosing any given pair of heterogeneous models to co-locate without any restriction (Random). The two Hera design points are 3) Hera(Random) and 4) Hera, where both utilize the worker scalability of the candidate models to avoid the undesirable co-location of (high, high) worker scalability models. However, Hera(Random) randomly chooses any one of the possible model pairs (excluding (high, high) model pairs), whereas Hera utilizes our estimated co-location affinity (Figure 10) to choose model pairs that provide the highest machine utilization. To rule out the effect of the resource management policy in our evaluation, all four design points, including Hera, employ our proposed resource management algorithm in Section VI-C.
We measure Effective Machine Utilization (EMU), a metric used in related prior work [20], [24], [25] that is defined as the maximum aggregate load of all co-located applications, where each application's load is expressed as a percentage of its max load when executed in isolation with all server resources (Section V-B discusses how max load is measured for each model; see Figure 9). Note that EMU can be above 100% by better bin-packing shared resources among co-located models. Figure 11 shows the distribution of EMU over all pairs of co-located models chosen by each model selection algorithm. Because DeepRecSys does not co-locate workers from multiple models, its QPS is always identical to the max load of isolated execution, thus having an EMU of 100%. As for Random, we show the EMU distribution for all possible combinations of model pairs among the eight models we study, achieving 82 to 147% EMU. Although Random can improve EMU when opportunistically co-locating low worker scalability models with other models, it fails to rule out the co-location of (high, high) scalability pairs with low co-location affinity, resulting in a worst-case 18% EMU loss.
The two Hera design points, on the other hand, utilize worker scalability to successfully rule out model pairs with low co-location affinity, guaranteeing that EMU never falls below 100% and achieving substantial EMU improvements over DeepRecSys and Random. However, the key difference between Hera(Random) and Hera is the following. When the cluster-level scheduler selects model pairs to co-locate, Hera(Random) makes a random selection from any model combination except (high, high) worker scalability model pairs. In contrast, Hera utilizes the estimated co-location affinity to judiciously choose model pairs with the highest EMU, leading to significant EMU improvements vs. the other three data points. Overall, Hera achieves 37.3%, 34.7%, and 5.4% average EMU improvements over DeepRecSys, Random, and Hera(Random), respectively.
2) Effectiveness of Hera Resource Manager: PARTIES [24] is a QoS-aware intra-node resource manager targeting generic, latency-critical cloud services. To clearly demonstrate the novelty of Hera's application-aware resource manager, we implement PARTIES on top of Hera's model selection algorithm and compare its EMU against Hera's. Across all evaluated scenarios, Hera achieves an average 12% (maximum 55%) EMU improvement over PARTIES. Due to space constraints, we show in Figure 12 a subset of our evaluation, visualizing the max load of the low worker scalability DLRM(D) (x-axis) and the other co-located model (y-axis). Hera generally achieves higher sustained max load than PARTIES for the majority of design points when gradually injecting each model with loads from 40% to 100% of their respective max load. The reason behind Hera's superior performance is twofold. First, Hera uses our profile-based characterization to determine a good initial starting point within the search space when finding the optimal core/LLC resource allocation scheme. Second, while PARTIES needs to carefully fine-tune all shared resources in a system (i.e., in addition to core/LLC, the disk and network bandwidth are also carefully monitored and adjusted by PARTIES), our application-aware Hera leverages the unique properties of ML inference servers (e.g., in-memory processing, Section VI-B) to narrow down the resource management targets, thus enabling rapid determination of an optimal resource management strategy that best suits the recommendation models' heterogeneous memory needs. In Figure 13, we show a snapshot of PARTIES' vs. Hera's resource allocation when the low worker scalability DLRM(D) is co-located with the high worker scalability NCF and DIN. Given their cache-sensitive nature (Figure 7), NCF and DIN require sufficient LLC capacity to sustain a high max load. While PARTIES is able to eventually allocate enough workers for NCF, the amount of LLC it provisions for this cache-sensitive workload is much lower than under Hera, failing to achieve a high max load.
B. Fluctuating Load
This section evaluates the robustness of Hera's resource manager vs. PARTIES in handling dynamically changing query arrival rates. At-scale datacenter applications frequently experience fluctuations in their load (e.g., sudden bursts of high traffic, diurnal patterns where load is high at daytime while gradually decreasing during the night). To simulate such a scenario, we employ the measures proposed by Chen et al. [24], where we co-locate models with heterogeneous resource requirements, e.g., DLRM(D) and NCF, and vary the query arrival rate as illustrated in Figure 14: the load to both DLRM(D) and NCF is gradually increased until T1, which is when NCF experiences a sudden decrease in its load. At T2, the query arrival rate to NCF suddenly spikes from 20%→60% of its max load while DLRM(D) sees a sudden drop from 70%→10% (Table II).

Figure 14 clearly shows that Hera does a much better job of maintaining tail latency below SLA, unlike PARTIES, which frequently exhibits sudden spikes of SLA violations throughout its execution. Our analysis revealed that while PARTIES is capable of eventually figuring out a decent resource allocation strategy, its decisions are based on a constant upsize/downsize feedback loop that monitors various shared resources within the system. Because Hera utilizes the profile-based characterization lookup table (Algorithm 3), it is able to rapidly determine the optimal resource allocation even at the sudden load changes at T1 and T2, enabling robust execution under fluctuating query arrival rates.
C. Cluster-wide Server Utilization
In this section, we analyze Hera's effectiveness in reducing the number of servers required to fulfill a cluster-wide target QPS goal. Because the set of models deployed by hyperscalers as well as each model's expected service demand (i.e., query arrival rate) are proprietary information not publicly available, we take the following measure for our evaluation.
Even distribution of target QPS among models. We first assume that the eight models in Table I have an identical level of target QPS (Figure 15). We then utilize the four model selection algorithms to measure the total number of servers required to satisfy this target QPS. When the aggregate target QPS across all the models exceeds the max load serviceable by our multi-node cluster, we run separate rounds of experiments in an iterative manner to quantify how many additional servers are needed to service the remaining QPS.
The baseline DeepRecSys allocates a single server to service a single model, so low worker scalability models like DLRM(B,D) require a larger number of servers to reach the target QPS. As for Random, the cluster scheduler randomly selects a given pair of models for multi-tenant deployment until the target QPS is achieved. Because Random is capable of better utilizing the compute nodes servicing low worker scalability models like DLRM(B,D), it is able to reduce the number of servers by an average of 15% compared to DeepRecSys. However, Random is not able to properly isolate the interference at shared resources, so the unproductive co-location of (high, high) worker scalability models suffers from low efficiency. Both Hera(Random) and Hera are worker scalability aware and can potentially avoid deploying model pairs with low co-location affinity (e.g., NCF vs. DIEN/DIN/WnD). Nonetheless, Hera does a much better job of reducing the number of servers vs. Hera(Random), as it effectively utilizes our estimated co-location affinity to minimize unfruitful model co-location pairs. For instance, Hera noticeably reduces the number of servers utilized for deploying DIEN thanks to its ability to better bin-pack the shared resources among co-located models, achieving an average 26% and 11% reduction in total required servers compared to DeepRecSys and Random, respectively.

Skewed distribution of target QPS among models. Figure 16 shows the number of servers needed when the per-model target QPS exhibits a skewed distribution. Unless the queries are all requested to low worker scalability models or all to high scalability models (which is unlikely to be the common case in real-world settings), Hera is able to noticeably reduce the number of servers required to reach a target QPS.
D. Sensitivity
Effect of Hera co-location and LLC partitioning. Hera consists of two key components: 1) an affinity-aware co-location algorithm, and 2) application-aware LLC partitioning via Intel CAT. To clearly quantify where Hera's main benefits come from, we isolate the impact of our two main proposals in Figure 17(a) as an ablation study. Even without LLC partitioning, Hera's co-location algorithm alone provides a 22% EMU improvement vs. the baseline DeepRecSys, with a further 8% improvement from LLC partitioning.

Different system configurations. We show a subset of our sensitivity study on different system configurations in Figure 17(b), illustrating the EMU improvement when the underlying system configuration is changed with different numbers of CPU cores, LLC ways, and memory bandwidth (GB/sec). As depicted, the benefits of Hera remain intact across diverse system platform configurations, achieving 30%/35%/18% EMU improvements for the shown configurations.
E. Design Overhead
Profiling/deployment cost. Hera is implemented as a profiling-based, feedback-driven scheduler, and the main design overhead comes from profiling the following two lookup tables: (a) QPS as a function of the number of parallel workers (Figure 6) and (b) QPS as a function of LLC ways allocated (Figure 7). The profiling times taken to generate these two tables (T_worker and T_LLC, respectively) are:

T_worker = O(number of CPU cores)
T_LLC = O(number of LLC ways × number of CPU cores)

Using our baseline system (Table II), T_worker and T_LLC take less than 1 minute and 15 minutes per model, respectively. As each data point can be collected completely independently, profiling and generating Figure 6 and Figure 7 across hundreds of models can easily be done within tens of minutes, assuming thousands of compute nodes are available (which is readily accessible in cloud datacenters). Utilizing these two lookup tables as 2D software arrays, generating the co-location affinity matrix in Figure 10(a) (Algorithm 1) even for hundreds of models takes less than one second using a single CPU core.
Deploying Hera across thousands of servers. Each inference server is deployed individually per-node and the execution of Hera's cluster-wide scheduling algorithm (Algorithm 2) incurs less than 100 ms of latency, enabling scalable deployment in at-scale datacenters.
VIII. CONCLUSION
Hera is an application-aware co-location algorithm and resource management software for multi-tenant recommendation inference. We first conduct a characterization of multi-tenant recommendations, uncovering their heterogeneous memory capacity and bandwidth demands. We then utilize this property to develop a profiling-based, feedback-driven Hera runtime system that dynamically adapts resource allocation among multi-tenant workers, balancing latency and throughput. Compared to the state-of-the-art, Hera significantly improves effective machine utilization while guaranteeing SLA.
A Multilayer-Based Framework for Online Background Subtraction with Freely Moving Cameras
The exponentially increasing use of moving platforms for video capture introduces the urgent need to develop general background subtraction algorithms with the capability to deal with a moving background. In this paper, we propose a multilayer-based framework for online background subtraction for videos captured by moving cameras. Unlike previous treatments of the problem, the proposed method is not restricted to a binary segmentation of background and foreground, but formulates the task as a multi-label segmentation problem by modeling multiple foreground objects in different layers when they appear simultaneously in the scene. We assign an independent processing layer to each foreground object, as well as the background, where both motion and appearance models are estimated and a probability map is inferred using a Bayesian filtering framework. Finally, multi-label Graph-cut on a Markov Random Field is employed to perform pixel-wise labeling. Extensive evaluation results show that the proposed method outperforms state-of-the-art methods on challenging video sequences.
Introduction
The identification of regions of interest is typically the critical preprocessing step for various high-level computer vision applications, including event detection, video surveillance, human motion analysis, etc. Background subtraction is a widely used technique to perform pixel-wise segmentation of foreground regions out of background scenes. Unlike foreground object detection algorithms, background subtraction methods typically produce much more accurate segmentations of foreground regions rather than mere detection bounding boxes, without the need to train individual object detectors. A great number of traditional background subtraction methods and algorithms have been proposed [34,8,21,32,12,28]. Most of these methods focused on modeling the background under the assumption that the camera is stationary. However, more and more videos are captured from moving platforms, such as camera phones and cameras mounted on ground vehicles, robots, aerial drones, etc. Traditional background subtraction algorithms are no longer applicable to such videos captured from a non-stationary platform [7]. The increasing use of moving platforms for video capture introduces a high demand for general background subtraction algorithms that are not only as effective as traditional background subtraction but also applicable to moving-camera videos.
Similar to most video segmentation methods, a few works [26,5] resort to processing the whole video offline. Offline methods can typically produce good results on short sequences, since the information in later frames can significantly benefit the segmentation of earlier frames. However, since they need to store and process the information over the whole video, the memory and computational cost grow rapidly as the number of frames to process increases [7]. Additionally, in various scenarios, such as video surveillance and security monitoring, videos need to be analyzed as they are being streamed in real time, where an efficient online background subtraction method is greatly in demand.
The key to handling long sequences in an online manner is to learn and maintain models for the background and foreground layers. Such models accumulate and update evidence over a large number of frames and also supply a valuable knowledge foundation for high-level vision tasks. Recently, a few online background subtraction methods for moving cameras have been proposed [31,20,9,22,41]. Most methods formulate the task as a binary segmentation problem under the assumption of only one foreground object, naturally resulting in poor segmentation when multiple moving objects appear in the scene. Especially in the case where objects cross each other, motion estimation for the objects suffers great confusion, which further degrades the performance of background subtraction.
To remedy this drawback, we propose a general multilayer-based framework with the capability of handling multiple foreground objects in the scene. The objects are automatically detected based on motion inconsistency, and an independent processing layer is assigned to every foreground object as well as the background. In each layer, the same process is performed concurrently inside the "processing block", which takes the accumulated information and the new evidence of each layer as input, and outputs a probability map indicating the confidence of pixels belonging to that layer. In this paper, we elaborately design such a "processing block" with three steps as follows. (a) Motion model estimation is first performed based on Gaussian Belief Propagation [1] with the motion vectors of corresponding trajectories as the evidence. (b) The appearance model and the prior probability are predicted by propagating the previous ones with the estimated motion model. (c) Given the current frame as new evidence, Kernel Density Estimation [8] is employed to infer the probability map as the output. Finally, based on the collection of probability maps produced by each "processing block", the pixel-wise segmentation of the current frame is generated by multi-label Graph-cut.
Besides, since, to the best of our knowledge, this is the first work to tackle the multi-label background subtraction problem in moving-camera scenarios, we also design a methodology to evaluate the performance and show that our method outperforms the state-of-the-art methods.
Related Work
Motion Estimation and Compensation: A freely moving camera introduces movement in the projected background scene, and thus complicates the background subtraction problem. An intuitive idea to tackle such a problem is to compensate for the camera motion. A few pioneering works resort to estimating a homography [13] that characterizes the geometric transformation of the background scene between consecutive frames. Typically, RANSAC [11] and its variant MLESAC [36] are employed to achieve robust estimation using many matches of feature points. Jin et al. [15] model the scene as a set of planar regions, where each background pixel is assumed to belong to one of these regions. Homographies are used to rectify each region to its corresponding planar representation in the model. Zamalieva et al. [41] leverage geometric information to develop multiple transformation models, and choose the one that best describes the relation between consecutive frames.
Recently, motion estimation has been widely employed to comprehensively specify the motion of every pixel [25,2,20,22]. The works [20,22] used optical flow as the evidence. Kwak et al. [20] divided the images into small blocks in a grid pattern, and employed nonparametric belief propagation to estimate the motion field based on the average optical flow of each block. Its follow-up work [22] improved the quality of motion estimation by replacing blocks with superpixels as the model unit. On the other hand, in [25,2], optical flow orientations were claimed to be independent of object depth in the scene, and used to cluster pixels that have similar real-world motion, irrespective of their depth in the scene. However, the high dependency on optical flow makes these methods susceptible to noise in the optical flow estimation. In contrast, our method improves motion model estimation by employing Gaussian Belief Propagation [1] with the motion vectors of sparse feature points as more robust evidence.
Appearance Modeling: Traditionally, statistical representations of the background scene have been proposed to estimate spatially extendable background models. Hayman et al. [14] built a mixture-of-Gaussians mosaic background model. Ren et al. [29] used motion compensation to predict the position of each pixel in a background map, and modeled the uncertainty of that prediction by a spatial Gaussian distribution. The construction of an image mosaic associated with a traditional mixture-of-Gaussians background model was also claimed to be effective in [23,30]. However, the hyper-parameters required by such parametric models restrict their adaptability and application. On the contrary, we employ the nonparametric Kernel Density Estimation method [8] to build models of the appearance of foreground and background regions, making our approach more stable and applicable.
Layered Representation: The layered representation, referring to approaches that model the scene as a set of moving layers, has been used for foreground detection [27,16] and motion segmentation [39,37,19]. In [27], the background was modeled as the union of nonparametric layer models to facilitate detecting the foreground under static or dynamic backgrounds. Kim et al. [16] proposed a layered background model where a long-term background model is used besides multiple short-term background models. Wang et al. [39] used an iterative method to achieve layered motion segmentation. Torr et al. [37] modeled the layers as planes in 3D and integrated priors in a Bayesian framework. The work in [19] models spatial continuity while representing each layer as composed of a set of segments. A common theme of these layered models is the assumption that the video is available beforehand [7]. Such an assumption prevents the use of these approaches for processing videos from streaming sources. Some dynamic texture methods [3,4,24] also employed the layered model to tackle complex dynamic backgrounds, but with stationary cameras. To the best of our knowledge, the proposed method is the first layered model applied in moving-camera scenarios.
The organization of this paper is as follows. An overview of our proposed framework is presented in Section 3. Section 4 describes the trajectory labeling. The components inside the "processing block" are described in Sections 5 and 6, and the final pixel-wise multi-labeling is presented in Section 7. Finally, Section 8 presents quantitative and qualitative experimental results based on two criteria.
Framework Overview
The proposed framework is demonstrated in Figure 1. First, we employ the feature point tracking method presented in [35] to generate feature trajectories. Then the generated trajectories are clustered into different layers based on motion inconsistency, and the labels of trajectories are continuously propagated over frames using dynamic label propagation [42]. Each cluster of trajectories is assigned to the corresponding layer, and the number of layers is adapted according to the number of foreground objects appearing in each frame.
Each layer possesses an independent "processing block" that produces a posterior probability map. Let $k$ and $t$ denote the index of the layer and the frame, respectively. Inside the "processing block", we have two sources of input: the appearance model $A^k_{t-1}$ and the prior probability map of labels $P(L^k_{t-1})$ produced from the previous frame, and a group of corresponding sparse trajectories $P^k_t$. The first step is the inference of the motion model in each layer using Gaussian Belief Propagation with the motion vectors of corresponding trajectories as the evidence. Then, the new appearance model $A^k_t$ is obtained by shifting the previous appearance model $A^k_{t-1}$ based on the estimated motion model $M^k_t$, and the new prior probability map $P_{prior}(L^k_t \mid P^k_t)$ can be inferred from the previous one in the same way.
Given the current frame $I_t$ as the new observation, the likelihood $P(I_t \mid L^k_t)$ is estimated by Kernel Density Estimation (KDE) [8] with the propagated appearance model in every layer. The posterior probability map for each layer is then inferred as
$$P_{post}(L^k_t \mid I_t) = \frac{1}{Z}\, P(I_t \mid L^k_t)\, P_{prior}(L^k_t \mid P^k_t),$$
where $Z$ is the partition function. With the collection of posterior probability maps from each layer, we achieve the final pixel-wise labeling by optimizing a cut on a multi-label graph with minimal cut energy. At the end of the whole process, the appearance models are updated with the current frame and labels. In the following sections, we describe the processing steps in detail.
Trajectory Labeling and Propagation
The feature point tracking method [35] we employ has achieved good performance in feature trajectory generation. To cluster trajectories, several motion segmentation methods [26,40] provide good solutions, but may fail when the video does not meet the assumption of the affine camera model. To get rid of such an assumption, an online method proposed by Elqursh et al. [10] considered sparse trajectory clustering as a problem of manifold separation and dynamically propagated labels over frames. Inspired by [10], we first cluster the trajectories in the initialization frames, and then continuously propagate the labels of trajectories over frames using dynamic label propagation [42]. We briefly describe the algorithm here.
Trajectory Clustering
Given $n$ trajectories, two distance matrices $D^t_M$ and $D^t_S$ are defined to represent the difference between trajectories in motion and spatial location, whose entries are the distances between the $i$-th and $j$-th trajectories over frames up to $t$ (for the detailed definition, please refer to [26]). The affinity matrix over the $n$ trajectories is then formulated by combining the two distance matrices, where $\lambda$ is the parameter that balances the two distances.
Considering each trajectory as a node and the affinity matrix as the edge weights, we cast trajectory clustering as a graph cut problem. Starting from the initial cluster that contains all trajectories, normalized cuts [33] are employed to perform an optimal binary cut on the initial cluster, and again on the generated clusters. This recursive process continues until the evaluated normalized cut cost on a cluster is above the threshold ($10^{-4}$ in our work), which indicates that this cluster of trajectories belongs to the same component (i.e., an object or the background) and needs no further splitting. All trajectories are assigned labels according to the cluster they belong to.
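The recursive bipartitioning can be sketched as follows using the standard spectral relaxation of the normalized cut; thresholding the second eigenvector at zero is one common heuristic (the paper does not specify this detail), and the cost formula is the usual two-way Ncut objective.

```python
import numpy as np

def ncut_bipartition(W):
    """Spectral two-way normalized cut on affinity matrix W (sketch)."""
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    L_sym = np.eye(len(W)) - (d_isqrt[:, None] * W * d_isqrt[None, :])
    _, vecs = np.linalg.eigh(L_sym)            # eigenvalues ascending
    part = (d_isqrt * vecs[:, 1]) > 0          # generalized 2nd eigenvector
    cut = W[np.ix_(part, ~part)].sum()
    # Ncut cost: cut(A,B)/assoc(A,V) + cut(A,B)/assoc(B,V)
    cost = cut / max(d[part].sum(), 1e-12) + cut / max(d[~part].sum(), 1e-12)
    return part, cost

def recursive_cluster(W, idx, threshold=1e-4):
    """Keep bipartitioning while the Ncut cost stays below the threshold."""
    if len(idx) < 2:
        return [idx]
    part, cost = ncut_bipartition(W)
    if cost > threshold or part.all() or not part.any():
        return [idx]                            # coherent component: stop
    return (recursive_cluster(W[np.ix_(part, part)], idx[part], threshold) +
            recursive_cluster(W[np.ix_(~part, ~part)], idx[~part], threshold))

# Example usage: clusters = recursive_cluster(W, np.arange(W.shape[0]))
```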
Label Propagation
With the labels of trajectories in the initial frames, label propagation, as a semi-supervised learning method, is adopted to infer the labels of trajectories in subsequent frames. We first construct a graph $G$ with the trajectories in the previous and current frames as the labeled and unlabeled nodes, respectively. The affinity matrix $A$ involving the labeled and unlabeled trajectories is calculated and used as edge weights between corresponding nodes. Let $Y_l$ and $Y_u$ denote the labeling probability matrices corresponding to labeled and unlabeled nodes, respectively. Each row of $Y_l$ is a one-hot vector with one at the location corresponding to the label of the node and zero otherwise. To estimate the labeling probability matrix $Y_u$, we apply Markov random walks on the graph [43]. A transition matrix $P$ is defined by row-normalizing the affinity matrix, $p_{ij} = a_{ij} / \sum_k a_{ik}$, where the entry $p_{ij}$ is the transition probability from the $i$-th node to the $j$-th node. Given the partition of nodes into labeled and unlabeled nodes, the matrix $P$ can be split into 4 blocks:
$$P = \begin{bmatrix} P_{ll} & P_{lu} \\ P_{ul} & P_{uu} \end{bmatrix}.$$
The closed-form solution of the labeling probability matrix of the unlabeled nodes is $Y_u = (I - P_{uu})^{-1} P_{ul} Y_l$. The label for node $i$ can be obtained by $l_i = \arg\max_j y_{ij}$, where $y_{ij}$ is the entry in the row of $Y_u$ corresponding to node $i$. It is worth noting that the label propagation algorithm can only assign trajectories to known labels. However, when new objects move into the scene, new labels should be introduced in time. To accomplish this, after the labels are predicted in each frame, a normalized cut cost is evaluated in each cluster. A small cost indicates a great intra-cluster variation inside the cluster. If the cost is below the threshold ($10^{-4}$ in our work), the cluster should be further split, and a new label is assigned to the sub-cluster whose appearance differs more from the previous one. When an object moves out of the scene, few trajectories are assigned the corresponding label and the corresponding cluster is removed. In this way, the number of clusters changes adaptively according to how many moving objects appear in the scene. After the clustering process is done, all trajectories in each cluster are further assigned to the corresponding layer.
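The closed-form propagation step above is only a few lines of linear algebra; a minimal numpy sketch follows, assuming the first `n_labeled` rows/columns of the affinity matrix correspond to the labeled trajectories.

```python
import numpy as np

def propagate_labels(A, Y_l, n_labeled):
    """Closed-form label propagation via Markov random walks (sketch).

    A   : (n, n) affinity matrix over labeled + unlabeled trajectories,
          with the first `n_labeled` rows/cols being the labeled nodes.
    Y_l : (n_labeled, k) one-hot label matrix of the labeled nodes.
    Returns the predicted label index for every unlabeled trajectory.
    """
    P = A / A.sum(axis=1, keepdims=True)         # row-stochastic transitions
    P_uu = P[n_labeled:, n_labeled:]
    P_ul = P[n_labeled:, :n_labeled]
    I = np.eye(P_uu.shape[0])
    Y_u = np.linalg.solve(I - P_uu, P_ul @ Y_l)  # (I - P_uu)^-1 P_ul Y_l
    return Y_u.argmax(axis=1)                    # l_i = argmax_j y_ij
```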
Motion Model Estimation
Although providing motion information only at sparse pixels, trajectories of feature points are more accurate and less noisy than optical flow. With these accurate motion vectors of trajectories as evidence, we can estimate the motion field model for the whole frame (i.e., estimate the motion vector of every pixel in each layer). To accomplish pixel-wise motion estimation, we construct a pairwise MRF on a grid structure, where each vertex represents a pixel and the set of edges $\varepsilon$ represents the pairwise neighborhood relationship on this structure. Two sets of potentials are involved: edge potentials $\Psi_{ij}$ measuring the similarity between neighboring vertices, and self-potentials $\Phi_i$ measuring the likelihood of the evidence. Since these potentials are defined as Gaussian distributions, the task is formulated as a Gaussian Belief Propagation (GaBP) problem [1]. Given the motion vectors of trajectories in the $k$-th layer, the conditional joint probability distribution of the motion model can be inferred as
$$P(M_{k,t} \mid P_{k,t}) \propto \prod_{(i,j)\in\varepsilon} \Psi(m^i_{k,t}, m^j_{k,t}) \prod_{i} \Phi(m^i_{k,t}),$$
where $P_{k,t}$ denotes the set of feature points along the trajectories that are clustered to the $k$-th layer. The edge and self potentials are defined as $\Psi(m^i_{k,t}, m^j_{k,t}) = \mathcal{N}(m^i_{k,t} \mid m^j_{k,t}, \Sigma_m)$ and $\Phi(m^i_{k,t}) = \mathcal{N}(m^i_{k,t} \mid m^i_{p,t}, \Sigma_p)$, where $m^i_{p,t}$ represents the motion vector of the corresponding trajectory associated with the $i$-th pixel. Our formulation encourages similarity between the motion vectors of neighboring points, and between the estimated motion vectors of feature points and the evidence (i.e., the motion of trajectories). Following GaBP, this distribution can be rewritten in the canonical Gaussian form
$$P(M_{k,t} \mid P_{k,t}) \propto \exp\!\big(-\tfrac{1}{2} M^\top A M + b^\top M\big),$$
where the inverse covariance matrix $A$ encodes the connection of every pair of nodes, and the shift vector $b$ is defined by the motion of the trajectories. A closed-form solution for the marginal posterior probability is $p(m^i_{k,t}) = \mathcal{N}(\mu_i, \sigma_i^2)$, where $\mu = A^{-1}b$ and $\sigma_i^2$ is the entry $\{A^{-1}\}_{ii}$. The estimated motion field is demonstrated in Figure 2.
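Since the GaBP fixed point of this Gaussian MRF equals the direct solution of $A\mu = b$, the dense motion field can be sketched with a sparse solver, as below. The 4-connected grid, the variance values, and the nearest-pixel assignment of trajectory evidence are assumptions for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def estimate_motion_field(h, w, obs_idx, obs_motion,
                          sigma_m2=1.0, sigma_p2=0.25):
    """Dense per-pixel motion field from sparse trajectory motion (sketch).

    obs_idx    : flat pixel indices carrying trajectory evidence
    obs_motion : (len(obs_idx), 2) motion vectors of those trajectories
    Each motion component (x, y) is solved independently.
    """
    n = h * w
    rows, cols, vals = [], [], []
    def add(i, j, v):
        rows.append(i); cols.append(j); vals.append(v)
    for y in range(h):                          # edge potentials on the grid
        for x in range(w):
            i = y * w + x
            for dy, dx in ((0, 1), (1, 0)):
                if y + dy < h and x + dx < w:
                    j = (y + dy) * w + (x + dx)
                    add(i, i, 1/sigma_m2); add(j, j, 1/sigma_m2)
                    add(i, j, -1/sigma_m2); add(j, i, -1/sigma_m2)
    b = np.zeros((n, 2))
    for i, m in zip(obs_idx, obs_motion):       # self-potentials (evidence)
        add(i, i, 1/sigma_p2)
        b[i] = np.asarray(m) / sigma_p2
    A = sp.csc_matrix((vals, (rows, cols)), shape=(n, n))  # duplicates sum
    mu = np.stack([spla.spsolve(A, b[:, c]) for c in range(2)], axis=1)
    return mu.reshape(h, w, 2)                  # posterior mean, mu = A^-1 b
```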
Bayesian Filtering
The inference of the probability maps can be performed as sequential Bayesian filtering on a Hidden Markov Model. In this section, we first predict the appearance model and prior probability for each layer given the motion models, followed by the inference of the posterior probability with the new observations in the current frame. For simplicity of expression, we drop the subscript k in the rest of the paper. Note that the processes in each layer are identical.

Model Propagation
Given the probability distribution of the estimated motion model $P(M_t \mid P_t)$, we can estimate the appearance model and the probability map in the current frame by propagating the corresponding model and map from the previous frame. Specifically, the motion vector describes exactly how a pixel shifts between consecutive frames. Therefore, armed with the motion information of a pixel in the current frame, we can easily obtain the appearance of the pixel by propagating that of the corresponding pixel in the previous frame. However, the motion vector of each pixel has a Gaussian distribution over two-dimensional space, and it is impractical to marginalize the appearance over the whole Gaussian distribution. If we discard the uncertainty in the motion vector and set the variance to 0, the Gaussian distribution reduces to a Dirac delta function $\delta(m^i_t - \mu^i_t)$. Then the marginalization over the whole appearance model reduces to the involvement of a single particular pixel. The appearance model of the i-th pixel can be readily obtained by $A^i_t = A^{\phi(i,\mu^i_t)}_{t-1}$, where the function $\phi(i, \mu^i_t)$ obtains the corresponding index in the previous frame by reversely shifting the i-th pixel according to the associated motion vector $\mu^i_t$. In practice, we found this approximation causes little performance degradation while saving much computational cost.
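A minimal sketch of this delta-function approximation, assuming the motion field is stored as an (h, w, 2) array of posterior means with (x, y) component ordering; rounding to the nearest pixel implements the reverse shift $\phi(i, \mu^i_t)$:

```python
import numpy as np

def propagate(prev_model, prev_prior, mu):
    """prev_model: (h, w, ...) per-pixel appearance samples; prev_prior: (h, w); mu: (h, w, 2)."""
    h, w = mu.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # phi(i, mu): index of the source pixel in the previous frame, clipped to the image.
    src_y = np.clip(np.round(ys - mu[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - mu[..., 0]).astype(int), 0, w - 1)
    return prev_model[src_y, src_x], prev_prior[src_y, src_x]
```

The same lookup serves both the appearance model here and the prior probability map discussed next.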
Similarly, the prior probability of the i-th pixel in each layer is obtained by propagating the posterior map along the same reverse shift: $P(l^i_t) = P_{\text{post}}(l^{\phi(i,\mu^i_t)}_{t-1} \mid I^{\phi(i,\mu^i_t)}_{t-1})$.
Probability Map Update
Once the appearance model and probability map are propagated from the previous frame, the posterior probability map of each layer can be inferred with the current frame $I_t$ as the new observation. With the assumption of independence between pixels, the posterior probability of the i-th pixel is computed by Bayes' rule, $P(l^i_t \mid I^i_t) \propto P(I^i_t \mid l^i_t)\, P(l^i_t)$ (Eq. (9)).
The likelihood of the observation at the i-th pixel, $P(I^i_t \mid l^i_t)$, essentially describes how well the appearance $I^i_t$ fits the appearance model in each layer. Kernel Density Estimation (KDE), as a nonparametric method, can effectively estimate this likelihood: $P(I^i_t \mid l^i_t) = \frac{1}{N}\sum_f K_G(I^i_t - I^i_f)$, where $K_G(\cdot)$ is the Gaussian kernel function, $I^i_f$ is the color feature in frame f stored for KDE modeling, and N = 20 is the number of stored previous frames. The posterior probability map produced in each layer is normalized by the partition function and then used as the knowledge for multilabel segmentation.
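A sketch of the per-pixel KDE likelihood under these definitions; the kernel bandwidth is an assumed parameter, and colors are treated as 3-vectors:

```python
import numpy as np

def kde_likelihood(pixel_color, stored_samples, bandwidth=10.0):
    """pixel_color: (3,) color; stored_samples: (N, 3) colors kept from past frames."""
    diff = stored_samples - pixel_color            # (N, 3) deviations from stored samples
    sq = (diff ** 2).sum(axis=1) / bandwidth ** 2  # squared scaled distances
    norm = (2 * np.pi) ** 1.5 * bandwidth ** 3     # 3-D isotropic Gaussian normalizer
    # Average of Gaussian kernels centered at the stored samples.
    return np.exp(-0.5 * sq).sum() / (len(stored_samples) * norm)
```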
Multilabel Segmentation
With the collection of normalized probability maps of all layers at hand, our final task for foreground object segmentation is to perform pixel-wise labeling for the whole frame. This segmentation problem can be converted into an energy minimization problem on a pairwise MRF, which is polynomially solvable via Graph Cuts [17]. Due to the ill-posed nature of the segmentation problem, regularizations are always required. For our problem, we designed two regularizers: a smoothness cost and a label cost, in preference of smooth labelings and fewer unique labels, respectively. The global energy function is formulated as $E(L) = \sum_i E(I_i) + \lambda_1 \sum_{(i,j)\in\varepsilon} q_{ij} E(I_i, I_j) + \lambda_2 \sum_{k \in S(L)} h_k \delta_k(L)$, where S(L) denotes the set of unique labels of L. The three terms on the right-hand side of the equation are the data cost, smoothness cost and label cost, respectively. The data cost $E(I_i)$ is defined as the negative log probability of the i-th pixel belonging to a certain layer, $-\log P_{\text{post}}(l^i_t \mid I^i_t)$. The smoothness cost is defined as the similarity of two neighboring pixels, $E(I_i, I_j) = -\log N(I_i \mid I_j, \Sigma_a)$, and $q_{ij}$, as a sign function, equals +1 if $l_i, l_j$ have the same label and −1 otherwise. Moreover, in the label cost term, $h_k$ is the non-negative label cost of label k, and $\delta_k(L)$ is the corresponding indicator function, with $\delta_k(L) = 1$ if $\exists i : l_i = k$ and 0 otherwise. $\lambda_1, \lambda_2$ are non-negative parameters that balance the data cost against these two regularizations (both are 1 in our work). The energy minimization can be solved efficiently using the method presented in [6].
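To make the three terms explicit, here is a sketch that evaluates the energy of a candidate labeling; the actual minimization is delegated to the multilabel graph-cut solver of [6]. Note the smoothness term below uses the common contrast-sensitive Potts form (penalizing label changes between similar-colored neighbors) rather than the exact $q_{ij}$ sign convention above, and all parameter names are illustrative.

```python
import numpy as np

def energy(labels, neg_log_post, colors, edges, label_costs, lam1=1.0, lam2=1.0, sigma_a=10.0):
    """labels: (n,) ints; neg_log_post: (n, K) data costs; colors: (n, 3); edges: list of (i, j)."""
    # Data cost: -log posterior of each pixel under its assigned layer.
    data = neg_log_post[np.arange(len(labels)), labels].sum()
    # Smoothness cost: contrast-sensitive Potts penalty on label discontinuities.
    smooth = 0.0
    for i, j in edges:
        if labels[i] != labels[j]:
            smooth += np.exp(-((colors[i] - colors[j]) ** 2).sum() / (2 * sigma_a ** 2))
    # Label cost: a fixed charge h_k for every label actually used.
    label = sum(label_costs[k] for k in np.unique(labels))
    return data + lam1 * smooth + lam2 * label
```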
It is worth noting that when a new foreground object appears in the scene, or while the initial frames are being processed, the feature samples used to build the appearance model are usually not sufficient for Kernel Density Estimation. In such cases where KDE is invalid, the probability maps are no longer available as the data cost for the multilabel segmentation. An alternative way to define the data cost term in the energy function is based on the labeled trajectories: $E(I_i) = c$ if $l_i = l_p$ and $-c$ otherwise, where c is a negative constant and $l_p$ is the label of the trajectory associated with the i-th pixel. This definition simply utilizes the motion information rather than the appearance model.
Finally, according to the labels of the pixels, the appearance models $A_t$ are updated by adding the color feature $I^i_t$ from the current frame to the corresponding appearance model.
Experiments
We evaluate our algorithm qualitatively and quantitatively based on two criteria: the first is standard two-label background subtraction and the second is multilabel background segmentation. The former is evaluated using the F-score, which is the common measurement. For the latter, since, to the best of our knowledge, no prior work exists, we carefully design a reasonable measurement. The results show that our method outperforms the state-of-the-art approaches in both settings.

Dataset: Experiments were run on two public datasets. A set of sequences (cars1-8, people1-2) in the Hopkins dataset [38] is commonly used for quantitative evaluation on this topic, some of which contain multiple foreground objects. To quantitatively evaluate the performance, we manually produced the ground-truth masks for all frames, including the discrimination of individual foreground objects. The other is the Complex Background Dataset (CBD) [25], where the complex backgrounds and camera rotation introduce a great challenge.
Two-label Background Subtraction
The performance of our framework, denoted MLBS (Multi-Layer Background Subtraction), is compared to five state-of-the-art algorithms: GBSSP [22], FOF [25], OMCBS [9], GBS [20], and BSFMC [31]. GBS requires initialization labels for each frame as additional input, and GBSSP needs the ground truth of the first frame for initialization. For fair comparison, we provide GBS with the labeling results of BSFMC, and offer GBSSP the labeling result of the first frame generated by our method. Note that instead of requiring this additional information, our MLBS method completes self-initialization automatically. Moreover, we use the parameters provided by the authors of the external methods and fix them for each algorithm in all experiments.
The quantitative comparisons on the two datasets are shown in Table 1. It can be seen that the proposed method outperforms the other methods in the literature on most test sequences. Especially in cases where objects are occluded and then separate, such as in cars3 and people2, the multilayer strategy boosts performance with a great jump in F-score. This is because our method can accurately estimate separate motion models for the foreground objects, instead of only one ambiguous model for the whole foreground, and the separate appearance models comprehensively provide the evidence for probabilistic inference. The overall F-score of our method is higher than the best score of the state-of-the-art methods by a noticeable margin (83.66% vs 77.73%). To evaluate the ability of the proposed method to handle multiple objects, we categorize the sequences into three scenarios according to the number of moving objects in the scene, and report the average F-score for each scenario in Table 2. In the single-object scenario, our method outperforms the second best method by a small margin (84.05% vs 80.70%). But as the number of objects increases, the margin grows (3.4% vs 8.0% vs 8.7%). This clearly demonstrates the outstanding capability of our method in complex scenes. Table 2: Two-label background subtraction performance comparison on videos with different numbers of moving objects. Best performance scores are highlighted in bold.
[Table: column headers MLBS, GBSSP, FOF, OMCBS, GBS, BSFMC; the numeric entries were lost in extraction.]
Qualitative comparisons with three algorithms are illustrated in Figure 4. We pick two representative sequences (cars5 and people2) from the Hopkins dataset, where at least two foreground objects appear in the scene, and one sequence (forest) from CBD. It is notable that our MLBS algorithm separates moving objects and background accurately when the background scene is complex and the motions of the objects are not simply in the same direction. For instance, in the people2 sequence, the two women, who walk in different directions, become occluded and then separate, are segmented precisely, while other methods wrongly label the newly appearing background region (e.g., the black rubbish bin) as foreground.
Multilabel Background Subtraction
We also evaluate the capability to separate different foreground objects. Since, to the best of our knowledge, no prior work addresses this setting, we have designed a baseline for performance comparison. Details about the baseline are given in the Supplementary Material. Besides, [26] proposed an offline approach (SLT) to tackle the video segmentation problem based on long-term video analysis, achieving cutting-edge performance. Since this method does not discriminate between foreground and background, we modify it by manually assigning the background label to the best-fitting segmented region and compare against it.
For quantitative comparison, we design a measurement by modifying the metric used in [26]. Let $g_i$ denote a foreground region in the ground truth, $c_i$ the corresponding region in the mask generated by an algorithm, and $|\cdot|$ the number of pixels inside a region. For each foreground region in the ground truth, precision, recall and F-score are defined as $\text{precision} = |g_i \cap c_i| / |c_i|$, $\text{recall} = |g_i \cap c_i| / |g_i|$, and $F = 2 \cdot \text{precision} \cdot \text{recall} / (\text{precision} + \text{recall})$. The overall metric is obtained by averaging the measures over the single regions, and the best one-to-one assignment of generated regions to ground-truth regions is found by the Hungarian method [18]. In cases where there exist generated regions without an assigned ground-truth region, we set the precision and recall of such regions to 1 and 0, respectively. It is worth mentioning that our revised metric is calculated over foreground regions only, for consistency with binary background subtraction: the measurement values are the same as those of the binary metric when there is only one foreground object in the scene. Table 3: Performance comparison of multi-foreground segmentation on the Hopkins dataset.
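A sketch of this revised metric, using SciPy's Hungarian solver for the one-to-one assignment; masks are assumed to be boolean arrays, one per region:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def multilabel_f_score(gt_masks, pred_masks):
    """gt_masks, pred_masks: non-empty lists of boolean (h, w) region masks."""
    G, C = len(gt_masks), len(pred_masks)
    f = np.zeros((G, C))
    for a, g in enumerate(gt_masks):
        for b, c in enumerate(pred_masks):
            inter = np.logical_and(g, c).sum()
            p = inter / max(c.sum(), 1)  # precision |g ∩ c| / |c|
            r = inter / max(g.sum(), 1)  # recall    |g ∩ c| / |g|
            f[a, b] = 2 * p * r / (p + r) if p + r > 0 else 0.0
    rows, cols = linear_sum_assignment(-f)  # best assignment, maximizing total F-score
    scores = list(f[rows, cols])
    # Generated regions without an assigned ground-truth region: precision 1, recall 0 -> F = 0.
    scores += [0.0] * max(C - G, 0)
    return float(np.mean(scores))
```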
The performance evaluation is shown in Table 3. We compare the performance of the approaches evaluated both on all frames and on the first ten frames. It can be seen that the proposed method outperforms SLT on both sets and shows a great jump over the baseline. It is notable that the F-score of our method on all frames is higher than that on the first ten frames, due to the nature of online methods. Unlike offline methods, which hold knowledge of the whole sequence from the beginning of processing, our online approach has little prior knowledge and requires the self-initialization step during the first several frames, which leads to lower performance due to insufficient prior knowledge. But this matters little as the sequence becomes longer.
Qualitative evaluations of our method and SLT are demonstrated in Figure 5. Our proposed method can separate objects more accurately, while SLT may falsely recognize two objects as one when the objects are very near and moving in similar directions (see Block 2). Furthermore, our method can detect new objects immediately when they enter the scene (see Block 2). With its ability for accurate and robust foreground object detection and segmentation, our method produces a proper number of foreground object regions, which is reflected by the higher recall in Table 3. Additionally, equipped with appearance models for each layer, our MLBS method is capable of dealing with articulated motion (see Block 3) to a certain extent. Figure 5: The multilabel segmentation performance comparison with SLT on the sequences cars3, cars2 and people2, separated into three blocks. Row 1: ground truth; Row 2: MLBS; Row 3: SLT [26]. (Better seen in color)
Table 4 shows the average computational time per frame. Our un-optimized Matlab implementation takes around 7 seconds per frame on an Intel Xeon E5 CPU with 16 GB of memory. The computational time is dominated by motion estimation and graph cut. Motion estimation consists mainly of matrix inversion and multiplication. Since such operations can be readily parallelized on the GPU, we believe real-time performance can be achieved through GPU optimization and a faster multi-threaded implementation.

Table 4: Average computation time (msec) over the different stages on a single frame from cars1. The stages are: trajectory clustering and label propagation (tclb), motion estimation (mtest), model propagation (mdprop), graph cut (grapcut) and model update (upmdl).

tclb: 1109 | mtest: 3675 | mdprop: 144 | grapcut: 1369 | upmdl: 559 | Total: 6856
Conclusion
We propose a novel online multilayer-based framework for background subtraction with a moving camera. In our framework, every foreground object and the background are assigned to independent processing layers. A processing block is carefully designed to perform the posterior inference within a Bayesian filtering framework, and multilabel graph cut is employed to produce the pixel-wise segmentation for every video frame based on the normalized probability maps. Experiments show that our method performs favorably against other state-of-the-art methods, with an outstanding ability to segment multiple foreground objects.
THE IMPACT OF LEADERSHIP STYLES ON JOB SATISFACTION: A STUDY OF THE HOSPITALITY INDUSTRY
Job satisfaction is considered an important factor in reducing employee absenteeism and intention to quit. Based on the findings of previous studies, it can be clearly seen that job satisfaction plays a specific role in hospitality management. The literature on the hospitality industry emphasizes that appropriate leadership styles should be applied to maximize the job satisfaction of hotel employees. In order to further the understanding of the factors affecting job satisfaction and to expand the literature on the hospitality industry, a conceptual model has been developed that includes structural relationships between leadership styles and job satisfaction. Regression analysis was used to empirically test the developed model. The data were obtained from employees of five-star hotels operating in Alanya, Turkey (N=311). As a result of the research, it was determined that transformational and transactional leadership positively affect job satisfaction, whereas laissez-faire leadership has no effect on job satisfaction. While this research enables a more in-depth understanding of the antecedents of job satisfaction, managerial suggestions for improving service quality in hotels are also made, which will enhance awareness among managers.
INTRODUCTION
The hospitality industry is an important determinant of countries' economic development as well as a significant source of employment. Due to the sector's excessive dependence on human resources, it requires substantially qualified human sources (Liu & Wall, 2006). However, the lack of a qualified labor force is an important issue with which the hospitality industry must contend (Ashton, 2018; Zopiatis, Constanti & Theocharous, 2014). This problem remains a priority that needs to be resolved, as it threatens the continuing viability of the fast-growing hospitality industry (Davidson & Wang, 2011).

The cost of turnover among qualified employees in an organization can reach 200% of the annual salary (Allen, Bryant & Vardaman, 2010). Moreover, the hospitality industry has been struggling with high turnover for a long time (Stamolampros et al., 2019), and unfortunately this problem continues to grow, with turnover reaching 72.1% (Ruggless, 2016). Therefore, the hospitality industry is characterized in the literature by high employee turnover (Stamolampros et al., 2019). Karatepe et al. (2006) emphasized that low job satisfaction underlies the high intention to leave in the hospitality industry. Pizam and Thornburg (2000), supporting Karatepe et al. (2006), stated that almost 90% of employees decided to leave their jobs due to lack of job satisfaction. In a study conducted by Ashton (2018) on hotels, it was estimated that a 1% increase in employee satisfaction results in a 54% increase in employees' intention to remain at work. In this sense, increasing the job satisfaction of employees would eliminate an important problem experienced by the hospitality industry.
Employees with high job satisfaction tend to provide quality service to customers (Kong et al., 2018) and are more positive, productive and creative than those without job satisfaction (Korkmazer & Ekingen, 2017). The lack of job satisfaction causes a number of undesirable consequences, including reduced productivity, profitability and customer satisfaction (Book, Gatling & Kim, 2019). Stamolampros et al. (2019) emphasized that employees with high job satisfaction provide higher returns, particularly in the hospitality industry, where service quality is considered a critical success factor. From this perspective, increasing job satisfaction is a very important issue that enables organizations to achieve a sustainable competitive advantage in the hospitality industry (Ashton, 2018; Cheng & Yi, 2018). Accordingly, job satisfaction has become a significant research topic for researchers investigating the hospitality sector (Ashton, 2018; Kong et al., 2018).
Hotels operating in the hospitality industry are becoming increasingly aware that the lack of effective management of human resources could lead to companies experiencing large-scale economic losses within the industry (Şeşen, Sürücü & Maşlakcı, 2019; Zopiatis, Constanti & Theocharous, 2014). This awareness has led organizations to change their perspectives on the understanding of management. However, due to the nature of the hospitality industry, the management of people presents unique challenges for managers. Effectively managing human resources with an appropriate leadership style will increase efficiency and thus ensure survival within the rapidly changing and highly competitive environment (Surucu & Sesen, 2019). Therefore, it is important to understand the impact of leadership in the hospitality industry.
Studies focusing on the hospitality industry have shown that the leadership style adopted by managers is significantly related to the job satisfaction of hotel employees (Huang et al., 2016; Munir & Iqbal, 2018). In the literature, leadership is described as a key factor contributing to the job satisfaction of employees. In fact, this is to be expected, as leaders can influence their subordinates and shape their attitudes and behaviors through their managerial talents. As a result, the use of an appropriate leadership style, which is an important determinant of job satisfaction, will increase the efficiency of the hospitality industry while simultaneously increasing the job satisfaction of employees (Erkutlu, 2008). Rothfelder, Ottenbacher and Harrington (2012) noted that although leadership has been studied empirically in various industries, there has been minimal focus on the hospitality industry. For this reason, the hospitality industry was chosen as the research universe. Thus, the research will contribute to filling this gap in the hospitality literature.
This study on the hospitality industry contributes to the literature in three ways. Firstly, although the effect of leadership on employees has been researched in various industries, there is a gap with regard to the hospitality industry (Rothfelder et al., 2012). In this context, this study: (a) contributes by filling this gap in the hospitality industry; (b) attempts to improve the quality of service in hotels by identifying factors affecting job satisfaction among hotel employees; and finally (c) the findings raise awareness among managers in the hospitality industry and enable managerial recommendations to be made for human resources managers. The research consists of three parts. In the first part, the literature on the relationship between leadership styles and job satisfaction is discussed and the relationship between the variables is mentioned. In the second part, the findings are shared. In the last section, the findings are discussed and some valuable recommendations are made for managers operating in the hospitality industry.
The Importance of Job Satisfaction in the Hospitality Industry
Social, political and economic transformations resulting from globalization have changed the hospitality industry significantly and increased its footprint across the world (Baquero et al., 2019). In this context, competition among hotels is increasing, and hotel managers are focused on increasing service quality in order to gain a competitive advantage (Min & Min, 1997). Providing quality service in the hospitality industry is one of the most important factors in securing a sustainable competitive advantage, as it increases confidence in the competitive market (Naseem, Ejaz & Malik, 2011; Surucu & Sesen, 2019). It is not possible for hotels that cannot provide quality service to survive in this competitive environment. In hotels, most services are provided by employees (Kong et al., 2018). Therefore, employees play an important role in determining success (Gani, Ghani & Nujum, 2019). The success criterion for hotels is to satisfy customers by providing quality service (Yuan & Jiaqing, 2019).
Customer satisfaction requires the combined efforts of hotel staff. However, the jobs performed by employees within hotels can be very difficult and demanding (Cheng & Yi, 2018). This is because, while hotel employees try to satisfy their customers, they also have to contend with long working hours, multitasking requirements and stressful working environments (Ashton, 2018; Şeşen, Sürücü & Maşlakcı, 2019). Previous studies on hotel staff have stressed that they are more vulnerable to negative consequences such as stress, burnout, "emotional injury", violence and physical injuries (Huertas-Valdivia, Gallego-Burín & Lloréns-Montes, 2019). Considering all of the above, the job satisfaction of hotel employees is lower than in other sectors (Kong et al., 2018). While a low level of job satisfaction reduces the quality of service, the intention to quit that it causes creates a serious financial burden for hotels. Determining how hotel employees' job satisfaction can be increased is therefore of significant importance (Yuan & Jiaqing, 2019).
While Locke (1976) defined job satisfaction as "a pleasant or positive emotional state arising from the evaluation of one's work or work experiences", Gani et al. (2019) defined it as "positive feelings that an employee feels towards work." In line with these definitions, it is clear that job satisfaction is a psychological state reflecting the positive feelings of the employee towards their job. These positive emotions, resulting from job satisfaction, improve both physical and mental health along with individual productivity, and can accelerate employees' learning of new business skills (Cheng & Yi, 2018). They also contribute to the provision of quality service and to sustainability, meaning that increasing job satisfaction among employees is of ongoing interest for social scientists and managers. Organizational behavior researchers predict that job satisfaction provides high profits to companies by increasing the job performance of employees (Prabowo et al., 2018) and the quality of service offered to customers (Choi & Kim, 2012; Stamolampros et al., 2019). In addition, job satisfaction is stated to be an important factor that reduces employee absenteeism (Yang, 2010) and greatly reduces the intention to quit (Cheng & Yi, 2018).
According to the findings of previous studies, it is clear that job satisfaction is very important for the hospitality industry. The literature on the hospitality industry emphasizes that appropriate leadership styles should be applied to maximize the job satisfaction of hotel employees (Prabowo et al., 2018). The leader can influence and direct employees through their position. Leadership styles in hotels are more important than in other industries, because undesired outcomes of a leadership style will affect the quality of service and therefore customer loyalty (Kara et al., 2013). In this sense, managers should direct and support employees (Ashton, 2018). As a result, understanding the influence of the leader in the hotel industry is vital to gaining a competitive advantage through the effective management of employees.
The Relationship Between Leadership and Job Satisfaction
Due to its importance, leadership is one of the most discussed topics in the social sciences (Bass & Avolio, 1990). Leadership is defined as a process in which employees are influenced by leaders and their behavior is directed toward achieving corporate goals (Munir & Iqbal, 2018). In line with this definition, it is clear that each leader is unique, with different abilities (Eliyana & Ma'arif, 2019). Based on these unique talents of the leader, researchers have identified many leadership styles. However, the overlap between these leadership styles is extremely problematic. The basis of this problem lies in the efforts of leadership researchers to create new leadership theories without trying to compare the validity of existing theories (DeRue et al., 2011; Sürücü & Yeşilada, 2017). Although researchers have claimed that the leadership styles they propose are conceptually different, there are some similarities in terms of the definition of the leader, the influence processes and the outcomes (Sürücü & Yeşilada, 2017; Yukl, Gordon & Taber, 2002). However, the subject of our research is not to discuss these overlaps. For this reason, based on the "full-range leadership" model of Bass (1985), the transactional, transformational and laissez-faire leadership styles are included in the research. The "transactional, transformational and laissez-faire" leadership styles that form the "full-range leadership" model proposed by Bass (1985) are widely accepted in the literature.
Transformational leadership is, undisputedly, the most researched leadership style in the literature in terms of affecting the motivation, well-being and work attitudes of employees in line with the goals and strategies of the organization (Skogstad et al., 2015; Şeşen, Sürücü & Maşlakcı, 2019). According to Bass, the success of a transformational leader is demonstrated both by increased performance results and by the degree to which followers develop their own leadership potential and skills (Bass & Avolio, 1990). Prabowo et al. (2018) argued that transformational leadership is the most appropriate leadership style for the hospitality industry, considering the industry's changing nature. The reason for this is that transformational leadership can influence employees by shaping the direction and mission of the organization. Transformational leaders increase the motivation of employees by making intense efforts to understand their needs and desires through sincere communication and interaction. In addition, they make an important contribution to increasing the job satisfaction levels of employees through the trustworthy and supportive environment they create (DeRue et al., 2011).
In a recent study, Eliyana and Ma'arif (2019) emphasized that transformational leadership has a direct impact on job satisfaction and organizational commitment. Meta-analysis studies also support this finding (Judge & Piccolo, 2004). Research into the hospitality industry shows that perceived transformational leadership has a significant effect on the job satisfaction of employees (Ashton, 2018; Prabowo et al., 2018; Piccolo et al., 2012). In this context, the following hypothesis is proposed:
Hypothesis 1: Transformational leadership affects the job satisfaction of hotel employees in a significant and positive way.
While transformational leadership is focused on change and development, transactional leadership is usually effective for maintaining the status quo (Bass, 1985; Bass & Avolio, 1990; Şeşen, Sürücü & Maşlakcı, 2019). Transactional leaders work to best meet the needs of employees based on Maslow's hierarchy of needs. Bass (1985) defined transactional leadership as an active form of management in which the leader monitors the performance of employees in line with organizational goals and takes appropriate action (penalty or reward) depending on whether employees meet the standards (Bass & Avolio, 1990). Based on the definition of Bass (1985), it is clear that the transactional leader motivates hotel staff through reward and punishment. In fact, this leadership style implies a clear chain of command, where targets are predefined, employee performance is tracked, and the employee is rewarded or punished according to the situation. A manager adopting this type of leadership closely monitors the execution of tasks and takes corrective actions when deviations from targets are detected (Northouse, 2013; Sürücü, Yeşilada & Maşlakçı, 2018). In addition, the transactional leader serves the organization as a whole in terms of the settlement of disputes within the organization (Torlak & Kuzey, 2019). In this respect, in the hotel environment, the transactional leader plays a key role in reducing employee anxiety, because hotel employees tend to rely on managers' advice to fulfill their duties in uncertain situations while dealing with their customers (Şeşen, Sürücü & Maşlakcı, 2019; Quintana, Park & Cabrera, 2015). In line with the above discussion, it is considered that the transactional leader will positively increase job satisfaction. Research shows that transactional leadership increases job satisfaction (Ashton, 2018; Torlak & Kuzey, 2019). The following hypothesis is proposed in line with the theoretical framework and previous studies.
Hypothesis 2: Transactional leadership affects the job satisfaction of hotel employees in a significant and positive way.
Laissez-faire leadership can be perceived as the reverse of transactional leadership, which focuses on helping employees solve the problems they face on a daily basis (Şeşen, Sürücü & Maşlakcı, 2019). As a result of its passive character, laissez-faire leadership has often been criticized in the literature for its ineffectiveness, and therefore many researchers view laissez-faire leadership as a non-leadership management approach (Sürücü, Yeşilada & Maşlakçı, 2018; Skogstad et al., 2015). Indeed, with respect to laissez-faire (liberal) leadership, Bass and Avolio (1994) stated that "avoidance or absence of leadership is the most ineffective way according to almost all research in terms of leadership". Managers who prefer this style of leadership do not help their employees, avoiding taking responsibility and making decisions. In this sense, they negatively affect the psychological health of the members of the organization (Nguyen et al., 2017; Şeşen, Sürücü & Maşlakcı, 2019). Hotel employees cannot obtain sufficient help when they need to deal with a critical issue, which has negative psychological effects on them (Nguyen et al., 2017). Recent research indicates that laissez-faire leadership increases work stress (Che et al., 2017) and bullying in the workplace, and lowers work confidence (Glambek, Skogstad & Einarsen, 2018). In addition to these psychological effects, studies have shown that it is related to stress, burnout and emotional exhaustion, and negatively related to job satisfaction (Madera, Dawson & Guchait, 2016). Considering the negative effects of laissez-faire leadership on employees' well-being and psychology, this type of leadership is expected to negatively affect job satisfaction. Supporting the literature, DeRue et al. (2011) stated that laissez-faire leadership negatively affects job satisfaction. The following hypothesis is proposed in line with the theoretical framework and previous studies.
Hypothesis 3: Laissez-faire leadership affects the job satisfaction of hotel employees in a significant and negative way.
METHOD
Before the questionnaire was administered to the sample group, it was piloted with a group of 40 people. Questions that were not understood or caused hesitation were rearranged, and the questionnaire was then re-administered to the same group. The pilot results were analyzed with the IBM SPSS 23 program and reliability was tested. The questionnaire was then administered to the sample at a sufficient reliability level.

The study was conducted on employees of five-star hotels in Alanya, Turkey. The survey was carried out over two weeks (in June) and administered to participants selected by the convenience sampling method. 500 questionnaires were prepared to collect survey data. The human resources managers of the hotels were interviewed by a team of 4 people, including the researchers. Questionnaires were delivered in sealed envelopes to the managers of the hotels where the research was permitted, and were collected in sealed envelopes. Of the 500 questionnaires prepared, 385 were returned. Upon examination, 74 questionnaires that were not filled in, or were found to be filled in incorrectly, were excluded from the study. Thus, the research was completed with 311 valid questionnaires.
Alanya is one of Turkey's most important tourism centers. The development of tourism in the city has driven the growth of travel businesses, entertainment businesses and other sectors, especially food and beverage businesses, alongside touristic enterprises (Tekin, 2012). In this context, tourism is an important source of income for Alanya as well as a serious source of employment for the people of the region (Tekin, 2012). For this reason, Alanya was chosen as the research universe. Five-star hotels were chosen as the analysis units, since they are considered to have a professional management understanding and well-developed human resources management structures (Torlak & Kuzey, 2019).
A number of measures were taken to maximize the reliability of the research and to minimize common method bias. Firstly, scales whose validity and reliability had been tested were used, and the items were mixed to prevent the participants from sensing the research model and questions. In addition, the questionnaires that each employee filled in were returned in sealed envelopes. The 311 questionnaires obtained were then analyzed.
Scales
In the research, a questionnaire with 26 questions consisting of three different scales was used.
Demographic Structure: It consists of six questions aimed at determining the characteristic features of the employees.
Leadership Styles: In this study, leadership styles are the independent variables. The three-dimensional leadership scale developed by Avolio and Bass (1995) was used to determine the perceived leadership style. All questions of the scale use a 5-point Likert system from 1 ("strongly disagree") to 5 ("strongly agree"). Sample scale items include: "Every time I perform well, my manager gives me positive feedback", "The manager avoids getting involved in matters when needed", "The manager sets an example for employees with their behavior". The Cronbach's alpha reliability coefficients of the scales are: transformational leadership = 0.771, transactional leadership = 0.739, laissez-faire leadership = 0.679.

Job Satisfaction: Job satisfaction is the dependent variable of this research. It was measured using a one-dimensional five-item scale developed by Hackman and Oldham (1975). The scale was adapted to Turkish by Basım and Şeşen (2009). The scale uses the 5-point Likert system. Sample items include: "My job satisfies me in general." The Cronbach's alpha reliability coefficient of the scale is 0.79.
Control Variables:
In research on organizational behavior, it is important to control the factors affecting the dependent or independent variables in order to produce correct results. Based on an examination of the relevant literature, it was determined that various demographic factors are associated with job satisfaction (DeRue et al., 2011). For this reason, the demographic variables determined to be important predictors of job satisfaction were controlled for.
RESULTS
The demographic characteristics of the participants are presented in Table 1. The majority of the participants were in the 25-and-under age group (46.9%) and single (70.4%). In addition, 79% of the employees had five or fewer years of work experience.
To test the structural validity of the scales, factor analysis was performed using the principal component analysis method with the varimax technique, one of the orthogonal rotation techniques. Hair et al. (2014) state that a structure is valid if the factor loading is above 0.40. As seen in Table 2, the factor loadings are between 0.515 and 0.727. These values show that the scales used in the study have structural validity. Since statistical methods rest on certain assumptions, it is important to determine the distribution of the data (Sürücü & Maslakçı, 2020). Skewness and kurtosis values were checked to determine the distribution of the data and were found to be between +1.5 and -1.5. These values show that the data have a normal distribution (Hair et al., 2014).

To determine the direction and strength of the correlations between the variables included in the research, the Pearson correlation coefficient was calculated using the IBM SPSS 23 program, and the results are shown in Table 3. It is noteworthy that the Cronbach's alpha coefficients of the variables are between 0.68 and 0.79, above the minimum value of 0.60 that is considered an indication of scale reliability. These values show that the internal consistency of the factors is good. The results of the correlation analysis show that transformational and transactional leadership have a significant and positive relationship with job satisfaction, while laissez-faire leadership has no significant relationship with job satisfaction. Notably, the transformational leadership perceptions of the hotel employees (mean 3.54) are higher than their transactional and laissez-faire leadership perceptions.
Regression analysis was applied to determine the effect of leadership styles on job satisfaction. The regression analysis was performed in two stages: in the first stage, demographic features were controlled for; in the second stage, the independent variables were included in the model. With the demographic variables (gender, age and educational status) under control, transformational leadership (β=.459, p<.001) and transactional leadership (β=.186, p<.05) affect job satisfaction significantly and positively. However, laissez-faire leadership has no significant effect on job satisfaction. In line with the research findings, Hypotheses 1 and 2 were accepted, while Hypothesis 3 was rejected.
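For illustration, a minimal sketch of the two-stage (hierarchical) regression described above, using Python's statsmodels; the file and column names are hypothetical placeholders for the survey variables, not the authors' actual data:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("hotel_survey.csv")  # hypothetical file with the 311 responses

controls = df[["gender", "age", "education"]]
leadership = df[["transformational", "transactional", "laissez_faire"]]

# Stage 1: control (demographic) variables only.
m1 = sm.OLS(df["job_satisfaction"], sm.add_constant(controls)).fit()

# Stage 2: add the three leadership styles to the model.
X2 = sm.add_constant(pd.concat([controls, leadership], axis=1))
m2 = sm.OLS(df["job_satisfaction"], X2).fit()

print(m2.summary())                              # betas and p-values per leadership style
print("delta R^2 =", m2.rsquared - m1.rsquared)  # incremental variance explained
```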
DISCUSSION
The leadership styles used in the hospitality industry play an important role in eliciting positive behaviors from employees (Huertas-Valdivia et al., 2019). This study aims to contribute to increasing the quality of service in hotels by studying the effects of different leadership styles on job satisfaction. For this purpose, the effects of transformational, transactional and laissez-faire leadership styles on job satisfaction were examined. The main findings are as follows.
The findings of the research with regard to transformational leadership parallel the results of other studies in the literature (Eliyana & Ma'arif, 2019; Prabowo et al., 2018). The transformational leader works intensively to reach organizational aims by communicating and interacting with hotel staff; thus the motivation of the hotel staff increases when they feel their leader is working for them. In addition, the trustworthy and supportive environment provided by the leader contributes to increasing the job satisfaction levels of hotel employees. This finding shows that hotel managers who want to succeed in a rapidly changing and globalizing work environment should demonstrate transformational leadership behaviors rather than transactional or laissez-faire leadership styles.
The results of the research show that transactional leadership is not as effective on job satisfaction as transformational leadership, but it is nevertheless effective in increasing the job satisfaction of hotel employees. The hospitality industry is, by its nature, full of uncertainty. Hotel employees must deal with this uncertainty in their work while providing services to customers, and for this reason they ask their managers for help with the uncertainties they face. Transactional leaders play a key role in reducing hotel workers' worries. Moreover, transactional leaders closely follow the execution of tasks and take corrective measures when they detect deviations from targets (Northouse, 2013). This approach in transactional leadership behavior is an important factor in enabling hotel employees to perform their job-related tasks efficiently and maintain positive and optimistic feelings, values and perceptions about their work (Torlak & Kuzey, 2019). While this increases the motivation of the employees, it also contributes to the formation of job satisfaction. This finding is consistent with previous research, which states that the transactional leadership approach can be effective in combating the problems and uncertainty employees face and in increasing their job satisfaction (Ashton, 2018; Torlak & Kuzey, 2019).
The research has revealed some interesting findings regarding the effect of laissez-faire leadership on job satisfaction. Contrary to expectation, our research shows that laissez-faire leadership has no effect on job satisfaction. Given the negative psychological effects of laissez-faire leadership on employees, it was expected to negatively affect job satisfaction. This finding differs from the work of Munir and Iqbal (2018), who found that laissez-faire leadership negatively affects job satisfaction. There are two possible explanations for this difference. Firstly, employees' perceptions of leadership differ according to the organization's industry and culture and the demographic characteristics of the employees (Huertas-Valdivia et al., 2019). This study was conducted specifically in the summer months, when activity in the hospitality industry is concentrated, and the majority of the employees were young people. This may have led to differences in laissez-faire leadership perceptions.
Another reason may be the cross-sectional design of the study. Cross-sectional studies cover only the specific time when the research was conducted. However, the effects of leadership can manifest over time (see Martinko et al., 2013). This systematic lack of time to examine the effects of leadership can lead to different outcomes in terms of the impact of leadership on individual and organizational outcomes. Skogstad et al. (2015) stated that the relationship between leadership styles and job satisfaction differs between cross-sectional and longitudinal designs and that the laissez-faire leadership style can have a cumulative effect over time. In fact, in a study conducted on the same sample group after 6 months and again after 2 years, Skogstad et al. examined the effect of laissez-faire leadership on job satisfaction; while it had no effect in the first study (at 6 months), a significant effect was found in the study conducted 2 years later. Considered in this context, our finding may be regarded as reasonable given the cross-sectional design of the research.
Success in the hospitality industry depends on the service skills of the employees. The literature states that employees with high job satisfaction tend to provide quality service to customers (Kong et al., 2018). Therefore, in order to ensure that employees provide excellent service to customers, it is necessary to increase their job satisfaction. Low levels of job satisfaction among employees produce a number of negative consequences, such as customer dissatisfaction as well as reduced productivity and profitability. In this context, our research findings reveal valuable results for the hospitality industry and human resources managers in particular. One of the main implications of the research is that leadership is very important for the hospitality industry. Indeed, leaders should be able to anticipate changes in the hospitality industry, maximize the efficiency and profitability of the organization and achieve the company's goals (Erkutlu, 2008).
CONCLUSION
Managers in the hospitality industry use a variety of leadership styles to influence employees and ensure the smooth operation of their organizations. Acting in ways that motivate and inspire employees, as well as creating a supportive organizational climate by making intense efforts to understand needs and desires, are examples of transformational leadership behaviors. On the other hand, a management style based on a penalty-reward system while maintaining the current status quo is defined as transactional leadership, while the avoidance or absence of leadership refers to a laissez-faire leadership approach. The empirical studies conducted in the hospitality industry, including this research, show that adopting transformational leadership behaviors will contribute to increasing employee satisfaction and hence high efficiency. Based on these findings, managers who want to achieve a healthy and sustainable competitive advantage in the hospitality industry, a dynamically evolving environment, should exhibit transformational leadership behaviors, which are key to success. Moreover, Khan (2018) stated that transformational leadership can be improved through education. Hence, if human resources managers can teach transformational leadership to managers at all levels within an organization, they can positively impact the firm's performance. In addition, it would be beneficial to plan training for human resources managers to recruit staff who are suitable for the transformational leadership style and to instill transformational leadership characteristics through training.
The hospitality industry follows the classic central decision-making model, where traditional management styles are dominant. However, many unforeseen situations can arise in the hospitality industry. If employees are not given responsibility for making decisions, problems may not be solved quickly and quality service may not be provided to customers (Huertas-Valdivia et al., 2019). Every customer and service experience is different in the hospitality industry, and employees must have some degree of autonomy and discretion to meet different needs, demands and expectations. All of this poses unique challenges for industry executives attempting to increase employee satisfaction and service quality. Therefore, managers should be aware of the effects the leadership styles they apply can have on employees. In cases where the leadership behavior does not comply with organizational requirements, managers should take corrective action. In conclusion, each leader is unique, with different abilities, and hotels need to create a talent map for their chosen structural positions.
The study has some limitations. The leadership styles were analyzed based on questionnaires answered only by the employees and not by the leaders. It should be considered that this approach may cause bias and errors in the interpretation of the results. Also, when examining the findings, it is important not to ignore sectoral and regional differences in terms of generalizability. The research was conducted in the Alanya province of Turkey. It is recommended that other countries and different industries be included to expand this research and compare the results accordingly.
Environmental Assessment Using Integrated Risk Based Approach (IRBA) at Cahaya Kencana Landfill Site
—Cahaya Kencana landfill site is located on land belonging to the local government of Banjar District, with a land area of 35.5 ha, of which 16.5 ha is used for the Cahaya Kencana landfill and 7.5 ha for Kehati park, leaving 11.5 ha unused. The Cahaya Kencana landfill site has implemented the sanitary landfill system since 2014, with an existing area of 8,089.73 m², and calculations show that the sanitary landfill area can only be used until 2021. The goal of this research is therefore to evaluate the technical and environmental aspects of the Cahaya Kencana site with decision-making tools, one of which is the assessment of the environmental risk index, or Integrated Risk Based Approach (IRBA). The Risk Index (RI) assessment using IRBA yielded a value of 524.007, in the moderate hazard evaluation category, so that the Cahaya Kencana site can continue to be operated and gradually rehabilitated into a controlled landfill. The strategy needed within this framework is a modification of the leachate treatment unit design.
I. INTRODUCTION
THE population growth in Banjar District is very rapid, especially in the district capital, Martapura subdistrict. This condition is directly proportional to the increase in urban waste production, which in turn increases the disposal area required. The Cahaya Kencana landfill has implemented the sanitary landfill system since 2014 and is now almost full. The volume of garbage transported to the landfill from 2014 to 2018 reached 376,621 m³, or an average of 206.37 m³/day. This problem requires environmental management as soon as possible, one approach being a risk assessment. This is necessary because many open dumping areas are left abandoned without proper mitigation [1].
Integrated Risk Based Approach (IRBA) was first used as a decision-making tool at the Perungudi (PDG) and Kodungaiyur (KDG) locations in the city of Chennai, India, where the calculation obtained RI values of 569 for the PDG location and 579 for the KDG location [2], [3]. Meanwhile, research at the Eneka location in Nigeria obtained an RI value of 452.3 [4]. Another study, at the Igbatoro landfill [5], found impacts including high health and environmental risks for the surrounding communities, with a risk index (RI) value of 571.58. Landfill rehabilitation needs to be carried out because the soil in the landfill area is usually polluted by leachate [6].

The Cahaya Kencana landfill apparently has leachate contamination in the sanitary landfill area, with the resistivity of the contaminated soil in the range of 1.50–4.34 Ωm at depths between 0.75 meters and 13 meters [7].
A. Description of the Studi Site
The research location is the Cahaya Kencana landfill in Lihung village, Karang Intan subdistrict, Banjar district, South Kalimantan province. Loose garbage samples were taken from garbage trucks coming from Sekumpul street, while compacted solid waste samples were taken from the sanitary landfill area. The Cahaya Kencana landfill is located at 03°27'29.0" South Latitude (SL) and 114°55'28.2" East Longitude (EL). Figure 1 shows the location of this research.
B. Risk Assessment
The Risk Index (RI) is calculated with the formula [3]–[5]: $RI = \sum_i W_i \cdot S_i$, where $W_i$ is the weightage of the i-th variable, ranging from 0 to 1000, $S_i$ is the sensitivity index of the i-th variable, ranging from 0 to 1, and RI is the Risk Index, ranging from 0 to 1000. The Risk Index (RI) can be used to classify landfill sites to be closed or rehabilitated. A sensitivity value of 0 indicates little or no danger, while a value of 1 indicates the highest potential danger. A higher RI value indicates greater risk to human health and the need for immediate action at the landfill site; priority decreases as the total value decreases. The lowest values indicate low sensitivity and small environmental impacts. Hazard-level evaluation criteria based on the landfill risk index value can be seen in Table 1.
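A minimal sketch of the RI computation, assuming the weightages and sensitivity indices of the site attributes have already been scored; the three example attributes and values below are illustrative, not the study's actual scores:

```python
def risk_index(weights, sensitivities):
    """RI = sum_i W_i * S_i, with the W_i summing to 1000 and each S_i in [0, 1]."""
    return sum(w * s for w, s in zip(weights, sensitivities))

# Three hypothetical attributes with weightages summing to 1000:
print(risk_index([300, 450, 250], [0.6, 0.5, 0.4]))  # -> 505.0
```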
III. RESULTS
Map measurements with ArcGIS 10.2 software show that the closest water source used in operational and maintenance activities at the Cahaya Kencana landfill site is a river used for irrigation, at a closest distance of 967 meters. The soil is pebble sand, which can be assumed to be a coarse sand with a permeability value between 1 and 0.01 cm/sec [9]. The distance from the Cahaya Kencana site to the nearest conservation forest or critical habitat, namely the education forest designated in the Banjar district spatial plan for 2013–2032, is 2,205 meters (2.2 km). The waste accumulation rate used is 2.75 liters/person/day, according to the major city classification [11].
The garbage level in the sanitary landfill at the Cahaya Kencana site is 10 meters high over a land area of 8,089.73 m². The estimated remaining landfill life is 1 year, i.e., until 2021. Meanwhile, based on the community questionnaire results, on average 60% of the community accepts the existence and rehabilitation of the open waste landfill.
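As a rough consistency check on the 1-year figure, one can divide the cell volume by the average daily inflow quoted earlier; this back-of-envelope sketch ignores compaction, cover soil and the volume already in place, so it only indicates the order of magnitude:

```python
area_m2, height_m = 8089.73, 10.0     # sanitary cell footprint and garbage level
inflow_m3_per_day = 206.37            # average waste inflow, 2014-2018

capacity_m3 = area_m2 * height_m      # ~80,897 m^3 of cell volume
days = capacity_m3 / inflow_m3_per_day
print(round(days / 365, 2), "years")  # ~1.07 years, the same order as the ~1 year estimate
```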
Laboratory tests of the leachate gave a BOD of 210 mg/L, a COD of 501.73 mg/L, and a TDS of 2,254 mg/L. The solid waste moisture content is 58.23%. The well water at the Cahaya Kencana site cannot be used as a source of drinking water, because the water hardness parameter has a value of 1,444.20 mg/L and E. coli counts reach 1.3 × 10³ per 100 mL. Based on Table 4, optimal processing efficiency can be achieved for each pond with a modified design. For example, for the anaerobic pond length the ratio is 1/5, so the existing pond can be partitioned into 5 parts, allowing the leachate to be processed optimally according to the hydraulic retention time (HRT). This also applies to each of the other ponds, with the addition of connecting floodgates between the partitions. These floodgates can be opened at the time of the
IV. CONCLUSION
The Risk Index (RI) assessment using IRBA yielded a value of 524.007, in the moderate hazard evaluation category, so that the Cahaya Kencana landfill site can continue to be operated and gradually rehabilitated into a controlled landfill. The leachate treatment unit design can be modified so that the resulting effluent complies with the quality standards that have been set.
Results on high transverse momentum quarkonium production and dissociation in heavy ion collisions
We calculate the yields of quarkonia in heavy ion collisions at RHIC and the LHC as a function of the transverse momentum. Based upon NRQCD, these results include both color-singlet and color-octet contributions and feed-down effects from excited states. In reactions with ultra-relativistic nuclei, we focus on the consistent implementation of dynamically calculated nuclear matter effects, such as coherent power corrections, cold nuclear matter energy loss and the Cronin effect, in the initial state and collisional dissociation of quarkonia in the final state, as they traverse through the QGP. Theoretical results are presented for J/ψ and Υ and compared to experimental data where applicable. At RHIC, a good description of the high-pT J/ψ modification observed in central Cu+Cu and Au+Au collisions can be achieved within the model uncertainties. We find that J/ψ measurements in proton-nucleus reactions are needed to constrain the magnitude of cold nuclear matter effects. At the LHC, a good description of the experimental data can be achieved only in mid-central and peripheral Pb+Pb collisions.
Introduction
Melting of heavy quarkonium states, like the J/ψ and the Υ, due to color screening in a deconfined quark-gluon plasma (QGP) [1] has been proposed as one of the principal signatures for its formation. An expected experimental consequence of this melting in the thermal medium created in heavy ion collisions (HIC) is a suppression of the yields of heavy mesons, when compared to their yields in nucleon-nucleon (NN) collisions scaled with the number of binary interactions.
Description of quarkonia is provided by non-relativistic quantum chromodynamics (NRQCD) [2]. In this picture, the QQ state in the color-singlet combination is the lowest-order Fock-space component of the full quarkonium wavefunction. Higher Fock components in the wavefunction have additional partons and are suppressed by positive powers of the small parameter v = p/M (velocity). In our calculation we take into account the Fock components that allow for both color-singlet and color-octet QQ contributions. The NRQCD formalism has been used to calculate differential yields of heavy mesons as a function of the transverse momentum p_T in p+p collisions. Our goal is to combine the NRQCD formalism with cold nuclear matter (CNM) effects and the effects of propagation of quarkonia through a hot QGP to calculate the final yields in HIC [3]. We place special emphasis on the application of the collisional meson dissociation mechanism [4,5], which has provided a good description of D and B meson suppression, to quarkonia. We present results for the nuclear modification ratio

R_AB(p_T) = [dN^AB/(dy dp_T)] / [⟨N_bin⟩ dN^pp/(dy dp_T)] ,   (1)

in reactions with heavy nuclei. We discuss Au+Au and Cu+Cu collisions at RHIC at √S = 0.2 TeV per nucleon pair and Pb+Pb collisions at the LHC at √S = 2.76 TeV per nucleon pair.
Quarkonium production in p+p collisions
In this section we describe the production of quarkonia at high transverse momentum in p+p collisions. It provides a baseline for the calculation of the nuclear modification factor R AB (p T ) defined above in Eq. 1. It also gives the initial unquenched spectrum of "proto-quarkonia" in the heavy ion collision, excluding CNM effects and effects of the propagation of the quarkonium states through the QGP medium.
The dominant processes in evaluating the differential yields of heavy mesons as a function of p_T are the 2 → 2 processes of the kind g + q → H + q, q + q̄ → H + g and g + g → H + g, where H refers to the heavy meson. We label the process generically as a + b → c + d, where a and b are light incident partons, c refers to H and d is a light final-state parton. Given the matrix element M_{ab→cd} for the process, the cross section has the factorized form

dσ/(dy d²p_T) = Σ_{ab} ∫ dx_a dx_b φ_a(x_a, μ_f) φ_b(x_b, μ_f) dσ̂_{ab→cd}/(dy d²p_T) ,   (2)

where φ_a (φ_b) is the distribution function of parton a (b) in the incident hadron traveling in the +z (−z) direction and x_a (x_b) denotes the large lightcone momentum fraction carried by the parton. In Eq. (2), √S is the center-of-mass energy of the incident hadrons and ŝ, t̂ and û are the parton-level Mandelstam variables; momentum-energy conservation fixes x_b in terms of x_a, y and p_T. We take the factorization and renormalization scales μ_f, μ_R to be m_T = √(p_T² + m_H²), and the invariant cross section follows from Eq. (2). We use NRQCD [2] results to calculate the production of quarkonia in p+p collisions. NRQCD provides a systematic procedure to compute any quantity as an expansion in the relative velocity v of the heavy quarks in the meson. For example, the wavefunction of the J/ψ meson (analogous expressions hold for the ψ(2S), Υ, Υ(2S) and Υ(3S)) is written as

|J/ψ⟩ = O(1) |QQ[3S1^(1)]⟩ + O(v) |QQ[3PJ^(8)] g⟩ + O(v²) |QQ[3S1^(8)] g g⟩ + O(v²) |QQ[1S0^(8)] g⟩ + ... .   (5)

The differential cross section for the prompt (as opposed to inclusive, which includes contributions from B-hadron decay) and direct (as opposed to indirect, from the decay of heavier charmed mesons) production of J/ψ can also be calculated in NRQCD. It can be written as the sum of the contributions

dσ^{J/ψ}/(dy dp_T) = Σ_n dσ(QQ[n]) ⟨O^{J/ψ}[n]⟩ ,   n = 3S1^(1), 1S0^(8), 3S1^(8), 3PJ^(8), ... ,   (6)

where the quantity in the brackets [ ] represents the angular momentum quantum numbers of the QQ pair in the Fock expansion and the label on [ ] refers to the color structure of the QQ pair, 1 being the color-singlet and 8 being the color-octet. The dots represent terms which contribute at higher powers of v. The short-distance cross sections dσ(QQ[n]) correspond to the production of a QQ pair in a particular color and spin configuration, while the long-distance matrix element ⟨O^{J/ψ}[n]⟩ corresponds to the probability of the QQ state converting to the quarkonium wavefunction. This probability includes any necessary prompt emission of soft gluons to prepare a color-neutral system that matches onto the corresponding Fock component of the quarkonium wavefunction. Expressions similar to the ones shown in Eqs. (5), (6) hold for other quarkonium states (such as Υ, χ_c, χ_b, ...). The parton-level cross sections and non-perturbative probabilities are available in the literature; see the list of references in [3]. An illustration of the lowest-order (LO) NRQCD calculation of quarkonium production is given in Fig. 1. The left panel shows the cocktail of contributions that gives the baseline production yield for J/ψ at the LHC at √S = 2.76 TeV in the NRQCD formalism. For the direct production there are four contributions: one color-singlet contribution ([3S1^(1)]) and three color-octet contributions ([1S0^(8)], [3S1^(8)] and [3PJ^(8)]). For the prompt yield, we add the feed-down contributions from the χ_cJ, which are also shown in Fig. 1. To obtain the feed-down contributions, we multiply the corresponding χ_cJ yields by the corresponding branching fractions B(χ_cJ → J/ψ + γ). This gives the p+p baseline contribution for the J/ψ. We have included experimental data on prompt J/ψ production at √S = 2.76 TeV [6]. Each of the species will undergo a different modification due to CNM and QGP effects.
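As a small illustration of the feed-down bookkeeping just described, the sketch below assembles a prompt J/ψ yield from a direct yield plus χ_c1 and χ_c2 contributions weighted by their branching fractions. The p_T grid and yields are placeholders, and the branching fractions are rough, PDG-like values rather than the ones used in the calculation.

```python
# Bookkeeping sketch for the prompt J/psi yield: direct production plus
# chi_cJ feed-down weighted by branching fractions. Yields are placeholders;
# the branching fractions are rough PDG-like numbers, and the small
# chi_c0 contribution is neglected here.
import numpy as np

pT = np.array([6.0, 8.0, 10.0, 12.0])           # GeV, example grid
direct_jpsi = np.array([5.0, 2.0, 0.9, 0.4])    # arbitrary units
chi_c1 = np.array([3.0, 1.2, 0.5, 0.2])
chi_c2 = np.array([2.5, 1.0, 0.4, 0.18])

B_c1, B_c2 = 0.34, 0.19   # approx. B(chi_c1,2 -> J/psi + gamma)

prompt_jpsi = direct_jpsi + B_c1 * chi_c1 + B_c2 * chi_c2
for p, y in zip(pT, prompt_jpsi):
    print(f"pT = {p:4.1f} GeV: prompt yield = {y:.2f}")
```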
To obtain the p+A and A+A yields, we calculate the modified yields for each species and combine them again to obtain the net modification. The right panel of Fig. 1 shows a calculation of the yields of bb states. We obtain the results for Υ(1S), Υ(2S) and Υ(3S) separately (but include the relevant χ_b feed-down for each). If only the Υ(1S) yields are measured, we can obtain the inclusive yield by adding to these yields the feed-down from Υ(2S) (branching fraction 31%) and Υ(3S) (branching fraction 16.4%). The theoretical calculation gives a good description of the experimental data within the limitations of the LO NRQCD approach. Specifically, the calculated spectra are somewhat harder than the experimental measurements and cannot be pushed down to low p_T. Any difference in normalization cancels out in the calculation of R_AA.
Cold Nuclear Matter effects
In heavy ion reactions, the production yields of energetic particles are always affected by cold nuclear matter (CNM) effects. These include nuclear shadowing, initial state energy loss and transverse momentum broadening (also interpreted as the origin of the Cronin effect). In our calculation, the CNM effects are evaluated from the elastic, inelastic and coherent scattering processes of partons in large nuclei, see Refs. [8,9,10]. For quarkonia, cold nuclear matter effects are still not well-understood. To our knowledge, Ref. [3] is the first attempt to calculate the Cronin effect for QQ production. We do not try to constrain its magnitude phenomenologically and point out that data from p+A reactions is needed to constrain all CNM effects. We find that including transverse momentum broadening in the same way as is done for light final states reduces the suppression for p T between 3 and 10 GeV significantly and may actually lead to a small enhancement of the charmonium cross section. It is not clear, given the large error bars, whether the Au+Au and Cu+Cu data at RHIC are better described by the Cronin calculation.
In contrast, it is evident that the Cronin effect is not consistent with Pb+Pb data at the LHC, which sees a large attenuation at p T ∼ 8 GeV. At present, at high p T , initial-state energy loss and the Cronin provide an estimate for the uncertainty range of CNM effects.
Quark-Gluon Plasma effects
In this section we present the details of the dissociation model of quarkonium propagation through the QGP. The essence of the dissociation model for heavy mesons is that they have short formation times and can therefore form in the medium on a time scale t_form. Interaction with the thermal medium can dissociate the mesons on a time scale t_diss. The final yield is given by a rate equation which takes into account the formation and dissociation processes. In the next section we first discuss the rate equation abstractly, using t_form and t_diss as parameters. In the later sections we estimate t_form and calculate t_diss. For more details on the dissociation model and its application to the phenomenology of open heavy flavor, see [4]. The approach to estimating the formation time of quarkonium states differs considerably from the approach used for open heavy flavor [5,4] or light particles [11] that come from the fragmentation of a hard parton. For quarkonia, the QQ state is prepared instantly (∼ 1/√(p_T² + M²)) in the hard collision and subsequently expands to the spatial extent determined by the size of the asymptotic wavefunction. In this case all spatial directions are important. The velocity of the heavy quarks in the meson and a typical upper limit of the meson formation time can be evaluated as

β_Q = κ / √(κ² + m_Q²) ,   t_form ≈ a / β_Q ,   (8)

where the typical momenta κ and sizes a are obtained by solving the Schrödinger equation for the corresponding quarkonium state. In this paper we are interested in high transverse momentum mesons, in which case there is a boost in the direction of propagation and, consequently, time dilation:

t_form = γ a / β_Q .   (9)

Here, γ is the meson boost factor. Since the formation process is non-perturbative and cannot be modeled accurately, the values of t_form obtained from Eq. (9) should be considered an estimate of the upper bound. Therefore, in addition to calculating the final yields for t_form = t_f^max = γ a / β_Q, we also calculate the yields for t_form = t_f^min = γ a / (2β_Q). The propagation of a QQ state in matter is accompanied by collisional interactions mediated at the partonic level, as long as the momentum exchanges can resolve the partonic structure of the meson. Two effects are related to these interactions: a) a broadening in the distribution of quarkonium states relative to the original direction; b) a modification of the quarkonium wavefunction. The former effect integrates out as long as we consider inclusive production. The latter effect leads to the dissociation of the meson state. The relevant quantity, Eq. (10), is the transverse momentum squared transferred to the partons of the meson per unit path length, ξ μ²/λ_q. Here μ² is the typical squared transverse momentum transfer, given by the Debye screening scale, μ = gT for a gluon-dominated plasma; λ_q is the mean scattering length of the quark; and ξ ∼ few is an enhancement factor from the power-law tail of the differential scattering cross section. Finally, x_0 is the initial position of the propagating QQ and β is the velocity of the heavy meson, which together determine the path along which the momentum transfer is accumulated. By evaluating the overlap between the initial and final meson wavefunctions, the meson survival probability P_surv can be obtained. The dissociation rate 1/t_diss, Eq. (11), then follows from the time dependence of P_surv. The uncertainty in t_diss arises from the uncertainty in the coupling between the heavy quarks and the medium (described by the strong coupling constant g) and the enhancement that arises from the power-law tails of the Moliere multiple scattering in the Gaussian approximation to transverse momentum diffusion [5] (described by ξ).
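A back-of-the-envelope version of the formation-time estimate, Eqs. (8)-(9), can be coded directly; the sketch below uses assumed charmonium-like values for κ, a, and the charm-quark mass, so the numbers are illustrative only.

```python
# Back-of-the-envelope formation-time estimate, Eqs. (8)-(9):
# beta_Q = kappa / sqrt(kappa^2 + m_Q^2), t_form^max = gamma * a / beta_Q.
# kappa, a and m_Q below are assumed charmonium-like values, not the ones
# obtained from the Schrodinger equation in the actual calculation.
import math

m_Q = 1.5     # GeV, charm-quark mass (assumed)
kappa = 0.8   # GeV, typical quark momentum in the meson (assumed)
a = 2.0       # GeV^-1, typical meson size (assumed)

p_T, m_H = 10.0, 3.1                        # GeV, J/psi-like kinematics
gamma = math.sqrt(p_T**2 + m_H**2) / m_H    # boost factor of the meson

beta_Q = kappa / math.sqrt(kappa**2 + m_Q**2)
t_max = gamma * a / beta_Q                  # upper estimate, GeV^-1
t_min = t_max / 2.0                         # lower estimate used in the text

hbar_c = 0.197                              # GeV*fm converts GeV^-1 to fm/c
print(f"t_form between {t_min*hbar_c:.1f} and {t_max*hbar_c:.1f} fm/c")
```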
Let us denote by N^hard_QQ(p_T, α) the number of perturbatively produced point-like QQ states at transverse momentum p_T. Up to an overall multiplicative Glauber scaling factor T_AB, these p_T number distributions are directly proportional to the cross sections discussed in Section 2. The rate of formation of the corresponding hadronic state, with the appropriate quantum numbers {α}, is given by the inverse formation time 1/t_form(p_T, α). In the presence of a medium, the meson multiplicity, which we denote by N^meson_QQ(p_T, α), is reduced by collisional dissociation processes at a rate 1/t_diss(p_T, α). Finally, the number of dissociated QQ pairs with a net transverse momentum p_T is N^diss_QQ(p_T, α). The dynamics of such a system is governed by the following set of ordinary differential equations:

∂_t N^hard_QQ(t; p_T, α) = − [1/t_form(p_T, α)] N^hard_QQ(t; p_T, α) ,   (12)
∂_t N^meson_QQ(t; p_T, α) = [1/t_form(p_T, α)] N^hard_QQ(t; p_T, α) − [1/t_diss(p_T, α)] N^meson_QQ(t; p_T, α) ,   (13)
∂_t N^diss_QQ(t; p_T, α) = [1/t_diss(p_T, α)] N^meson_QQ(t; p_T, α) ,   (14)

subject to the constraint N^hard_QQ(t; p_T, α) + N^meson_QQ(t; p_T, α) + N^diss_QQ(t; p_T, α) = N^hard_QQ(p_T, α), and is uniquely determined by the initial conditions N^hard_QQ(0; p_T, α) = N^hard_QQ(p_T, α), N^meson_QQ(0; p_T, α) = N^diss_QQ(0; p_T, α) = 0.
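Eqs. (12)-(14) can be integrated numerically in a few lines; the sketch below does so with constant t_form and t_diss, which is an assumption made for illustration (in the full calculation t_diss follows the Bjorken-expanding medium), and the numerical values are placeholders.

```python
# Numerical sketch of the formation/dissociation rate equations (12)-(14).
# Constant t_form and t_diss are an assumption made for illustration; in the
# full calculation t_diss tracks the Bjorken-expanding medium density.
from scipy.integrate import solve_ivp

t_form, t_diss = 2.0, 3.0        # fm/c, placeholder values

def rhs(t, N):
    n_hard, n_meson, n_diss = N
    return [-n_hard / t_form,
            n_hard / t_form - n_meson / t_diss,
            n_meson / t_diss]

# Initial conditions: all pairs start as point-like "proto-quarkonia".
sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0, 0.0])
n_hard, n_meson, n_diss = sol.y[:, -1]
print(f"meson fraction at t = 10 fm/c: {n_meson:.3f}")
# The sum n_hard + n_meson + n_diss is conserved, matching the constraint.
```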
Numerical results for the nuclear modification factors
In this section we neglect the Cronin effect but include initial-state cold nuclear matter energy loss and shadowing. Let us first consider the nuclear modification factor for J/ψ mesons. From Fig. 2 we see that the measurement of R_AA in RHIC Au+Au collisions [12] shows a suppression factor of about 0.8 at p_T ∼ 6 GeV. Even including the uncertainty in our model parameters, we obtain a somewhat stronger suppression (for p_T ∼ 6 GeV, R_AA ∼ 0.35 − 0.45 for Au+Au) than the one currently observed at RHIC. In Fig. 2, our results for the prompt yields of J/ψ mesons are marked by upper and lower yellow bands corresponding to the upper (t_form^max) and lower (t_form^min = t_form^max/2) limits of our formation-time estimate, respectively. The bands themselves correspond to our estimate of the uncertainty in the sets of parameters that determine the coupling of the heavy quarks to the in-medium partons [g = 1.85, ξ = 2 (the minimum considered coupling gives the upper limit of the yellow band) and g = 2, ξ = 3 (the maximum considered coupling gives the lower limit of the yellow band)]. The pronounced effect of the variation of the formation time can be understood intuitively as follows. From Eq. (14), we see that the dissociation mechanism is operative only when N^meson_QQ is substantial, i.e. after t_form. Since the upper limit for the formation time of quarkonia can be of the order of several fm/c, the density of the medium at t_form^max is reduced considerably due to Bjorken expansion, giving weaker dissociation and smaller suppression.
In the right panel of Fig. 2 we show the p_T-averaged suppression of J/ψ mesons versus centrality. We present a comparison to the ATLAS central-to-peripheral data [13]. Our theoretical predictions compare well to the measurement in mid-central and peripheral reactions; the deviation between data and theory appears only for N_part > 300 (in the case of the CMS data, for N_part > 200). The CMS experiment at the LHC has also measured R_AA for Υ(nS) states in Pb+Pb collisions at √S = 2.76 TeV per nucleon pair [6]. We cannot calculate the equivalent ratio of R_AAs for inclusive production because our formalism for the production and propagation of Υs is not applicable for p_T(Υ) ≤ 6 GeV; in our approach the meson should be boosted relative to the medium. Instead, in the left panel of Fig. 3 we show a comparison to the minimum-bias p_T-differential Υ(1S) CMS nuclear modification measurement. In this case, the theoretical calculation is performed for the average number of participants in minimum-bias collisions. The combined results for R_AA (0-20% central) and R_pA (minimum bias) for bottomonia are given in the right panel of Fig. 3. The effect of transverse momentum broadening is much smaller for bottomonia than for charmonia. This can be understood intuitively as follows: the mechanism for Cronin enhancement in this calculation is that initial-state scattering increases the typical transverse momentum carried by the incident partons by a few GeV. For quarkonia, there is an additional scale m_H; for bottomonia this mass scale is considerably larger than the transverse momentum broadening scale, and a few additional GeV do not increase the yield significantly. Preliminary Υ suppression measurements from STAR are now available at RHIC [14]. More differential p_T measurements will shed light on the similarities and differences in the CNM effects at RHIC and at the LHC.
Conclusions
In summary, we carried out a detailed study of high transverse momentum quarkonium production and modification in heavy ion reactions at RHIC and at the LHC [3]. We used an NRQCD approach to calculate the baseline quarkonium cross sections. We found that for J/ψ mesons the theoretically computed spectrum is slightly harder than the one observed in experiment. For all Υ states (1S, 2S, 3S) the agreement is within a factor of two when we consider both the Tevatron and the LHC data. In reactions with heavy nuclei, we presented theoretical model predictions for the nuclear modification of quarkonium yields at high p_T in minimum-bias p(d)+A and 0-20% central A+A collisions. We focused on the consistent inclusion of both cold (CNM) and hot (QGP) nuclear matter effects in different colliding systems at different center-of-mass energies. We compared our results to published and preliminary experimental data, where applicable. We found that ignoring the Cronin effect leads to a small overestimate of the suppression of J/ψ mesons in the p_T region between 5 GeV and 10 GeV in central Cu+Cu and Au+Au collisions at √S = 0.2 TeV at RHIC. Including initial-state transverse momentum broadening leads to somewhat better agreement between theory and the current experimental data only for the Cu+Cu reactions; a smaller Cronin enhancement would work better. We demonstrated that CNM effects can be easily constrained in d+A reactions at RHIC. For example, the d+Au calculation that includes power corrections and cold nuclear matter energy loss predicts 20% suppression of the J/ψ cross section, while including transverse momentum broadening may lead to as much as 50% enhancement in the region of the Cronin peak. We also found that the Cronin-like modification of the Υ spectrum is much smaller. Current data on high-p_T quarkonium production at RHIC do not indicate the presence of thermal effects at the level of the quarkonium wavefunction within our theoretical framework.
The conclusions from our theoretical model comparison to the √ S = 2.76 TeV LHC data are not as clear cut. Our calculations underestimated the suppression for J/ψ production reported by the ATLAS and CMS experiments in the most central Pb+Pb collisions. On the other hand, it works quite well in mid-central and peripheral reactions. We found that the Cronin enhancement at the LHC is smaller than the one at RHIC due to the harder spectra. However, any Cronin enhancement appears incompatible with the experimental results. For Υ mesons, p T -differential data in A+A collisions is scarce. CMS measurements of minimum bias Υ(1S) indicate that the low p T suppression may decrease or disappear at high p T . We plan to address the possibility of thermal effects at the level of the quarkonium wavefunction in a separate publication. We also plan to extend the meson dissociation mechanism to photon-tagged heavy flavor [15,16] and heavy flavor-tagged jets [17].
Complementary & Alternative Methods
The term "complementary and alternative methods" (CAM) refers to products and regimens that individuals may employ either to enhance health and well-being or to cure disease. Complementary therapies are used along with mainstream care to manage symptoms, relieve stress, and enhance quality of life. In contrast, alternative methods are used instead of evidence-based medical therapy. The dangers inherent in bogus alternatives are two-fold: First, they may cause direct harm, and second, they may be ineffective, resulting in disease progression. In this section, we hope to help providers guide patients toward CAMs that might improve quality of life and away from those that are ineffective, toxic, and wasteful of time and money. —David Rosenthal, MD, and Terri Ades, RN, MSN, AOCN, Section Editors
The term "complementary and alternative methods" (CAM) refers to products and regimens that individuals may employ either to enhance health and well being or to cure disease. Complementary therapies are used along with mainstream care to manage symptoms, relieve stress, and enhance quality of life. In contrast, alternative methods are used instead of evidence-based medical therapy. The dangers inherent in bogus alternatives are two-fold: First, they may cause direct harm, and second, they may be ineffective, resulting in disease progression. In this section, we hope to help providers guide patients toward CAMs that might improve quality of life and away from those that are ineffective, toxic, and wasteful of time and money.
-David Rosenthal, MD and Terri Ades, RN, MSN, AOCN, Section Editors COMPLEMENTARY ALTERNATIVE METHODS
&
New in vitro research indicates that vitamin C can promote formation of highly reactive lipid metabolites that can lead to DNA mutations that might increase cancer risk. The report, published in Science (2001;292:2083-2086), 1 is raising questions about the increasingly common use of vitamin C supplements.
" [One] thing which has captured people's imaginations is the fact that there's never been a negative effect of vitamin C reported before," says Ian Blair, PhD, coauthor of the study, and A. N. Richards, professor of pharmacology and director of the center for cancer pharmacology at the University of Pennsylvania in Philadelphia.
Vitamin C is necessary to health and functions as a scavenger of free radicals, highly reactive compounds that can cause mutations. For this reason, many scientists and lay people have speculated that high doses of the vitamin should be useful for cancer chemoprevention. However, most observational and interventional studies have not supported this hypothesis. In a surprising twist, Blair and his colleagues found that vitamin C, like free radicals, can also promote the formation of harmful genotoxins.
The in vitro experiments started with linoleic acid, a polyunsaturated fatty acid found in sunflower, corn, and safflower oils. Linoleic acid is the major fatty acid found in blood and is normally broken down into a compound called lipid hydroperoxide. When vitamin C was added to lipid hydroperoxide under conditions found in the body, the dangerous genotoxins formed within hours.
Report No Cause for Alarm
However, David Ringer, PhD, MPH, a scientific program director for the American Cancer Society (ACS), believes that the report presents no cause for alarm. "They're very interesting, preliminary results," says Ringer. "They definitely demonstrate that, in vitro, vitamin C can participate in the creation of a compound known to be genotoxic. Whether that actually occurs in the body or not, obviously they don't have data that speak to that. But they raise the interesting question: does it happen and should we be concerned about it?" Epidemiological studies have not found a link between taking vitamin C supplements and the risk of developing cancer. In other words, the supplements do not appear to be either causing or preventing cancer. Perhaps the reason is that vitamin C has both beneficial antioxidant and harmful genotoxic effects in vivo, which balance each other out. The practical significance of this hypothesis is that if the lipid hydroperoxide pathway could be blocked, the chemopreventive potential of vitamin C might be realized. Blair speculates that megadosing with vitamin C may constitute the highest risk. "People will take one [vitamin C] tablet and then think, 'well, if I take five I'll be even more protected,'" Blair says. "When you examine the scientific literature, you'll see that there's absolutely no scientific evidence that more is better. Now there's a potential for more to be worse. It seems to have reawakened this notion that it's better to get your vitamin C from your diet than from supplementation." This study and Blair's interpretation provide additional support for the ACS view on the issue. According to the ACS Guidelines on Diet, Nutrition, and Cancer Prevention, "strong evidence associates a diet rich in fruits, vegetables, and other plant foods with reduced risk of cancer, but there is no evidence at this time that supplements can reduce cancer risk. The few studies in human populations that have attempted to determine whether supplements can reduce cancer risk have yielded disappointing results."
BLACK COHOSH NO BETTER THAN PLACEBO FOR BREAST CANCER SURVIVORS WITH HOT FLASHES
Many advocates of herbal medicine claim that black cohosh (Cimicifuga racemosa) reduces the intensity of hot flashes associated with menopause, but researchers at Columbia University in New York found that for women with breast cancer, the herb was no more effective than placebo for relieving most menopausal symptoms. The Columbia study appeared in the May 15 issue of the Journal of Clinical Oncology (2001;19:2739-2745). 2 "We initiated this trial because hot flashes have an adverse effect on breast cancer survivors' quality of life, and a number of patients have said that black cohosh relieved symptoms," said study co-author Judith S. Jacobson, DrPH, assistant professor of clinical public health at Columbia University's Mailman School of Public Health.
During the study, breast cancer patients who had completed primary therapy were randomly assigned, after stratification by Tamoxifen use, to receive either black cohosh or placebo. To gauge the intensity of hot flashes and other menopausal symptoms, subjects recorded the number and the intensity (on a scale of 1 to 3) of hot flashes, and completed a detailed menopausal symptom survey.
Those who received black cohosh reported significantly less sweating than those who received placebo (p < 0.04). However, there was no comparable decrease in other menopausal symptoms, including heart palpitations, headaches, poor sleep, depression, irritability, or nervousness. There were no significant differences in the number or intensity of hot flashes between the placebo and study groups, although both groups showed a reduction in the number and intensity of hot flashes during the course of the study. The observations applied both to women taking Tamoxifen and to those not using the medication. Black cohosh did not have any estrogen-like effects on luteinizing hormone or follicle-stimulating hormone levels. No adverse effects were attributed to the herbal treatment.
The study outcome does not mean that black cohosh has no medicinal value for menopausal women, according to Jacobson. "We asked only what we can learn about the effects of 40 mg of black cohosh daily for two months on breast cancer patients," she says.
Jacobson says the study illustrates the importance of controlled clinical trials in testing herbal treatments to determine if they are harmful or live up to claims made by
Complementary & Alternative Methods
manufacturers. Writing in the JCO article, Jacobson and colleagues note that "…the placebo effects in our study were significant; without a control group, we might easily have attributed all the improvement in menopausal symptoms to black cohosh." "I'm glad researchers are conducting these kinds of studies," says David Rosenthal, MD, professor of medicine at Harvard Medical School and chair of the ACS's Advisory Group on Complementary and Alternative Methods of Cancer Management. "We want an answer to the question of whether herbal substances work. They need to be tested because people are using them whether we like it or not." Rosenthal adds that though the study was small, involving fewer than 70 participants, the results shift the burden of proof to practitioners who claim that black cohosh is an effective remedy for hot flashes.
Jacobson says that another larger and longer randomized trial of black cohosh for women who do not have cancer is underway at Columbia.
FLAXSEED MIGHT HELP IN PROSTATE CANCER PREVENTION OR TREATMENT
In a short-term pilot study of prostate cancer biomarkers, a low-fat diet supplemented with ground flaxseed reduced total serum testosterone, free androgen index, and tumor proliferation index, and increased apoptosis. Reported in the July issue of Urology (2001;58:47-52), 3 these results warrant further study of this dietary intervention as a strategy for prostate cancer risk reduction or as a complementary method for men with prostate cancer.
Flaxseed is high in lignans, fiber-related compounds that can bind androgens and increase their elimination during enterohepatic circulation. Lignans also act as phytoestrogens, and this combination of endocrine effects might be reasonably expected to inhibit prostate cancer formation and growth. In addition, flaxseed is a rich source of omega-3 fatty acids, which have been linked in some epidemiological studies to a lower incidence of prostate cancer.
Wendy Demark-Wahnefried, PhD, RD, LDN, and colleagues from Duke University Medical Center and Durham Veterans Affairs Medical Center, tested this dietary intervention in 25 men who were recently diagnosed with prostate cancer and were awaiting radical prostatectomy. Men in the study followed a low-fat (≤ 20% of total calories) diet supplemented daily with 30 grams of ground flaxseed for an average period of 34 days.
During this time, the participants' serum testosterone levels and their free androgen index decreased significantly, by 15% and 19%, respectively. Compared with historical controls (matched by age, race, PSA level at diagnosis, and biopsy Gleason score), prostatectomy specimens of the subjects in the study had a significantly lower tumor proliferation index and a significantly higher apoptotic index, measured by MIB-1 labeling and TdT-mediated dUTP-biotin nick end-labeling, respectively.
In their discussion, the authors point out that this exploratory study was intended as an initial look at their hypothesis and was limited by its use of historical controls and by its short duration. They were still able to conclude, "These findings suggest that a flaxseed-supplemented, low-fat diet may have an effect on prostate cancer biology that may be mediated through a hormonal mechanism." Tim Byers, MD, MPH, co-chair of the ACS Workgroup on Nutrition and Physical Activity for Cancer Survivors, and professor of preventive medicine, University of Colorado School of Medicine, agrees with the authors' conclusion but notes that it would be premature to make any firm recommendations for men with prostate cancer based on this small, uncontrolled study of intermediate biomarkers. Byers says one needs data on clinical outcomes. "This was a study in which two dietary changes were tested at the same time: a diet very much lower in total fat than most men consume (20% of calories from fat), and flaxseed oil supplements. The reductions in cholesterol and testosterone levels observed may be due as much to the low-fat diet as to the flaxseed. If this dietary intervention does have effects on testosterone metabolism, it would likely not be a more effective alternative to conventional anti-androgen therapy and would probably have little impact as a complementary therapy for men already taking conventional hormonal therapy. On the other hand, as a complementary dietary intervention, this combination of substantial reduction in dietary fats along with flaxseed supplements might eventually turn out to be useful for men who have surgery, radiation, or expectant management without undergoing conventional hormonal therapy. Low-fat diets of this type might also be effective in reducing the risk of getting prostate cancer. The side effects of this type of intervention are minimal, and this is a very heart-healthy, low-fat diet. I look forward to continued research in this important area, in which the interventions might be evaluated in the context of randomized, controlled trials."
HERBAL REMEDIES MAY POSE RISK OF PERIOPERATIVE PROBLEMS
Unlike conventional over-the-counter or prescription drugs, which cannot be marketed without submitting laboratory and clinical data to document safety and efficacy, dietary supplements are exempt from these pre-market requirements. Instead, availability of such information depends on voluntary research efforts by health care professionals.
An article in the July 11 issue of JAMA (2001;286:208-216) 4 reviews the literature on the use of herbal remedies as a factor contributing to perioperative complications. In their discussion, Michael K. Ang-Lee and colleagues from the Pritzker School of Medicine, University of Chicago, highlight the importance of good patient-physician communication in helping to prevent such adverse effects.
Ang-Lee et al. note concerns regarding perioperative use of eight common herbs (echinacea, ephedra, garlic, ginkgo, ginseng, kava, St. John's Wort, and Valerian) and provide recommendations for their perioperative discontinuation. They note, "Complications may arise from the herbs' direct and pharmacodynamic or pharmacokinetic effects. Direct effects include bleeding from garlic, ginkgo, and ginseng; cardiovascular instability from ephedra; and hypoglycemia from ginseng. Pharmacodynamic herb-drug interactions include potentiation of the sedative effects of anesthetics by kava and Valerian. Pharmacokinetic herb-drug interactions include increased metabolism of many drugs used in the perioperative period by St. John's Wort." Dr. Barrie Cassileth, chief of the integrative medicine service at Memorial Sloan-Kettering Cancer Center and member of the American Cancer Society Advisory Group on Alternative and Complementary Methods of Cancer Management, says, "Although the increasing prevalence of dietary supplement use has received considerable attention in medical journals and in the popular media, the issue is often not discussed by physicians and their patients." "Patients may be reluctant to talk about over-the-counter remedies," Cassileth says. "They may assume that these remedies do not 'count,' or are irrelevant, because they are natural products rather than prescription drugs.
Complementary & Alternative Methods
Concern about physician disapproval may play a role. On the other hand, physicians typically fail to ask about non-prescription therapies that patients are taking. They may not have the time to do so, and lack of knowledge about herbal remedies may make some reluctant to bring up a topic about which they can offer little in the way of informed advice." "Patients often incorrectly assume that herbal products are benign," Cassileth says. "Several herbs, including those mentioned in the study, have anticoagulant activity that may be further exacerbated in cancer patients with concomitant coagulopathies." Cassileth says, "Ang-Lee et al. effectively reviewed the available literature and raised awareness about herbs during the perioperative period. Radiation, chemotherapy, and surgery are three primary treatment modalities for cancer patients. Most oncologic surgery is scheduled well in advance, and disclosure of dietary supplement use should be requested and documented by all physicians caring for cancer patients." Cassileth concludes, "Herb use has important implications for radiation and chemotherapy as well as for surgery. Communication is a two-way street; physicians as well as patients have a responsibility to discuss patients' self-prescribed remedies. They should ask direct, open-ended questions to initiate communication and protect patients against disapproving reactions."
Predicting 90-day survival of patients with COVID-19: Survival of Severely Ill COVID (SOSIC) scores
Background Predicting outcomes of critically ill intensive care unit (ICU) patients with coronavirus-19 disease (COVID-19) is a major challenge to avoid futile and prolonged ICU stays. Methods The objective was to develop predictive survival models for patients with COVID-19 after 1-to-2 weeks in ICU, based on the COVID-ICU cohort, which prospectively collected characteristics, management, and outcomes of critically ill patients with COVID-19. Machine learning was used to develop dynamic, clinically useful models able to predict 90-day mortality using ICU data collected on day (D) 1, D7 or D14. Results Survival of Severely Ill COVID (SOSIC)-1, SOSIC-7, and SOSIC-14 scores were constructed with 4244, 2877, and 1349 patients, respectively, randomly assigned to development or test datasets. The three models selected 15 ICU-entry variables recorded on D1, D7, or D14. Cardiovascular, renal, and pulmonary functions on prediction D7 or D14 were among the most heavily weighted inputs for both models. For the test dataset, SOSIC-7's area under the ROC curve was slightly higher (0.80 [0.74–0.86]) than those for SOSIC-1 (0.76 [0.71–0.81]) and SOSIC-14 (0.76 [0.68–0.83]). Similarly, SOSIC-1 and SOSIC-7 had excellent calibration curves, with similar Brier scores for the three models. Conclusion The SOSIC scores showed that entering 15 to 27 baseline and dynamic clinical parameters into an automatable XGBoost algorithm can potentially accurately predict the likely 90-day mortality post-ICU admission (sosic.shinyapps.io/shiny). Although external SOSIC-score validation is still needed, it is an additional tool to strengthen decisions about life-sustaining treatments and to inform family members of the likely prognosis. Supplementary Information The online version contains supplementary material available at 10.1186/s13613-021-00956-9.
Introduction
Since January 2020, the world has been massively affected by the coronavirus-19 disease (COVID-19) outbreak. In that context, intensive care units (ICUs) are frequently forced to expand bed capacity in many countries.
Unusually long mechanical ventilation (MV) durations and ICU stays observed during the first wave are among the most distinctive characteristics of treating severe acute respiratory syndrome-coronavirus 2 (SARS-CoV-2)-infection-related acute respiratory distress syndrome (ARDS), with 90-day mortality ranging from 31 to 53% [1][2][3][4]. Although accurately predicting patients' clinical outcomes throughout this prolonged ICU stay can be difficult, effective recognition, at ICU admission and within the first 14 days, of those at high risk of death in-ICU is crucial to inform clinical decision-making and families of likely prognoses. It could also facilitate adequate resource allocation, including hospital beds and critical care resources, and risk-adjusted comparison of center-specific outcomes. Predicting outcomes of critically ill patients with COVID-19 being treated in the ICU is a major challenge, aimed at avoiding futile prolonged ICU stays and resource use, and at providing additional reliable information for decision-making concerning withholding or withdrawing life-sustaining treatment, especially within disease epicenters needing to triage the high-volume influx of patients.
COVID-19-survival models published to date tried to predict the risk of clinical deterioration of acute cases [5,6] using data from hospitalization day (D)1 [7]. To the best of our knowledge, none focused on predicting the survival of patients after 1-to-2 weeks in ICU. Taking advantage of the COVID-ICU-cohort database containing prospectively collected characteristics, management, and outcomes of patients admitted to ICUs for severe COVID-19 in France, Belgium, and Switzerland, between February and May 2020 [3], we used machine learning to develop three dynamic, clinically useful models able to predict 90-day mortality using in-ICU data collected on ICU D1, D7 or D14, respectively.
Study population and data collection
COVID-ICU is a multicenter, prospective cohort study, conducted in 149 ICUs from 138 centers, across three countries (France, Switzerland, and Belgium), launched by the Reseau Europeen de recherche en Ventilation Artificielle (REVA) network. Most of the centers were in France (135/138) whereas two were in Belgium and one in Switzerland. All consecutive patients, over 16 years old, admitted to the participating ICUs between February 25, 2020, and May 4, 2020, with laboratory-confirmed SARS-CoV-2 infection, were included. Among the 4643 patients admitted to the ICU, 4244 had available survival status up to D90 post-ICU admission.
Every day, study investigators completed a standardized electronic case report form. Details of the information collected are described elsewhere [3]. Briefly, baseline information collected within the first 24 h post-ICU admission (D1) was: age, sex, body mass index (BMI), Simplified Acute Physiology Score (SAPS)-II [8], Sequential Organ-Failure Assessment (SOFA) score [9], comorbidities, clinical frailty-scale category [10], date of the first symptom(s), and ICU admission date. A daily expanded dataset included respiratory support devices (oxygen mask, high-flow nasal cannula, noninvasive ventilation, or invasive MV), arterial blood gases, standard laboratory parameters, and adjuvant therapies for ARDS until D90. In-ICU organ dysfunctions included acute kidney failure requiring renal replacement therapy, proven thromboembolic complications, confirmed ventilator-associated pneumonia (VAP) or bacterial coinfection, and cardiac arrest. Each patient's vital status was obtained 90 days post-ICU admission.
COVID-ICU received approval from the French Intensive Care Society Ethics Committee (CE-SRLF 20-23) in accordance with local regulations. All patients, or close relatives, were informed that their data were included in the COVID-ICU cohort. This study was conducted in accordance with the amended Declaration of Helsinki.
Candidate predictors
We included candidate predictors considered in our previous multivariate Cox regression analyses, which assessed baseline risk factors of death by D90 [3]. D7 and D14 candidate predictors were defined a priori among data available in the COVID-ICU cohort [3] (i.e., before the building of the SOSIC models), based on recent publications describing risk factors and specific complications associated with COVID-19 prognosis [11,12]. VAP was diagnosed by quantitative distal bronchoalveolar lavage cultures growing ≥ 10⁴ CFU/mL, blind protected specimen-brush distal samples growing ≥ 10³ CFU/mL, or endotracheal aspirates growing ≥ 10⁶ CFU/mL. Pulmonary embolism was proven by pulmonary computed tomography angiography or echocardiography.
Model development
We implemented a systematic machine learning-based framework to construct three mortality-prediction models (SOSIC-1, SOSIC-7, and SOSIC-14) from randomly selected development datasets comprising 90% of the study sample; the remaining 10% were randomly assigned to the test datasets. Each prediction model was built using a gradient-boosting machine with decision trees, as implemented in the eXtreme Gradient-Boosting (XGBoost) classification algorithm [13]. The XGBoost algorithm contains several tuning parameters (e.g., the number of decision trees, the maximal depth of the component decision trees). The best set of parameters was chosen from a large grid of tuning parameters using tenfold cross-validation to maximize the prediction model's discrimination ability, as assessed by the area under the receiver operating characteristic curve (AUC). We aimed to build models that could accurately estimate D90 survival for patients alive on D1, D7, or D14 following ICU admission. The SOSIC-1 model included only baseline candidate predictors, while the SOSIC-7 and SOSIC-14 models combined baseline and D7 or D14 patient characteristics. The variable importance, which quantifies how much each variable contributed to the classification, was extracted from the models. SHAP (SHapley Additive exPlanations) values were also computed to visualize the influence of each input variable on the final score [14].
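A minimal sketch of this development step, assuming the data sit in a flat table with a 90-day-mortality label, might look as follows; the file name, column names, and grid values are hypothetical, not the SOSIC settings.

```python
# Minimal sketch of the SOSIC development step: an XGBoost classifier tuned
# on a parameter grid with 10-fold CV maximizing AUC. The file name, label
# column, and grid values are hypothetical, not the SOSIC settings.
import pandas as pd
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

df = pd.read_csv("icu_day1.csv")                 # hypothetical flat table
X, y = df.drop(columns=["dead_d90"]), df["dead_d90"]

# 90% development / 10% test split, as in the paper:
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.10, random_state=0)

grid = {"n_estimators": [100, 300],
        "max_depth": [3, 5],
        "learning_rate": [0.05, 0.1]}

# XGBoost handles NaNs natively, so no imputation is applied here either.
search = GridSearchCV(XGBClassifier(), grid, scoring="roc_auc", cv=10)
search.fit(X_dev, y_dev)
print(search.best_params_, round(search.best_score_, 3))
```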
Model validation
The performances of the three SOSIC models predicting 90-day mortality were evaluated using AUC-assessed discrimination (i.e., the probability that patients who experience the outcome will be ranked above those who do not), and calibration (i.e., the agreement between predicted and observed risks) assessed by the calibration curve (i.e., the ideal calibration intercept is 0 and ideal calibration slope is 1). The Brier score was also computed; it combines calibration and discrimination by quantifying how close predictions are to the observed outcomes (i.e., better performance is observed with a lower Brier score) [15].
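Continuing the previous sketch, the held-out test split can be scored with the same three ingredients (AUC, calibration curve, Brier score) using standard scikit-learn calls:

```python
# Scoring the held-out test split from the previous sketch: discrimination
# (AUC), calibration curve, and Brier score.
from sklearn.calibration import calibration_curve
from sklearn.metrics import brier_score_loss, roc_auc_score

p_test = search.predict_proba(X_test)[:, 1]   # predicted 90-day mortality risk

auc = roc_auc_score(y_test, p_test)
brier = brier_score_loss(y_test, p_test)
frac_pos, mean_pred = calibration_curve(y_test, p_test, n_bins=10)

print(f"AUC = {auc:.2f}, Brier = {brier:.3f}")
# Good calibration means frac_pos tracks mean_pred (intercept 0, slope 1).
```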
A double internal validation was applied for the three SOSIC prediction models. First, internal validity was assessed by estimating the model performance corrected for optimism using bootstrap resampling with 100 repetitions. All the steps leading to the final prediction model (including the selection of the set of XGBoost tuning parameters) were applied to every bootstrap sample [16]. Second, model performance was assessed on the independent test datasets, distinct from the development datasets used for model construction. One of the advantages of the XGBoost algorithm is its sparsity awareness, which can handle missing values [17]. Therefore, no missing value was imputed before model development or validation. Because the COVID-19 pandemic did not hit all regions similarly, we tested the performances of the three SOSIC models in two distinct populations, namely centers from the greater Paris and Grand Est regions versus centers from other regions. Lastly, the performances of SOSIC-1 were also compared to the SOFA and SAPS II scores in the development and test datasets.
Descriptive analysis
Characteristics of the data included in the SOSIC scores are expressed as number (percentage) for categorical variables and means ± standard deviations or medians (interquartile ranges) for continuous variables. In univariate analyses, categorical variables were compared with the χ² or Fisher's exact test and continuous variables were compared with Student's t-test or Wilcoxon's rank-sum test. A P value < 0.05 was considered statistically significant. Statistical analyses and predictive model construction were computed with R v4.0.3, caret package v6.0-86, and XGBoost package v1.3.2.1.
Study population
Among 4643 patients enrolled by May 4, 2020, 399 were lost to follow-up by D90. Thus, the predictive survival models were built based on the remaining 4244 patients with available D90 vital status. Then, 4244, 2877, and 1349 patients, respectively, were included to construct the SOSIC-1, SOSIC-7, and SOSIC-14 scores, with 424, 292, and 185 from each group, respectively, randomly assigned to the corresponding test datasets (Fig. 1). The three models selected 15 baseline ICU variables: age; sex; BMI; treated hypertension; known diabetes; immunocompromised status; clinical frailty-scale category; bacterial coinfection; ventilation profile; SOFA-score respiratory, cardiovascular, and renal components; lactate concentration; and lymphocyte count. d-Dimers were also selected a priori but were not retained for model development because of their inconsistent collection at ICU admission (Additional file 1). Selected in-ICU parameters obtained on D7 (SOSIC-7) or D14 (SOSIC-14) were: SOFA-score respiratory, cardiovascular, and renal components; lactate level; and ventilation profile. In addition, on D7 or D14, the duration of invasive MV, extubation procedure, prone-positioning, continuous neuromuscular blockade, VAP, cardiac arrest, and/or proven pulmonary embolism since ICU admission were integrated into the SOSIC-7 and SOSIC-14 scores. Table 1 reports the distributions of these variables according to D90 vital status in the D1, D7, or D14 development and test datasets.
Univariate analyses of patient characteristics in the development datasets showed that those who died were significantly older and had a higher clinical frailty-scale category, lower BMI, and shorter intervals between first symptom(s) and ICU admission (except for the SOSIC-14 dataset) compared to D90 survivors (P < 0.01). Similarly, patients who had died by D90 were more likely to be on invasive MV on ICU D1 and had significantly higher SOFA-score respiratory, cardiovascular, and renal components. Their lactate levels during the first 24 h in-ICU were significantly higher and their lymphocyte counts were lower.
Among patients still in-ICU on D7 or D14, the same differences were observed regarding their SOFA-score components, lactate levels, and ventilation profiles on those days (Table 2). As expected, patients who died were more likely to have undergone prone-positioning or received neuromuscular blockade and experienced significantly more complications in-ICU (i.e., VAP, cardiac arrest, pulmonary embolism) within the first 7 or 14 days.

Importance of the 90-day mortality predictors

Figure 2 and Additional file 2 illustrate the variable weighting in the machine-learning models used to build the D1, D7, and D14 SOSIC scores. Briefly, age, clinical frailty-scale category, D1 lymphocyte count, and the interval between first symptom(s) and ICU admission were given significant weight to predict D90 mortality. However, the weights of these baseline characteristics tended to decrease when the prediction was estimated after 7 or 14 days in-ICU. Conversely, other baseline comorbidities, such as known diabetes, immunocompromised status, or treated hypertension, were accorded similar weights in all three scores.
Interestingly, when the prediction was estimated on D7 (SOSIC-7) or D14 (SOSIC-14), the SOFA-score respiratory and cardiovascular components and the respiratory support at ICU admission were accorded greater importance than in the D1 prediction. Moreover, cardiovascular, renal, and pulmonary functions on prediction D7 or D14 were among the most heavily weighted inputs in both models, while in-ICU complications since admission had only modest weight.
Discussion
We developed and validated three prognostic models (SOSIC-1, SOSIC-7, and SOSIC-14) to predict the 90-day mortality of 4244 critically ill patients with COVID-19 treated in France, Belgium, and Switzerland, evaluated during the 2 weeks following ICU admission. The SOSIC scores showed that entering 15 to 27 baseline and dynamic clinical parameters (depending on the score day) into an automatable XGBoost algorithm had the potential to accurately predict likely mortality 90 days post-ICU admission. Although external validations of the SOSIC scores in other critically ill populations with COVID-19 are still needed, these dynamic tools could enable clinicians to objectively assess the in-ICU mortality risk of patients with COVID-19 for up to 14 days. They offer an additional tool to strengthen decisions about life-sustaining treatments, hospital and ICU resources, and the information given to family members about the likely prognosis. Predicting outcomes of critically ill COVID-19 patients is challenging. Patients hospitalized with COVID-19 can be classified into three phenotypes that have prognostic implications [18]. Indeed, patients with more chronic heart, lung, or renal disease(s), obesity, diabetes, an intense inflammatory syndrome, a higher creatinine level, and poorer oxygenation parameters were classified as having the highest risk of deterioration, which was associated with poorer outcomes [18]. Age is frequently associated with higher rates of hospitalization, ICU admission, and mortality of patients with COVID-19 [18][19][20]. Frailty is a useful tool to stratify the risk of death 90 days post-ICU admission and offers important additional prognostic information to combine with age over 70 years for patients with COVID-19 [21]. Interestingly, the weights accorded to age and frailty in our predictive models declined over the ICU stay. In other words, those two variables more weakly affected the mortality prediction after 7 or 14 days in-ICU, compared to the prediction at ICU admission (Fig. 2). Similarly, a shorter interval from the onset of COVID-19 symptom(s) to ICU admission, which was associated with a higher risk of death [22], weighed less in SOSIC-7 and SOSIC-14.
Despite being collected on D1, the SOFA-score cardiovascular, respiratory, and renal components strongly impacted the later predictions but only modestly affected the D1 prediction. Similarly, in an observational multicenter cohort of patients with moderate to severe COVID-19 ARDS, the decrease in the static compliance of the respiratory system observed between ICU day-1 and day-14 was not associated with the day-28 outcome [23]. Besides, cardiac injuries appear frequent, with nearly 70% of COVID-19 patients experiencing cardiac injury within the first 14 days of the ICU stay [24].
The poor discriminant accuracy of the SOFA score to predict mortality of patients before intubation for COVID-19 pneumonia was recently highlighted [25]; indeed, these patients generally have severe single-organ dysfunction and globally less SOFA-score variation. However, the D7 and D14 respiratory, cardiovascular, and renal statuses are of the utmost importance for the mortality prediction at those times. The SOSIC scores put the spotlight on the possibility of some variables exerting variable influence on the predicted mortality of patients with COVID-19; e.g., demographic variables had less weight after 1 or 2 weeks in-ICU. However, the discrimination of the SOSIC scores did not improve over time. Indeed, the AUC was not better at day-7 or day-14 than at day-1, and the scores' advantage over the SOFA score was reduced over time. A greater number of variables inducing higher heterogeneity, associated with a reduction of the sample size of the development and test datasets at day-7 and day-14, could explain this finding.
Because no models predicting COVID-19 outcomes focused on patients already in the ICU [5,6,26], the SOSIC scores have the potential for clinical usefulness and generalizability. Internal validations of the SOSIC scores showed consistent discrimination and calibration, which obviously deserve further external validation. With an AUC around 0.80, external validation is desirable to assess the mortality prediction beyond population levels and to fully assess the mortality risk of the individual being admitted and cared for up to 2 weeks in-ICU. Although discrimination was largely consistent across the different validation methods, SOSIC-14's calibrations were lower; that finding suggests its performance on an external independent sample might be lower than those of SOSIC-1 or SOSIC-7. By construction, SOSIC-14 was developed on a smaller sample than the other two models, which might explain its lower quality in terms of predicting 90-day mortality.
Despite being developed and validated on a substantial cohort with a large number of participating ICUs, these scores were constructed during the first COVID wave in Europe, a period with high pressure on the health systems and before the publication of core randomized trials [4,27]. Moreover, ventilator strategies have also changed, as the pandemic has evolved and the medical community acquired a greater understanding of the pathophysiology of the disease and how to treat it. Caregiver reluctance to provide noninvasive oxygen strategies has been overcome [3], leading to higher percentages of patients on high-flow oxygen and noninvasive ventilation, and lower rates of intubation on ICU D1 [3,28].
Debates are still ongoing as to the best timing of intubation in that population, as recent data have suggested poorer outcomes associated with an early intubation strategy [29][30][31]. Thus, the very high percentages of our patients intubated on ICU D1 will probably differ during subsequent COVID-19 outbreaks, in countries with different public healthcare organizations or ICU admission policies.
Indeed, SOSIC predictions should be interpreted as reflecting a profile of critically ill patients with COVID-19 not routinely treated with corticosteroids and outside vaccination campaigns, which may have changed since May 2020. Moreover, this cohort was assembled at a time when the national health system was under extreme pressure, which led to a major reorganization of intensive care resources in some regions, although we did not find a regional effect on the performance of the SOSIC scores. However, we cannot rule out that, outside a surge situation, the model could slightly overestimate mortality. As commonly done for other scoring systems [32], prospective external validations of the SOSIC scores are warranted to determine the need for temporal recalibration and to evaluate model performance in diverse international settings. External validations in more recent cohorts of patients who received recent treatments and ventilation management are warranted. The publicly available calculator (sosic.shinyapps.io/shiny) should help achieve these goals. Another limitation is that we only included predictors that were routinely collected in the COVID-ICU database during the study period. Thus, we cannot rule out that some additional laboratory or ventilatory parameters reflecting respiratory mechanics (especially measured on ICU D7 or D14) would have improved SOSIC-score performances. We were also unable to integrate d-dimer concentrations as initially planned [33], because of their inconsistent collection at ICU admission. Although the XGBoost algorithm incorporates missing data in its split-finding algorithm, we cannot guarantee that this method can handle any pattern of missing data effectively [34]. As this algorithm potentially exploits data-missingness patterns for prediction, a major shift in the missingness mechanism in an external independent sample may affect SOSIC-score performance. Lastly, important detailed information on therapy withholding or withdrawal is lacking.
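The split-finding behavior mentioned above can be demonstrated in a few lines: XGBoost learns a default branch direction for missing values at each split, so NaNs need no imputation. This is a generic illustration of the library's behavior on synthetic data, not an analysis of the study dataset.

```python
# Demo: XGBoost trains and predicts directly on data containing NaNs by
# learning a per-split "default direction" for missing values (illustrative).
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # label built before masking
X[rng.random(X.shape) < 0.2] = np.nan     # then 20% of entries go missing

model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
model.fit(X, y)                           # no imputation step required
print(model.predict_proba(X[:5])[:, 1])   # predictions for NaN-containing rows
```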
Conclusion
The SOSIC-1, -7, and -14 scores were able to fairly accurately predict 90-day mortality of critically ill patients with COVID-19 admitted and managed in-ICU (sosic.shinyapps.io/shiny). These machine-learning models, built with XGBoost algorithms, showed good discrimination and excellent calibration. Patients' demographic characteristics contributed most to SOSIC-1, while ventilatory status and extrapulmonary dysfunctions were the preponderant predictors in SOSIC-7 and SOSIC-14. Further studies are now warranted to externally validate these scores in recent cohorts of critically ill COVID-19 patients and to assess their performance at the individual level as the pandemic evolves.
Structural mechanism underlying primary and secondary coupling between GPCRs and the Gi/o family
Heterotrimeric G proteins are categorized into four main families based on their function and sequence, Gs, Gi/o, Gq/11, and G12/13. One receptor can couple to more than one G protein subtype, and the coupling efficiency varies depending on the GPCR-G protein pair. However, the precise mechanism underlying different coupling efficiencies is unknown. Here, we study the structural mechanism underlying primary and secondary Gi/o coupling, using the muscarinic acetylcholine receptor type 2 (M2R) as the primary Gi/o-coupling receptor and the β2-adrenergic receptor (β2AR, which primarily couples to Gs) as the secondary Gi/o-coupling receptor. Hydrogen/deuterium exchange mass spectrometry and mutagenesis studies reveal that the engagement of the distal C-terminus of Gαi/o with the receptor differentiates primary and secondary Gi/o couplings. This study suggests that the conserved hydrophobic residue within the intracellular loop 2 of the receptor (residue 34.51) is not critical for primary Gi/o-coupling; however, it might be important for secondary Gi/o-coupling.
G protein-coupled receptors (GPCRs) are the largest receptor superfamily that perceives extracellular signals, including light, smell, taste, hormones, and neurotransmitters 1 . Due to their critical functions in physiology and pathology, GPCRs are important therapeutic targets, and one third of approved medicines act on GPCRs. Thus, it is important to understand the precise signaling mechanism of GPCRs, both for our fundamental knowledge of cellular signaling and for the development of better GPCR-targeting therapeutics.
Approximately 800 GPCRs have been identified in humans. Previous studies have reported that many GPCRs exhibit promiscuous GPCR-G protein coupling; i.e., a single receptor can interact with more than one G protein subtype [4][5][6] . One receptor often couples to more than one Gα isoform within the same family due to high sequence similarity 5 . For example, the serotonin 1A receptor interacts with three Gi/o family proteins, Gi2, Gi3, and Go 5 . Moreover, several receptors can couple to different G protein families; the α 2 - and β 2 -adrenergic receptors (α 2 AR and β 2 AR) interact with Gs and Gi/o 7,8 , the PAR1 receptor with Gi/o and G12/13 9 , and the melanin-concentrating hormone receptor 1 with Gi/o and Gq/11 10 .
The interaction efficiency and/or binding kinetics of one receptor to different G proteins often differ. The most prominent coupling, which shows the highest coupling efficiency with fast kinetics, is referred to as 'primary coupling'. The minor coupling, which shows lower coupling efficiency and/or slower kinetics, is referred to as 'secondary coupling' 11,12 . The known primary and secondary GPCR-G protein pairs have been summarized in the IUPHAR/BPS Guide to Pharmacology 11 and GPCRdb (gpcrdb.org) (Supplementary Fig. 1a).
Several biochemical/biophysical studies have revealed the conformational dynamics and high-resolution structures of G proteins in various states 1,3 . These structures of GPCR-G protein complexes reveal the interactions between GPCRs and nucleotide-free G proteins [13][14][15] . The X-ray crystal structure of the β 2 AR-Gs complex 15 and the cryo-electron microscopy (cryo-EM) structure of the muscarinic acetylcholine receptor type 2-GoA (M2R-GoA) complex 16 are shown in Supplementary Fig. 2. In both structures, the major binding interface between the receptor and the G protein is between the C-terminal part of α5 of the Gα subunit and the cytosolic core of the receptor formed by transmembrane domains (Supplementary Fig. 2). The distal C-terminus of Gα (the so-called 'wavy hook') (Supplementary Fig. 2c, green dotted line) 17,18 forms an additional α-helical turn as an extension of α5 when it interacts with a receptor (Supplementary Fig. 2c). Another interface is the intracellular loop 2 (ICL2) of the receptor interacting with the hydrophobic pocket of Gα formed by hydrophobic residues at the αN/β1 hinge, β2/β3 loop, and α5 (Supplementary Fig. 2a, blue dotted box). This interaction appears to be weaker in the M2R-GoA structure than in β 2 AR-Gs, which is discussed below.
Recently, using a combination of pulsed hydrogen/deuterium exchange mass spectrometry (HDX-MS), pulsed hydroxyl radical footprinting mass spectrometry (HRF-MS), and mutational studies, we proposed a model that delineates the sequential events during β 2 AR-Gs coupling, the primary GPCR-Gs pair 19 . In brief, the C-terminal region of Gαs initially associates with the β 2 AR followed by interaction of ICL2 with the hydrophobic pocket within Gαs, which is the key step for GDP release (see below). Stable helix formation of the Gαs wavy hook occurs slowly after GDP release.
Although there has been great progress in understanding the structural mechanism of GPCR-G protein coupling as described above, the structural mechanism underlying the different coupling efficiencies observed for primary and secondary coupling has not been clearly elucidated. In the current study, we took advantage of HDX-MS to investigate the structural mechanism that differentiates primary and secondary GPCR-Gi/o coupling, using the M2R and the β 2 AR as model GPCRs, and Gi3 and GoA as model Gi/o proteins. The data suggest that the C-terminus of Gαi/o differentiates primary and secondary Gi/o-coupling and that residue 34.51 of the receptor is not important for primary Gi/o coupling.
Results
M2R- and β 2 AR-induced GDP release from Gi/o proteins. The IUPHAR/BPS Guide to Pharmacology 11 specifies that the M2R transduces signals primarily through the Gi/o family and secondarily through the Gs and Gq/11 families, and that the β 2 AR transduces signals primarily through the Gs family and secondarily through the Gi/o family (Supplementary Fig. 1b). In particular, the coupling efficiencies of the β 2 AR with the Gs or Gi/o families have been studied extensively both in vitro and in vivo [20][21][22] , and the coupling efficiencies of the M2R with the Gs, Gi/o, and Gq/11 families have been studied in vivo 23,24 .
To confirm the different coupling efficiencies of the two receptors (i.e., primary vs. secondary) for Gi/o proteins, we analyzed receptor-mediated GDP release using purified proteins in vitro. Herein, we used Gi3 and GoA as model Gi/o proteins (Fig. 1a), as the β 2 AR-Gi3 interaction has been well characterized in a previous study 21 , and the high-resolution structure of M2R-GoA has been resolved by cryo-EM 16 . To achieve optimal β 2 AR-Gi/o coupling, we used micelles composed of negatively charged lipids (DDM with POPE at a ratio of 5:1), as a previous study had shown that the charge state of the lipid surrounding the β 2 AR affects the efficiency of Gi/o coupling 21 . As expected, the M2R coupled more efficiently to both Gi3 and GoA than the β 2 AR, as demonstrated by faster GDP release with higher efficacy (Fig. 1b), confirming that the M2R is a primary Gi/o-coupled receptor and the β 2 AR a secondary Gi/o-coupled receptor. Notably, our previous study showed that the β 2 AR induced almost complete GDP release from Gs within 10 s 19 , confirming that the β 2 AR primarily couples to Gs.
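The faster and more complete GDP release that distinguishes primary from secondary coupling can be summarized by fitting the release time courses. The sketch below fits a single-exponential model to synthetic [3H]GDP-retention data; the model, rate constants, and plateaus are illustrative assumptions, not the values behind Fig. 1b.

```python
# Sketch: summarize GDP-release time courses by fitting a single-exponential
# decay to bound-[3H]GDP data. All numbers are synthetic; the model choice is
# an illustrative simplification, not the authors' analysis.
import numpy as np
from scipy.optimize import curve_fit

def gdp_retained(t, k, plateau):
    """Fraction of GDP still bound: decays from 1 toward `plateau` at rate k."""
    return plateau + (1.0 - plateau) * np.exp(-k * t)

t = np.array([0, 10, 60, 300, 1200, 3600], float)  # sampling times in seconds
rng = np.random.default_rng(3)
primary = gdp_retained(t, 0.05, 0.05) + rng.normal(0, 0.02, t.size)
secondary = gdp_retained(t, 0.003, 0.35) + rng.normal(0, 0.02, t.size)

for name, y in [("primary (M2R-like)", primary), ("secondary (beta2AR-like)", secondary)]:
    (k, plateau), _ = curve_fit(gdp_retained, t, y, p0=(0.01, 0.2))
    print(f"{name}: k = {k:.4f} s^-1, residual bound GDP = {plateau:.2f}")
```

A larger fitted k and a smaller residual plateau would correspond to the faster, more complete release seen for the primary pair.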
M2R- or β 2 AR-induced HDX profile changes in GαoA. Previously, we analyzed the conformational differences between GDP-bound Gαs and β 2 AR-bound nucleotide-free Gαs using HDX-MS 25 ; HDX levels near the nucleotide-binding pocket were higher in β 2 AR-bound nucleotide-free Gαs than in GDP-bound Gαs, which reflects GDP release and increased structural dynamics in this region (Supplementary Fig. 3). The C-terminal part of α5 of Gαs displayed a lower HDX level in β 2 AR-bound nucleotide-free Gαs than in GDP-bound Gαs (Supplementary Fig. 3), reflecting stable helix formation and insertion into the receptor cytosolic core (described in Supplementary Fig. 2c). To understand the structural mechanism of the different GDP release efficiencies between the M2R-Gi/o and β 2 AR-Gi/o pairs, here we analyzed HDX levels of GαoA in the GDP-bound GoA heterotrimer alone and in the GoA heterotrimer after M2R or β 2 AR co-incubation (Fig. 1c, d and Supplementary Fig. 4a). Co-incubation of GoA with the M2R or the β 2 AR did not affect the HDX profiles of the Gβγ subunits (Supplementary data), implying that the Gβγ subunits do not undergo significant conformational changes upon complex formation with the M2R or the β 2 AR. When GoA was incubated with its primary coupling receptor, the M2R, we detected decreased HDX at the C-terminal part of α5 (Fig. 1c and Supplementary Fig. 4a, peptides 341-348 and 349-353), which suggests decreased dynamics and/or exclusion from the buffer. We detected increased HDX near the nucleotide-binding pocket (Fig. 1c and Supplementary Fig. 4a, peptides 37-52, 268-275, and 324-333), which suggests GDP release and/or increased dynamics. We also detected increased HDX between the Ras-like and α-helical domains (Fig. 1c and Supplementary Fig. 4a, peptides 53-61, 276-285, and 289-298) and in a few peptides from the α-helical domain (AHD) (Fig. 1c and Supplementary Fig. 4b, peptides 65-77 and 161-173), which could be attributed to domain opening and dynamic movement of the AHD 26 . These results are consistent with those of the previous HDX-MS study that analyzed the β 2 AR-Gs complex (Supplementary Fig. 3a) 25 . We detected decreased HDX at αN (Fig. 1c and Supplementary Fig. 4a, peptides 14-18 and 19-29), which was not detected in the β 2 AR-Gs complex due to a lack of identified peptides in this region; the decreased HDX at αN may reflect the interaction of this region with the M2R (Supplementary Fig. 2b) and/or the blockage of this region from the buffer by micelles surrounding the receptor.
When GoA was incubated with the β 2 AR, a secondary coupling receptor, the HDX profile changes were similar in most regions except the C-terminal part of α5 (Fig. 1c, d). For instance, we detected increased HDX near the nucleotide-binding pocket and the AHD, and decreased HDX at αN, although to a lesser extent than for the M2R-GoA pair (Fig. 1d and Supplementary Fig. 4a). However, surprisingly, the C-terminal part of GαoA did not show HDX profile changes (Fig. 1d and Supplementary Fig. 4a, peptides 341-342 and 349-353), which implies that the C-terminal part of GαoA may not form the stable helix upon β 2 AR-GoA coupling and/or may not be deeply inserted into the receptor core.
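Operationally, comparisons like those above reduce to per-peptide differences in deuterium uptake between two states. The following is a minimal sketch with synthetic replicate values; the peptide names, the 0.5 Da threshold, and the t-test criterion are illustrative choices, not the exact significance pipeline used here.

```python
# Sketch: per-peptide HDX change between two states from replicate deuterium
# uptake values (synthetic data; thresholds are illustrative assumptions).
import numpy as np
from scipy.stats import ttest_ind

peptides = {  # peptide -> (uptake in Da: G protein alone, with receptor)
    "341-348 (C-terminal a5)": ([3.1, 3.0, 3.2], [1.9, 2.0, 2.1]),
    "37-52 (P-loop)":          ([2.0, 2.1, 1.9], [3.4, 3.3, 3.5]),
    "120-130 (control)":       ([2.5, 2.4, 2.6], [2.5, 2.6, 2.4]),
}
for name, (alone, complexed) in peptides.items():
    delta = np.mean(complexed) - np.mean(alone)
    p = ttest_ind(alone, complexed).pvalue
    if abs(delta) < 0.5 or p >= 0.05:
        call = "no change"
    else:
        call = "increased HDX" if delta > 0 else "decreased HDX"
    print(f"{name}: delta uptake = {delta:+.2f} Da (p = {p:.3f}) -> {call}")
```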
Time-resolved HDX during M2R- and β 2 AR-Gi/o coupling. The HDX profile changes shown in Fig. 1 are comparisons between before and after 3 h of co-incubation of GoA with the receptors. Three hours of co-incubation of the receptor and GoA is sufficient for completion of GDP release and formation of the final receptor-GoA complex 16,21 . Therefore, the data in Fig. 1 reflect the final complex state rather than the intermediate states during coupling. Previously, we performed pulsed HDX-MS analysis to follow the time course of GPCR-Gs coupling, using the β 2 AR-Gs pair as a model system 19 . In this previous study, the β 2 AR and Gs were sampled before co-incubation, and after 10 s, 5 min, 20 min, 60 min, 90 min, 110 min, 150 min, and 180 min of co-incubation. The sampled proteins were then pulsed with deuterated buffer for 10 or 100 s 19 . This approach provides the HDX profiles of the indicated time points, which reflect the conformational states of the β 2 AR and Gs at the sampled time points (Supplementary Fig. 5a). Pulsed HDX-MS analysis of β 2 AR-Gs revealed an HDX profile change at ICL2 of the β 2 AR within 10 s of co-incubation (Supplementary Fig. 5a, peptides 133-144), whereas the HDX profile change continued until 110 min at the N-terminal part of ICL3 (Supplementary Fig. 5a, peptide 223-240), which suggests that ICL2 undergoes conformational changes faster than ICL3 upon coupling to Gs. Analysis of Gαs detected rapid HDX profile changes at the nucleotide-binding pocket (Supplementary Fig. 5a, peptides 49-59 and 367-371) and slow, prolonged HDX profile changes at the C-terminus of α5 (Supplementary Fig. 5a, peptide 382-390) 19 . We confirmed that GDP release occurs within 10 s of co-incubation with the β 2 AR, indicating that the rapid HDX profile changes at the nucleotide-binding pocket were mainly due to GDP release 19 . Interestingly, the prolonged HDX profile changes at the C-terminus of α5 suggested that this region undergoes prolonged conformational changes even after GDP release 19 .
In the current study, we adopted the same strategy to investigate whether the C-terminal part of Gαi/o ever forms a stable helix and/or is inserted into the receptor cytosolic core during the time course of β 2 AR-Gi/o coupling (Fig. 2 and Supplementary Fig. 5b). We used a 10 s deuterium pulse for M2R-Gi/o coupling and 10 and 100 s deuterium pulses for β 2 AR-Gi/o coupling, because a 10 s deuterium pulse was not sufficient to observe HDX differences between Gi/o alone and Gi/o co-incubated with the β 2 AR for a few peptides (for example, peptides 37-52, 53-61, 268-275, and 324-333 in Supplementary Fig. 4a). Again, we did not detect any HDX profile changes of the Gβγ subunits during the co-incubation time course (Supplementary data).
When GoA or Gi3 was co-incubated with the M2R, the HDX profile change at αN was initiated within 10 s and completed within 10 s to 5 min (peptides 14-18 and 19-29 for GαoA in Fig. 2 and peptides 6-18 and 19-33 for Gαi3 in Supplementary Fig. 5b). The HDX profile changes of αN showed similar results when Gi3 or GoA was incubated with the β 2 AR ( Fig. 2 and Supplementary Fig. 5b). However, the previous study with the β 2 AR and Gs reported that HDX profile at αN of Gαs increased within 10 s of incubation with the β 2 AR (Supplementary Fig. 5a) suggesting that the conformational changes at αN may differ between GPCR-Gs and GPCR-Gi/o coupling.
The HDX profile change near the nucleotide-binding pocket was initiated within 10 s and completed within 5-20 min in M2R-Gi3/GoA pairs (peptides 37-52 and 324-333 for GαoA in Fig. 2 and peptides 37-52 and 324-334 for Gαi3 in Supplementary Fig. 5b). Interestingly, GDP release was completed within 10 s to 5 min for M2R-Gi/o coupling (Fig. 1b), which was faster than the changes in HDX-MS profile near the nucleotide-binding pocket. This may reflect further conformational changes after GDP release; the initial HDX profile changes within 10 s to 5 min were due to GDP release, while the later HDX profile changes observed at 20 min were due to additional conformational changes after GDP release. However, we cannot exclude the possibility that these discrepancies between GDP release and HDX-MS profile change kinetics could also be due to different experimental conditions between the two assay systems such as protein concentration (see the Methods for details).
When Gi3 or GoA was incubated with the β 2 AR, the HDX profile changes near the nucleotide-binding pocket were initiated within 10 s and completed within 10 s to 5 min, which was faster than M2R-Gi3/GoA pairs ( Fig. 2 and Supplementary Fig. 5b). The GDP release was initiated within 5 min and completed within 20 min of co-incubation of the β 2 AR and Gi3/GoA (Fig. 1b), and the discrepancies between GDP release and HDX-MS profile change kinetics could be due to different experimental conditions between the two assay systems; the coupling may be slower in the GDP release assay probably due to lower protein concentrations. Overall, it is tempting to propose that the nucleotide-binding pocket undergoes further conformational changes after GDP release in M2R-Gi3/GoA coupling whereas the conformational changes near the nucleotide-binding pocket in β 2 AR-Gi3/GoA pairs are smaller and quicker probably due to the lack of further conformational changes after GDP release.
The HDX profile of the C-terminal part of GαoA continued to decrease until 150-180 min during M2R-GoA coupling (Fig. 2, peptides 342-348 and 349-353), which is similar to what we observed with β 2 AR-Gs coupling 19 (Supplementary Fig. 5a, peptides 382-390). On the other hand, during M2R-Gi3 coupling, the HDX profile change at the C-terminal part of Gαi3 was initiated within 10 s and continued to decrease until 20 min (Supplementary Fig. 5b, peptides 342-348 and 349-353). The molecular mechanism underlying the different HDX profile change kinetics at the C-terminal parts of GαoA and Gαi3 needs further investigation. Interestingly, when incubated with the β 2 AR, the C-terminal part of GαoA or Gαi3 never showed decreased HDX at any time point during coupling (Fig. 2 and Supplementary Fig. 5b).
In summary, the M2R-induced HDX profile changes in GαoA or Gαi3 were similar to what we observed for the β 2 AR-induced HDX profile changes in Gαs, except at the αN. However, the β 2 AR-Gi/o complex showed no HDX-MS profile change at the C-terminal part of Gαi/o, which implies that this region of Gαi/o does not form a stable helix during β 2 AR-Gi/o coupling and/or may not be deeply inserted into the cytoplasmic core of the receptor.
The role of the wavy hook for Gi/o coupling. As the C-terminal part of Gαi/o displayed the greatest differences in HDX-MS profiles between M2R-Gi/o and β 2 AR-Gi/o coupling, we hypothesized that the C-terminal part of Gαi/o differentiates between primary and secondary GPCR-Gi/o coupling.
The C-terminal part of Gα that we analyzed with HDX-MS can be divided into two parts: one is the distal C-terminus (Fig. 3a, green-colored residues, 'wavy hook'), and the other is the C-terminal helical region of α5 (Fig. 3a, black residues). In our previous study with the β 2 AR and Gs, we observed that the C-terminal part of Gα is the initial site that undergoes a change in hydroxyl radical footprinting upon interaction with the β 2 AR 19 . Moreover, the β 2 AR failed to induce GDP release from the wavy hook-truncated Gαs, which suggested that the wavy hook is the critical initial binding site during β 2 AR-Gs coupling 19 (see below).
To test the role of the wavy hook in GPCR-Gi/o coupling, we also generated a Gαi3 mutant construct in which the last five residues were truncated (hereafter denoted Gi3_Δ5). Unlike β 2 AR-Gs coupling, both the M2R and the β 2 AR could induce GDP release from Gi3_Δ5 (Fig. 3b). Interestingly, the β 2 AR-induced GDP release profile did not change upon C-terminal truncation, while M2R-induced GDP release was decreased, so that the extent of GDP release became similar between β 2 AR-Gi3, β 2 AR-Gi3_Δ5, and M2R-Gi3_Δ5 (Fig. 3b). These data suggest that the wavy hook of Gαi3 has a role in differentiating between primary and secondary coupling; it is likely that the engagement of the wavy hook of Gαi3 with a receptor is necessary for primary (i.e., strong) coupling, while failure to do so (evident from HDX-MS studies of β 2 AR-Gi/o coupling or from GDP release analysis of M2R-Gi3_Δ5 coupling) leads to secondary (i.e., weak) coupling.
The role of the wavy hook for GPCR-Gi/o coupling selectivity. As Gi3_Δ5 could still couple with receptors (Fig. 3b), we hypothesized that truncation of the wavy hook of Gi3 results in loss of coupling selectivity and leads to promiscuous coupling with non-Gi/o-coupling receptors. To test this hypothesis, we analyzed the GDP release activity of the M1R on WT Gi3 and Gi3_Δ5. According to the IUPHAR/BPS Guide to Pharmacology and a previous report, the M1R primarily couples with Gq/11 24 (Supplementary Fig. 1b), and we confirmed that the M1R does not couple with WT Gi3 (Fig. 3c). However, we clearly observed M1R-induced GDP release from Gi3_Δ5, although the coupling efficiency was low (approximately 20% GDP release) (Fig. 3c). These data support our hypothesis that truncation of the distal C-terminus of Gi3 induces promiscuous coupling with non-Gi/o-coupling receptors.
The potential mechanism of promiscuous coupling of Gi3_Δ5 with the M1R may be attributed to its higher intrinsic dynamics relative to WT Gi3. In other words, the wavy hook may assist in retaining Gi3 in the GDP-bound state. However, we did not detect any difference in basal GDP release between WT Gi3 and Gi3_Δ5 (Fig. 3c). To further investigate the basal GDP/GTP exchange tendency, we analyzed the uptake of BODIPY-conjugated GTPγS (BODIPY-GTPγS) into WT Gαi3 and Gαi3_Δ5. We used the α subunit of Gi3 (without forming a heterotrimer with Gβγ) to facilitate GDP/GTP turnover (Fig. 3d, e). The BODIPY fluorescence increases when BODIPY-GTPγS is located within the nucleotide-binding pocket compared with when it is free in the buffer, allowing the detection of GTPγS binding by measuring the BODIPY fluorescence 27,28 . BODIPY-GTPγS binding to WT Gαi3 and Gαi3_Δ5 occurred with similar kinetics (Fig. 3d), although the maximal binding was slightly higher in WT Gαi3 than that in Gαi3_Δ5 (Fig. 3e). These data suggest that the intrinsic GDP release activity of Gαi3_Δ5 is not higher than that of WT Gαi3, and therefore M1R-induced GDP release from Gi3_Δ5 is not due to increased intrinsic GDP release activity.

[Fig. 2 legend: the analyzed peptides are color-coded on the X-ray crystal structure of Gαi1 (PDB 1GP2) and on the titles of the deuterium uptake graphs; significance during coupling was assessed by rANOVA, with two-tailed paired Student's t-tests (p < 0.05) between time points; * marks the first time point differing from before co-incubation and + the first differing from the previously marked (* or +) time point; error bars, mean ± S.E.M. of three independent experiments; note the non-linear/non-logarithmic time axis.]
Gi/o-induced HDX profile changes of M2R and β 2 AR. To gain more insights into the structural changes that occur upon receptor-Gi/o coupling, we compared the HDX levels of the M2R and the β 2 AR before and after 3 h of co-incubation with GoA (Fig. 4a, b, and Supplementary Fig. 4b) and analyzed the time-resolved HDX changes in the M2R and the β 2 AR during M2R-Gi/o and β 2 AR-Gi/o coupling (Fig. 4c).
As discussed above, we have previously analyzed the HDX profile changes in the β 2 AR upon co-incubation with Gs, and observed decreased HDX at ICL2 and the N-terminal region of ICL3 of the β 2 AR (Supplementary Fig. 5a) 19 . The decreased HDX at ICL2 reflects helix formation and interaction of F139 with the hydrophobic pocket of Gαs (as shown in Supplementary Fig. 2a, blue dotted box), and the decreased HDX at the N-terminus of ICL3 reflects extended helix formation upon interaction with the C-terminal part of Gαs (as shown in Supplementary Fig. 2a, green square).
We could not identify peptides from ICL2 region of the M2R but could analyze the N-terminal region of ICL3. Unlike the β 2 AR-Gs complex, this region did not undergo HDX changes (Fig. 4a). This is consistent with the high-resolution structures of the M2R, where the N-terminal region of ICL3 does not form an extended helix upon interaction with GoA ( Supplementary Fig. 4c). Helix 8 (H8) of the M2R showed decreased HDX ( Fig. 4a and Supplementary Fig. 4b), which may be caused by the interaction between H8 and the C-terminal part of Gαi/o, as observed in the high-resolution structures of GPCR-Gi/o complexes 17 . Unfortunately, we could not obtain the pulsed HDX-MS data from H8, and therefore we could not correlate the time-course of conformational changes between the C-terminal part of Gαi/o and M2R H8.
For the β 2 AR-Gi/o complex, when we compared the HDX profiles before and after 3 h of co-incubation of the β 2 AR with GoA, HDX levels on the β 2 AR were not altered at any of the analyzed regions (Fig. 4b). A time-resolved HDX-MS study also showed that the N-terminal region of ICL3 of the β 2 AR does not undergo decreased HDX at any of the tested timepoints during coupling with Gi3 (Fig. 4c, peptides 223-240). A lack of HDX change at the N-terminus of ICL3 suggests that the β 2 AR fails to form extended helices at the N-terminal region of ICL3 when coupled with Gi/o. This observation is consistent with the lack of HDX change at the C-terminal part of GαoA or Gαi3 (Fig. 1d, Fig. 2, and Supplementary Fig. 5b), which strengthens the hypothesis that the C-terminus of GαoA or Gαi3 is not deeply inserted into the β 2 AR core.
However, time-resolved analysis of the HDX profile change showed that the HDX level at ICL2 underwent a transient decrease (within 10 s of co-incubation) upon co-incubation with Gi3 (Fig. 4c, peptides 133-144). After 10 s, the HDX level was not statistically different either from the 10 s time point or from that of the β 2 AR alone. These results suggest that ICL2 engages with Gi3 during an early event, but that the interaction may not be stable.
Importance of ICL2 in primary and secondary Gi/o coupling.
Our previous studies suggested that for β 2 AR-Gs coupling, the interaction of a bulky hydrophobic residue at ICL2, F139 (residue 34.51, based on the GPCRdb numbering scheme), with the hydrophobic pocket formed by H41, V217, F219, and F376 (Supplementary Fig. 2a) is the critical step to induce GDP release 19,29 . For example, mutation of Phe139 to Ala in the β 2 AR (hereafter denoted β 2 AR_F139A) abolished β 2 AR-induced GDP release from Gs, although this construct could still interact with Gs 19 . On the other hand, the currently available high-resolution structures of GPCR-Gi/o complexes show that hydrophobic residues at 34.51 form weak hydrophobic interactions with the hydrophobic pocket formed by V34, L194/L195, F196/F197, and F336 of the Gαi/o families (Fig. 5a).
To test the role of the interaction of residue 34.51 in primary and secondary GPCR-Gi coupling, we generated mutant constructs in which the bulky hydrophobic residue at 34.51 was replaced with Ala (M2R_L129A and β 2 AR_F139A). M2R_L129A could still induce GDP release from Gi3, albeit to a lesser degree than WT M2R (Fig. 5b). This result is consistent with the previous report in which M2R_L129A induced GDP/GTP turnover, although the degree was reduced to 50% relative to that in WT M2R 16 . Unlike M2R_L129A-Gi3 coupling, β 2 AR_F139A failed to release GDP from Gi3 (Fig. 5c), which suggests that the bulky hydrophobic residue 34.51 at ICL2 is critical for β 2 AR-Gi3 coupling.
To gain more insights into the role of residue 34.51 in GPCR-Gi/o coupling, we analyzed the amino acid residue at position 34.51 in class A GPCRs with known coupling G proteins (Supplementary Fig. 1 and Fig. 5d-f). Among the Gs-coupled receptors, 26% contain Phe or Tyr, and 68% contain large hydrophobic residues such as Ile, Leu, or Met (Fig. 5d). Thus, a majority (94%) of Gs-coupled receptors contain very large/large hydrophobic or aromatic ring-containing amino acid residues at position 34.51. Similarly, a majority (82%) of Gq/11-coupled receptors contain very large/large hydrophobic or aromatic ring-containing amino acid residues at position 34.51 (Fig. 5d). In contrast, the proportion of very large/large hydrophobic or aromatic ring-containing amino acid residues at position 34.51 decreases in Gi/o-coupled receptors (56%), while the proportion of medium or small hydrophobic amino acids at position 34.51 increases in Gi/o-coupled receptors (25.3%) (Fig. 5d). Moreover, 17.7% of Gi/o-coupled receptors contain non-hydrophobic residues (His, Pro, Ser, Thr, Arg, Lys, Glu, and Gln), whereas only 4.5% of Gs-coupled receptors and 13.1% of Gq/11-coupled receptors contain these residues (Fig. 5d). These sequence analyses imply that the bulky hydrophobic residue at 34.51 is important for Gs or Gq/11 coupling, but this may not necessarily be true for Gi/o coupling.

[Fig. 4 legend: the changes in HDX are color-coded on snake maps from GPCRdb (gpcrdb.org); gray residues in the M2R indicate the ICL3 truncation made for better expression and purification; white, regions without identified peptides; yellow, regions without HDX changes; blue, regions with decreased HDX upon co-incubation with GoA. Panel c: pulsed HDX-MS analysis of selected β 2 AR peptides during β 2 AR-Gi/o coupling, color-coded on the β 2 AR crystal structure (PDB 3SN6; light pink, Gαs); significance by rANOVA with two-tailed paired Student's t-tests (p < 0.05); * marks the first time point differing from pre-incubation; error bars, mean ± S.E.M. of three independent experiments; non-linear/non-logarithmic time axis.]

We further analyzed the amino acids at position 34.51 of all class A Gi/o-coupled receptors (Fig. 5e). Comparison of the exclusively-Gi/o-coupled and promiscuously-Gi/o-coupled receptors revealed that the exclusively-Gi/o-coupled receptors contain a broad range of amino acid residues, whereas the promiscuously-Gi/o-coupled receptors contain mostly very large/large hydrophobic or aromatic ring-containing amino acid residues (56 of 78 promiscuously-Gi/o-coupled receptors) (Fig. 5e). When we further subcategorized the promiscuously-Gi/o-coupled receptors into primarily-Gi/o-coupled and secondarily-Gi/o-coupled receptors, we found that all the receptors that secondarily couple to Gi/o contain Phe/Tyr, Ile/Leu/Met, or Val and do not contain other amino acids, while the receptors that primarily couple to Gi/o contain these amino acids as well as other residues such as Ala, His, Pro, Ser/Thr, Arg/Lys, and Gln (Fig. 5f). These findings imply that for the secondary GPCR-Gi/o interaction, the bulky hydrophobic residue at ICL2 may be important, which is consistent with the observation that β 2 AR_F139A failed to release GDP from Gi3 (Fig. 5c). Taken together, we suggest that the interaction of residue 34.51 with the hydrophobic pocket within the Gα subunit is not critical for primary GPCR-Gi/o coupling, but it is important for secondary GPCR-Gi/o coupling.
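The composition analysis described above can be reproduced in a few lines once the residues at position 34.51 are tabulated per coupling group. In the sketch below, the residue lists are placeholders rather than the GPCRdb dataset, and the size bins mirror the coarse grouping used in the text.

```python
# Sketch: tally amino-acid classes at position 34.51 per coupling group.
# The residue lists are hypothetical samples, not the GPCRdb data.
from collections import Counter

BULKY = set("FYWILM")      # very large/large hydrophobic or aromatic
MEDIUM_SMALL = set("VAC")  # medium/small hydrophobic
# everything else (H, P, S, T, R, K, E, Q, ...) counts as non-hydrophobic

def classify(aa):
    if aa in BULKY:
        return "bulky hydrophobic/aromatic"
    if aa in MEDIUM_SMALL:
        return "medium/small hydrophobic"
    return "non-hydrophobic"

groups = {  # coupling group -> residues observed at 34.51 (placeholder lists)
    "Gs-coupled":   list("FFYLIMLF"),
    "Gi/o-coupled": list("LVFAPSTHR"),
}
for group, residues in groups.items():
    counts = Counter(classify(aa) for aa in residues)
    total = sum(counts.values())
    summary = ", ".join(f"{k}: {100 * v / total:.0f}%" for k, v in counts.items())
    print(f"{group}: {summary}")
```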
Discussion
Previous studies have suggested that a single receptor differentially activates different G proteins to varying degrees and/or with different kinetics, which results in complex signal transduction 6,9,30 . This fine-tuning of GPCR-G protein coupling is important for the precise regulation of cellular functions. However, only a few studies have suggested mechanisms underlying the differential coupling of promiscuous receptors; for example, the availability of G proteins limits GPCR-G protein coupling selectivity 4,7 , and different ligand types or ligand concentrations differentially regulate the promiscuity of GPCR-G protein coupling 31,32 .
The current study presents the conformational factors that differentiate between primary and secondary Gi/o-coupling. We found that one of the key structural factors is the engagement of the wavy hook of Gαi/o with the receptor (Fig. 6b vs. Fig. 6c). Failure of strong engagement of the wavy hook with the receptor leads to secondary Gi/o coupling (i.e., β 2 AR-Gi/o, β 2 AR-Gi3_Δ5, or M2R-Gi3_Δ5).
These results are surprising because the engagement of the distal C-terminus of Gα with receptors has been considered to be critical for GPCR-G protein interaction and coupling selectivity 19,20,[33][34][35][36][37][38][39] . In our previous study using the β 2 AR and Gs, truncation of the wavy hook resulted in the complete loss of β 2 AR-induced GDP release, and we suggested that the wavy hook is the initial binding site 19 (Fig. 6a). However, the present data indicate that the M2R and the β 2 AR could still couple to the 'wavy hook-truncated' Gαi3 (Fig. 3b), and moreover, the M1R, a non-Gi/o-coupled receptor, can induce GDP release from Gi3_Δ5 but not from WT Gi3 (Fig. 3c). It is tempting to suggest that the wavy hook of Gi3 facilitates interaction with primary Gi/o-coupled receptors but inhibits interaction with non-Gi/o-coupled receptors (Fig. 6d). The detailed molecular mechanism of the role of the wavy hook in differentiating primary vs. secondary Gi/o-coupling and preventing interaction with the non-Gi/o-coupled receptors needs further investigation using systematic mutagenesis or swap mutation of the wavy hook.
Different coupling modes between primary and secondary Gi/o coupling were also found at receptor residue 34.51 (Fig. 6b vs. Fig. 6c), as the bulky hydrophobic residue at 34.51 is critical for β 2 AR-Gi3 coupling but not for M2R-Gi3 coupling (Fig. 5b, c). Previous studies reported that residue 34.51 within ICL2 of the receptor is critical for receptor-G protein coupling 19,40 . Interestingly, the importance of residue 34.51 has been mostly observed for GPCR-Gs interaction 19,40 but not for GPCR-Gi/o interaction (Supplementary Fig. 2a vs. Fig. 5a). The analysis of the amino acid type at residue 34.51 supports the hypothesis that the bulky hydrophobic residue at 34.51 is important for primary GPCR-Gs or GPCR-Gq/11 coupling, but may not be critical for primary GPCR-Gi/o coupling (Fig. 5). Interestingly, all the receptors that secondarily couple to Gi/o contain large hydrophobic amino acids at residue 34.51 (Fig. 5f), suggesting that such amino acids at position 34.51 may be necessary for secondary GPCR-Gi/o coupling.
The current study also suggests that primary Gi/o coupling might follow somewhat different structural mechanisms compared with primary Gs coupling (Fig. 6a vs. Fig. 6b). First, αN showed increased HDX in β 2 AR-Gs coupling and decreased HDX in M2R-Gi3/GoA coupling (Fig. 2 and Supplementary Fig. 5). Second, the β 2 AR failed to induce GDP release from Gs_Δ5 19 , but the M2R still induced GDP release from Gi3_Δ5 (Fig. 3b). Third, β 2 AR_F139A failed to induce GDP release from Gs 19 , but M2R_L129A still induced GDP release from Gi3 (Fig. 5b). These discrepancies may provide clues to understanding GPCR-G protein selectivity, which needs further investigation.
In conclusion, we propose a potential conformational mechanism differentiating primary and secondary Gi/o-coupling and compare the conformational mechanism of primary Gi/o-coupling with that of primary Gs-coupling. The findings raise questions about the detailed functional mechanism of the wavy hook in facilitating primary Gi/o coupling and preventing non-Gi/o coupling. Moreover, the critical step for GDP release during primary GPCR-Gi/o coupling remains to be elucidated, because the interaction of residue 34.51, which is critical in GPCR-Gs coupling, was not critical for primary GPCR-Gi/o coupling.
Methods
Expression and purification of Gi/o. The following protocol describes the expression and purification of the samples used for Figs. 1, 2, 4, and 5. Human GαoA or Gαi3 was cloned into the pFastBac1 vector; Gβ1 with a 3C protease-cleavable 6xHis-tag and Gγ2 were cloned into the pFastBac_Dual vector. The G proteins were expressed in High Five insect cells (Expression Systems, 94-001F) using the Bac-to-Bac system. Cell cultures were grown at 27°C to a density of 3 × 10 6 cells mL −1 and then infected with GαoA or Gαi3 and Gβ1γ2 baculovirus (10-20 mL L −1 and 1-2 mL L −1 , respectively). After 48 h of incubation, the infected cells were harvested by centrifugation and stored at −80°C until use. Cell pellets were resuspended in 100 mL lysis buffer (10 mM Tris, pH 7.5, 0.1 mM MgCl 2 , 5 mM β-mercaptoethanol (β-ME), 10 μM GDP, 2.5 μg mL −1 leupeptin, and 160 μg mL −1 benzamidine) per liter of culture volume and stirred at room temperature for 15 min. Cell membranes were then spun down and resuspended in 100 mL solubilization buffer (20 mM HEPES, pH 7.5, 100 mM NaCl, 1% sodium cholate, 0.05% DDM, 5 mM MgCl 2 , 2 μL CIP, 5 mM β-ME, 15 mM imidazole, 10 μM GDP, 2.5 μg mL −1 leupeptin, and 160 μg mL −1 benzamidine) per liter of culture volume using a Dounce homogenizer. The sample was stirred at 4°C for 1 h and then centrifuged for 20 min to remove insoluble debris. Nickel-NTA resin (2 mL L −1 cell culture) pre-equilibrated in solubilization buffer was added to the supernatant and shaken for 2 h at 4°C. After incubation, the Ni-NTA resin was spun down, poured into a glass column, and washed with 50 mL solubilization buffer. The heterotrimeric GoA or Gi3 was then gradually exchanged into E2 buffer (20 mM HEPES pH 7.5, 50 mM NaCl, 0.1% DDM, 1 mM MgCl 2 , 5 mM β-ME, 10 μM GDP, 2.5 μg mL −1 leupeptin, and 160 μg mL −1 benzamidine). The protein was then eluted with E2 buffer supplemented with 250 mM imidazole. The protein was then dephosphorylated by treatment with 5 μL lambda phosphatase (supplemented with 1 mM MnCl 2 for activity), 1 μL CIP, and 1 μL Antarctic phosphatase, and incubated at 4°C overnight. The 6xHis-tag was removed using 3C protease. Cleaved GoA or Gi3 was purified by an additional negative Ni-NTA purification step. The Ni-NTA chromatography-purified GoA or Gi3 was further purified with a MonoQ column (GE Healthcare). The peak fractions of the MonoQ column were collected and concentrated using a 50 kDa molecular weight cutoff Millipore concentrator. The concentrated heterotrimeric GoA or Gi3 was aliquoted, flash-frozen in liquid nitrogen, and stored at −80°C until use.
Expression and purification of the M2R and the M1R. Human M2R and M1R genes, containing N-terminal FLAG-tag and C-terminal His-tag, were subcloned into pFastBac1 vector. The L129A mutation of M2R was generated by Quick-Change mutagenesis and confirmed by DNA sequencing. The primers used for mutagenesis are listed in Supplementary Table 1. All M2R and M1R constructs used in this study were expressed in Sf9 insect cells (Expression Systems, 94-002F) using the Bac-to-Bac baculovirus system. Sf9 cells were grown in the ESF 921 medium and were infected with recombinant baculovirus at a density of 4 × 10 6 cells mL −1 , in the presence of 10 μM atropine. The cells were harvested after 48 h of infection at 27°C. Cell pellets were lysed using a lysis buffer (10 mM Tris pH 7.5, 1 mM EDTA, 10 μM atropine, 2.5 μg mL −1 leupeptin, and 160 μg mL −1 benzamidine). Cell membranes were then spun down and solubilized with a buffer containing 20 mM HEPES (pH 7.5), 750 mM NaCl, 1% DDM, 0.2% sodium cholate, 0.03% CHS, 10 μM atropine, 2.5 μg mL −1 leupeptin, 160 μg mL −1 benzamidine, and 30% glycerol. The solubilized receptor was then purified through Ni-NTA chromatography and eluted with a buffer containing 20 mM HEPES (pH 7.5), 750 mM NaCl, 0.1% DDM, 0.02% sodium cholate, 0.03% CHS, 10 μM atropine, and 30% glycerol and supplemented with 250 mM imidazole. The Ni-NTA purified receptor was then loaded onto an anti-FLAG column with M1 affinity resin and washed extensively with a buffer containing 20 mM HEPES (pH 7.5), 750 mM NaCl, 0.1% DDM, 0.02% sodium cholate, 0.003% CHS, and 10 μM iperoxo and supplemented with 2 mM CaCl 2 . Thereafter, it was eluted with the same buffer supplemented with 0.2 mg mL −1 of FLAG peptide and 5 mM of EDTA. The anti-FLAG-chromatography-purified receptor was finally purified by size exclusion chromatography against a buffer containing 20 mM HEPES (pH 7.5), 100 mM NaCl, 0.1% DDM, 0.003% CHS, and 10 μM iperoxo. The monodisperse peak fractions were concentrated, flash frozen, and stored at −80°C until further use.
Expression and purification of the β 2 AR. The β 2 AR was expressed in Sf9 insect cells (Expression Systems, 94-002F) using the BestBac expression system. The F139A mutation was generated by Quick-Change mutagenesis and confirmed by DNA sequencing. The primers used for mutagenesis are listed in Supplementary Table 1. Proteins were expressed by infecting Sf9 cells at 4 × 10 6 cells mL −1 with second-passage baculovirus at 20 mL L −1 of cell culture, supplemented with 2 µM alprenolol. The cells were harvested after 48 h of incubation at 27°C. Cell pellets were resuspended in lysis buffer (20 mM HEPES pH 7.5, 5 mM EDTA, 1 µM alprenolol, 2.5 µg mL −1 leupeptin, and 160 µg mL −1 benzamidine) at 10 mL g −1 of cell pellet and stirred for 15 min. The collected cell membrane was then homogenized with a Dounce homogenizer in solubilization buffer (20 mM HEPES, pH 7.5, 100 mM NaCl, 1% DDM, 1 µM alprenolol, 2.5 µg mL −1 leupeptin, and 160 µg mL −1 benzamidine) for 1 h at room temperature to extract the receptor. After adding 2 mM CaCl 2 , the supernatant was collected by centrifugation at 18,000 × g for 30 min and loaded onto an anti-FLAG column with M1-antibody resin. The column was thoroughly washed with HMS-CHS buffer (20 mM HEPES pH 7.5, 350 mM NaCl, 0.1% DDM, and 0.01% cholesterol hemisuccinate) supplemented with 2 mM CaCl 2 . The receptor was then eluted with HMS-CHS buffer supplemented with 5 mM EDTA and 200 µg mL −1 FLAG peptide. The eluted protein was kept frozen and thawed immediately prior to use. The thawed receptor was further purified via alprenolol-Sepharose affinity chromatography using HMS-CHS buffer with 300 µM alprenolol as the elution buffer. The eluted receptors were once again loaded onto the anti-FLAG column and washed with HMS-CHS buffer to obtain unliganded receptors by removing alprenolol. The bound receptor was then eluted with HMS-CHS buffer with 5 mM EDTA, 200 µg mL −1 FLAG peptide, and 10 µM BI-167107. The functional receptors were further purified by size exclusion chromatography using a Superdex-200 column in HLS-CHS buffer (20 mM HEPES pH 7.5, 150 mM NaCl, 0.1% DDM, 0.01% CHS, 2 µM BI-167107). The receptors were concentrated, flash-frozen in liquid nitrogen, and stored at −80°C until use.
GPCR-Gi/o complex formation and HDX-MS.
To form the GPCR-Gi/o complex, Gi/o (65 µM) was mixed with a 1.15-fold molar excess of iperoxo-bound M2R or BI-167107-bound β 2 AR at room temperature. Apyrase (200 mU mL −1 ) was added after 90 min of incubation to hydrolyze GDP and to generate a stable complex. For continuous-labeling deuterium exchange, 5 µL of complex, agonist-bound receptor, or GDP-bound Gi/o was mixed with 25 µL of D 2 O buffer (20 mM HEPES (pD 7.4), 100 mM NaCl, 100 mM TCEP, and 0.1% DDM supplemented with 5 µM agonist, 20 µM GDP, or both for receptor alone, G protein alone, or complex, respectively) and incubated for 10, 100, 1000, and 10,000 s at room temperature. For pulse-labeling deuterium exchange, GPCR and Gi/o were mixed at room temperature as described above, and 5 µL aliquots were collected at the indicated time points (before mixing, 10 s, 5 min, 20 min, 60 min, 90 min, 110 min, 150 min, and 180 min), mixed with 25 µL of D 2 O buffer, and incubated for 10 s or 100 s at room temperature. The deuterated samples were quenched using 30 µL of ice-cold quench buffer (0.1 M NaH 2 PO 4 and 20 mM TCEP (pH 2.01)), snap-frozen on dry ice, and stored at −80°C. Non-deuterated samples were prepared by mixing 5 µL of protein sample with 25 µL of the respective H 2 O buffers, followed by quenching and freezing, as described above.
The quenched samples were digested and the peptides isolated using the HDX-UPLC-ESI-MS system (Waters, Milford, MA, USA). Briefly, the quenched samples were thawed and immediately injected onto an immobilized pepsin column (2.1 × 30 mm) (Life Technologies, Carlsbad, CA, USA) at a flow rate of 100 µL min −1 in 0.05% formic acid in H 2 O at 12°C. The peptic peptides were then collected on a C18 VanGuard trap column (1.7 µm × 30 mm) (Waters) for desalting with 0.05% formic acid in H 2 O and subsequently separated by ultra-pressure liquid chromatography using an Acquity UPLC C18 column (1.7 µm, 1.0 × 100 mm) (Waters) at a flow rate of 40 µL min −1 with an acetonitrile gradient from 8 to 85% over 8.5 min using two pumps. Mobile phase A was 0.1% formic acid in H 2 O and mobile phase B was 0.1% formic acid in acetonitrile. Buffers were adjusted to pH 2.5, and the system was maintained at 0.5°C (except pepsin digestion, which was performed at 12°C) to minimize back-exchange of deuterium to hydrogen. Mass spectra were acquired on a Xevo G2 quadrupole-time-of-flight (Q-TOF) instrument equipped with a standard electrospray ionization (ESI) source in MS E mode (Waters) with positive ion mode. The capillary, cone, and extraction cone voltages were set to 3 kV, 40 V, and 4 V, respectively. Source and desolvation temperatures were set to 120°C and 350°C, respectively. Trap and transfer collision energies were set to 6 V, and the trap gas flow was set to 0.3 mL min −1 . The mass spectrometer was calibrated with sodium iodide solution (2 µg µL −1 ). [Glu1]-Fibrinopeptide B solution (200 fg µL −1 ) in MeOH:water (50:50 (v/v) + 1% acetic acid) was used for lock-mass correction; the ions at mass-to-charge ratio (m/z) 785.8427 were monitored at a scan time of 0.1 s with a mass window of ±0.5 Da. The reference internal calibrant was introduced at a flow rate of 20 µL min −1 using the lock-mass sprayer, and the acquired spectra were automatically corrected using the lock mass. Two independent interleaved acquisitions were automatically created: the first function, typically set at 4 eV, collected low-energy (unfragmented) data, while the second function collected high-energy (fragmented) data, typically obtained using a collision ramp from 30-55 eV. Ar gas was used for collision-induced dissociation (CID). Mass spectra were acquired in the range of m/z 100-2000 for 10 min. ProteinLynx Global Server 2.4 (Waters) was used to identify peptic peptides from the non-deuterated samples with variable methionine oxidation modification and a peptide score of 6. DynamX 2.0 software (Waters) was used to determine the level of deuterium uptake for each peptide by measuring the centroid of the isotopic distribution. The average back-exchange in our system was ~30-40%, but we did not correct for back-exchange because the proteins aggregate in fully deuterated samples. All data were derived from at least three independent experiments. The detailed HDX-MS results are summarized in the supplementary data, which were generated according to Masson et al.'s recommendations 41 .
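The deuterium uptake determined by DynamX comes from the centroid of each peptide's isotopic distribution. As a minimal illustration of that calculation, the sketch below computes an intensity-weighted centroid mass for a deuterated and a non-deuterated envelope and reports their difference; the spectra and isotope spacing are synthetic.

```python
# Sketch: deuterium uptake from the intensity-weighted centroid of an isotopic
# envelope, relative to a non-deuterated control (synthetic spectra).
import numpy as np

def centroid_mass(mz, intensity, charge):
    """Intensity-weighted mean m/z converted to neutral peptide mass."""
    mz = np.asarray(mz, float)
    intensity = np.asarray(intensity, float)
    c = np.sum(mz * intensity) / np.sum(intensity)
    return (c - 1.00728) * charge  # remove proton mass per charge, scale by z

# Isotope peaks of a 2+ peptide; ~1.003 Da spacing divided by the charge.
mz_axis = 800.0 + np.arange(6) * (1.00336 / 2)
undeut = centroid_mass(mz_axis, [10, 40, 30, 15, 4, 1], charge=2)
deut = centroid_mass(mz_axis, [2, 10, 25, 35, 20, 8], charge=2)
print(f"deuterium uptake = {deut - undeut:.2f} Da")
```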
GDP release assay. Purified Gα subunit (200 nM) of each G protein (GoA or Gi3) was mixed with 50 nM of [ 3 H]GDP for 1 h at room temperature, in a buffer containing 20 mM HEPES (pH 7.5), 100 mM NaCl, 0.1% DDM, 100 μM TCEP, and 2 μM GDP. Thereafter, 2 μM of purified Gβγ was added. After 10 min of incubation, 5 μM of BI-167107-bound β 2 AR, iperoxo-bound M2R, or the corresponding DDM buffer, of similar volume, was further added to initiate GDP release, in the presence of 1 μM GDP. The reaction mixture was aliquoted at indicated time points, and immediately loaded onto calibrated G-50 columns. The flow through was collected with 1.1 mL of buffer (20 mM HEPES (pH 7.5), 100 mM NaCl, and 0.1% DDM), and GoA-or Gi3-bound [ 3 H]GDP was measured using a scintillation counter (Beckman Coulter, Brea, CA, USA), after adding 15 mL of scintillation fluid. The initial sample represents [ 3 H]GDP binding capacity of GoA or Gi3, before initiation of GDP release.
BODIPY-GTPγS assay. Nucleotide binding to GαoA and Gαi3 was determined by measuring the change in fluorescence intensity of BODIPY-FL-GTPγS (Thermo Fisher Scientific, Waltham, MA, USA) in an imaging buffer comprising 20 mM Tris-HCl (pH 8.0), 1 mM EDTA, 10 mM MgCl 2 , and 100 µM TCEP. The fluorophore was excited at 485 nm (bandwidth 14 nm) and the emission was recorded at 535 nm (bandwidth 25 nm) using a TriStar2 S LB 942 Multimode Microplate Reader (Berthold, Germany). The baseline in the absence of protein samples was determined by measuring the fluorescence intensity of imaging buffer with or without 50 nM BODIPY-FL-GTPγS for 120 s. GαoA and Gαi3 prepared in 20 mM HEPES (pH 7.4), 100 mM NaCl, 2 mM MgCl 2 , 100 µM TCEP, and 10 µM GDP were mixed with imaging buffer with or without 50 nM BODIPY-FL-GTPγS at a 1:10 dilution (1.5 µM final GαoA and Gαi3 concentration). The changes in fluorescence were measured for 60 min. Data points were collected every 10 s using a 96-well black plate. All steps were carried out at room temperature. The spectra were corrected by the measurements in the absence of BODIPY-FL-GTPγS and normalized by setting the peak fluorescence of BODIPY-FL-GTPγS in the presence of protein as 100.
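The correction and normalization steps described above can be expressed directly: subtract the no-fluorophore baseline trace, then rescale so that the peak fluorescence in the presence of protein equals 100. The traces in the sketch below are synthetic.

```python
# Sketch of the baseline correction and peak-to-100 normalization described
# above, applied to synthetic fluorescence traces (not the measured data).
import numpy as np

t = np.arange(0, 3600, 10)                     # one reading every 10 s for 60 min
blank = 50.0 + np.zeros_like(t, dtype=float)   # buffer without BODIPY-GTPgS
signal = 50.0 + 900.0 * (1 - np.exp(-t / 600.0))  # protein + BODIPY-GTPgS

corrected = signal - blank                     # remove background fluorescence
normalized = 100.0 * corrected / corrected.max()  # peak set to 100
print(f"half-maximal binding reached at ~{t[np.argmax(normalized >= 50)]} s")
```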
Statistics and reproducibility. Results are representative of at least three independent experiments and are expressed as mean ± S.E.M. Statistical analysis was performed using GraphPad Prism software (San Diego, CA, USA). Statistical significance of time-dependent changes was determined by repeated-measures one-way ANOVA (rANOVA) at an α level of 0.01, followed by Tukey's multiple comparison test; change in a time series as a whole was considered significant when the F statistic was >1. The significance of differences between two time points within a series or between two groups was determined by paired or unpaired two-tailed Student's t-test. The significance of differences among more than two groups was determined by one-way ANOVA. Analyses were considered significant at p < 0.05.
Reporting summary. Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Data availability
Data supporting the findings of this manuscript are available from the corresponding authors upon reasonable request. A reporting summary for this Article is available as a Supplementary Information file.
HDX-MS data have been deposited to the ProteomeXchange Consortium 42 via the PRIDE 43 partner repository with the dataset identifier PXD019367. The source data underlying Figs. 1b, 3b-e, and 5b-f are provided as a Source Data file. Data for the sequence analysis in Fig. 5d-f are available from the GPCR database (gpcrdb.org).
Dosimetric impact of source-positioning uncertainty in high-dose-rate balloon brachytherapy of breast cancer
Purpose To evaluate the dosimetric impact of source-positioning uncertainty in high-dose-rate (HDR) balloon brachytherapy of breast cancer. Material and methods For 49 HDR balloon patients, each dwell position of the catheter(s) was manually shifted distally (+) and proximally (–) with a magnitude from 1 to 4 mm. In total, 392 plans were retrospectively generated and compared to the corresponding clinical plans using 7 dosimetric parameters: dose (D95) to 95% of the planning target volume for evaluation (PTV_EVAL), and volume covered by 100% and 90% of the prescribed dose (PD) (V100 and V90); skin and rib maximum point dose (Dmax); and normal breast tissue volume receiving 150% and 200% of PD (V150 and V200). Results PTV_EVAL dosimetry deteriorated, with larger average/maximum reduction (from ± 1 mm to ± 4 mm) for larger source-position uncertainty (p value < 0.0001): from 1.0%/2.5%, 3.3%/5.9%, 6.3%/10.0% to 9.8%/14.5% for D95; from 1.0%/2.6%, 3.1%/5.7%, 5.8%/8.9% to 8.7%/12.3% for V100; from 0.2%/1.5%, 1.0%/4.0%, 2.7%/6.8% to 5.1%/10.3% for V90. A shift of ≥ ± 3 mm reduced average D95 to < 95% and average V100 to < 90%. While the skin and rib Dmax change was case-specific, its absolute change (∣Δ(Value)∣) showed that larger shifts and the high-dose group had larger variation than smaller shifts and the lower-dose group, respectively (p value < 0.0001). Normal breast tissue V150 variation was case-specific and small. Average ∣Δ(V150)∣ was 0.2 cc for the largest shift (± 4 mm), with a maximum < 1.7 cc. V200 increased, with higher elevation for larger shifts: from 6.4 cc/9.8 cc, 7.0 cc/10.1 cc, 8.0 cc/11.3 cc to 9.2 cc/13.0 cc. Conclusions The tolerance of ± 2 mm recommended by AAPM TG 56 is clinically acceptable in most clinical cases. However, special attention should be paid to a case where both skin and rib are located proximally to the balloon and the orientation of the balloon catheter(s) is perpendicular to these critical structures. In this case, sufficient dosimetric planning margins are required.
Purpose
In three-dimensional (3D) computed tomography (CT) image-based treatment planning and delivery using a 192 Ir high-dose-rate (HDR) afterloader, several factors are associated with source-positioning uncertainty. The first is the reconstruction uncertainty of the 3D planning CT images in the commercial treatment planning system (TPS). The second results from the inherently manual process of catheter positioning during 3D CT image-based planning. In particular, the catheter(s) inside the balloon can be easily identified during CT image-based planning for an HDR balloon breast implant. Because image contrast fluid is injected into the balloon, the catheter(s) can appear either as a clearly visible black line, due to its low density, or as a white line if a dummy metal wire (high density) is inserted through the catheter(s). However, the blurring artifact at the tip of the catheter makes it inaccurate to define the catheter tip on the planning CT images. Consequently, all possible dwell positions along the catheter share the same magnitude of positioning uncertainty, because they are all determined by the single coordinate of the catheter tip. The last factor is the mechanical accuracy of the afterloader in positioning the 192 Ir source at the intended dwell positions for radiation delivery. Afterloader positioning accuracy within ± 1 mm has been considered clinically acceptable [1]. The American Association of Physicists in Medicine (AAPM) task group 56 (TG 56) [2] recommended a clinically acceptable accuracy in positioning an 192 Ir source of within ± 2 mm relative to the applicator system in HDR brachytherapy.
However, it is not well known how much these source-positioning uncertainties perturb the delivered dose relative to the planned dose. It is therefore clinically important to investigate the overall dosimetric impact of these uncertainties combined. In conventional 2D image-based planning, a point dose is used for prescription and dosimetric reporting. In contrast, 3D CT image-based planning enables computation of dose-to-volume information, such as the dose volume histogram (DVH), for plan evaluation. The source-positioning uncertainty can therefore be directly translated into its clinical impact, i.e., DVH change.
In the literature, the same investigators published two Monte Carlo simulation studies [3,4] on dose perturbation due to source-positioning uncertainty in HDR balloon brachytherapy of breast cancer. One study [3] examined a single MammoSite patient with a skin spacing of 0.7 cm. The other [4] examined a phantom with three balloon sizes (4, 5, and 6 cm diameter). In both studies, an HDR 192Ir source was positioned at the center of the balloon (a single dwell position method) to generate the reference treatment plan. Both studies reported the same amount of dose perturbation: +1 mm and +2 mm positioning uncertainty reduced the surface dose 1 cm away from the balloon by 7% and 14% of the prescribed dose, respectively, while increasing the dose at other parts of the target volume by 9% and 19%, respectively. In the second study [4], the maximum dose perturbation was observed for the smallest balloon diameter. Therefore, both studies suggested a maximum source deviation of ≤ 1 mm as the clinically acceptable positioning uncertainty.
However, several concerns arise when applying these suggestions to clinical cases. In clinical treatment planning, it is known that a multiple dwell position method combined with surface optimization can produce a better dose distribution than a single dwell position method [5,6,7,8]. Dose perturbation due to source-positioning uncertainty depends strongly on the reference dose distribution; in these two studies [3,4], the reference dose distribution was less clinically relevant because a single dwell position method was used. Second, to provide clinically useful guidelines, the sample size should be large enough to include various clinical cases with different balloon positions relative to organs at risk (OARs). In addition, in 3D CT image-based treatment planning, the clinically relevant dosimetric parameters include not only target coverage but also OAR doses. Therefore, in this study, source-positioning uncertainty was simulated for 49 clinical HDR balloon breast patients, and its dosimetric impact was investigated for target coverage and OAR dose.
3D CT image-based treatment planning
A total dose of 34 Gy was prescribed to the surface 1 cm from the intracavitary balloon and delivered twice a day, at least 6 hours apart, on five consecutive working days using an HDR 192Ir source. The 3D CT image-based treatment plan followed a national joint clinical trial by the National Surgical Adjuvant Breast and Bowel Project/the Radiation Therapy Oncology Group (NSABP B-39/RTOG 0413) [9]. The CT slice thickness was 3 mm for the single lumen MammoSite® (Hologic Inc.) applicator and 2 mm for the Contura® MLB (Hologic Inc.) applicator.
First, a spherical shell volume was constructed by excluding the balloon volume from a 1 cm 3D expansion of the balloon. Breast tissue within 0.5 cm of the skin surface, as well as the chest wall and pectoralis muscles, was then excluded from the spherical shell if the balloon was located close to the skin and rib. The final volume was taken as the PTV_EVAL. Normal breast tissue, skin, and rib were delineated on the 3D CT images as OARs, and their DVHs were generated accordingly. To report the maximal point dose of the skin objectively, a virtual skin volume was constructed as a 0.5 cm expansion from the skin surface outside the patient body; the maximal point dose of this virtual skin volume therefore lies on the skin surface, matching the manual method of reporting the maximal skin point dose [10].
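For readers who think in code, the Boolean volume logic above can be sketched as mask operations on a voxel grid. This is a minimal illustration, assuming pre-computed 3D boolean masks on a common CT grid; the function and argument names are illustrative and not taken from any TPS API.

```python
import numpy as np

def build_ptv_eval(balloon, balloon_exp_1cm, skin_minus_5mm, chestwall_pecs):
    """Boolean PTV_EVAL logic sketched from the description above.
    All arguments are numpy boolean masks on the same CT voxel grid;
    the names are illustrative, not a TPS API."""
    shell = balloon_exp_1cm & ~balloon                 # 1 cm spherical shell
    return shell & ~skin_minus_5mm & ~chestwall_pecs   # exclusions near skin/rib

# Tiny demo on a 3-voxel grid (True = inside structure)
balloon = np.array([False, True, False])
exp1cm  = np.array([True,  True, True])
skin5   = np.array([True,  False, False])
chest   = np.array([False, False, False])
print(build_ptv_eval(balloon, exp1cm, skin5, chest))   # [False False  True]
```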
For the multiple dwell positions defined by the catheters inside the balloon, the volume optimization technique commercially available in a TPS (BrachyVision™ version 8.1.2.0, Varian Medical Systems Inc., Palo Alto, CA, USA) was used to determine an optimal dwell time distribution. In this TPS, the AAPM TG 43 formalism is used for dose calculation without accounting for tissue heterogeneity. The planning goals were adapted from the NSABP B-39/RTOG 0413 protocol [9] and the Contura® registry protocol [11]. For target coverage, the dose (D95) to 95% of the PTV_EVAL volume should preferably exceed 95% of the prescribed dose (PTV_EVAL D95 ≥ 95%). If this is difficult, at least 90% of the PTV_EVAL volume should be covered by 90% of the prescribed dose (V90 ≥ 90%) after subtracting the air gap and seroma volume relative to the PTV_EVAL volume. The OAR doses were limited as follows: the maximal point dose (Dmax) to the skin and rib was limited to 125% and 145% of the prescribed dose, respectively (skin Dmax ≤ 125% and rib Dmax ≤ 145%). To avoid breast tissue necrosis, the high-dose volume within the normal breast tissue was limited so that the volumes (V150 and V200) receiving 150% and 200% of the prescribed dose were less than 50 cc and 10 cc, respectively (V150 ≤ 50 cc and V200 ≤ 10 cc). For a clinically difficult case, in which the balloon is located close to both skin and rib, either target coverage or OAR dose was compromised to produce a clinically acceptable plan.
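The planning goals above amount to a small set of threshold checks. The following sketch encodes them exactly as stated in the text; the function is a hypothetical helper, not part of BrachyVision or any protocol software.

```python
def check_plan(d95, v90, skin_dmax, rib_dmax, v150_cc, v200_cc):
    """Check a plan against the planning goals listed above.

    d95, v90, skin_dmax, rib_dmax are in % of the prescribed dose;
    v150_cc and v200_cc are absolute volumes in cc.
    """
    return {
        "PTV_EVAL D95 >= 95% (preferred)": d95 >= 95.0,
        "PTV_EVAL V90 >= 90% (minimum)":   v90 >= 90.0,
        "skin Dmax <= 125%":               skin_dmax <= 125.0,
        "rib Dmax <= 145%":                rib_dmax <= 145.0,
        "breast V150 <= 50 cc":            v150_cc <= 50.0,
        "breast V200 <= 10 cc":            v200_cc <= 10.0,
    }

# Example: a plan meeting all goals (values are illustrative)
print(check_plan(d95=96.2, v90=97.5, skin_dmax=118.0,
                 rib_dmax=132.0, v150_cc=38.0, v200_cc=6.2))
```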
Source-positioning uncertainty simulation
In this study, all source-positioning uncertainties in 3D CT image-based treatment planning and delivery were assumed to combine into a single source-positioning uncertainty ranging from ± 1 mm to ± 4 mm, although the AAPM TG 56 tolerance is ± 2 mm and the afterloader positioning accuracy is ± 1 mm. Positioning uncertainty from ± 1 mm to ± 4 mm was simulated to investigate its dosimetric impact on 49 clinical 3D CT image-based HDR planning data sets.
For 25 single lumen MammoSite® balloon HDR patients, all source dwell positions were manually shifted along the catheter distally (+) and proximally (–) by 1 mm, 2 mm, 3 mm, and 4 mm. A 3D visual inspection verified the shift of source positions relative to the balloon. For 24 Contura® MLB HDR patients, the same source position shift simulation was applied to all five catheters simultaneously. Although in a real clinical case the source-positioning uncertainty occurs individually for each catheter, it is impractical to simulate independent shifts of 5 catheters for every patient: each catheter independently has 9 possible states (8 shifts from -4 mm to +4 mm plus no shift), so 9^5 = 59,049 simulations would be required to mimic individual catheter shifts for every Contura® MLB patient. Instead, 8 simulations were performed for each Contura® MLB patient, the same as for MammoSite® patients, to investigate the maximum range of dose variation for each scenario (see the sketch below).
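The combinatorics behind this simplification can be verified in a few lines. This sketch assumes nothing beyond the shift grid described above; it reproduces the 59,049 figure for independent per-lumen shifts and the 392 simulated plans actually generated.

```python
from itertools import product

# Shift states per catheter: -4..+4 mm in 1 mm steps, 0 = no shift
states_mm = range(-4, 5)                     # 9 states
n_lumens = 5                                 # Contura MLB

# Exhaustive per-catheter space that the study avoids: 9**5 combinations
full_space = list(product(states_mm, repeat=n_lumens))
print(len(full_space))                       # 59049

# Simplified scheme used instead: all lumens shifted together
joint_shifts = [s for s in states_mm if s != 0]
print(joint_shifts)                          # [-4, -3, -2, -1, 1, 2, 3, 4]
print(len(joint_shifts) * 49)                # 392 simulated plans
```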
Therefore, a total of 392 plans (8 simulations for each of 49 patients) were retrospectively produced and compared to the clinical treatment plans. For each simulated plan, seven dosimetric parameters were evaluated: PTV_EVAL D95, V100, and V90 for target dosimetry; and skin and rib Dmax, and normal breast tissue V150 and V200, for OAR dosimetry.
Results
For each patient, eight simulated plans were generated, and the seven dosimetric parameters of each simulated plan were compared with those of the treatment plan (reference plan). Hence, a total of 2744 dosimetric data points from the 392 retrospectively simulated plans were compared to the 343 reference dosimetric data points from the 49 clinical treatment plans. Table 1 classifies the dosimetric changes into three categories (increase, decrease, and invariance) relative to the reference treatment plan. PTV_EVAL dosimetric indices (D95, V100, and V90) decreased in most simulations (96.4%) and increased in 3.6%. Skin and rib Dmax values increased in 51% of simulations, decreased in 48.1%, and were invariant in 0.9%. For normal breast tissue dosimetry, V150 increased in 58.2% of simulations, decreased in 32.4%, and was invariant in 9.4%. V200 increased in most simulations (92.1%), decreased in 6.6%, and was invariant in 1.3%.
When presenting group data, if most data in a group show a common trend (e.g., a decrease), descriptive statistics usefully display that trend. Hence, statistical box graphs were used to show the trend of deviations in PTV_EVAL D95, V100, and V90 (Figure 1) and normal breast tissue V200 (Figure 2). In contrast, if each individual datum is case-specific within the group, the variation of individual data cannot be shown with group statistics, because positive and negative deviations cancel each other out when the individual data are grouped. Therefore, individual line graphs were employed to show individual variations of skin and rib Dmax (Figure 3). To avoid cancellation between positive and negative changes within a simulation group, particularly for the skin and rib Dmax values and the normal breast tissue V150 values, the absolute variation (modulus of change: |Δ(Value)| = |Value_simulation - Value_reference|) was also investigated across the 8 simulation groups.
Clinical treatment plan (reference plan)
The Contura® registry protocol target coverage requirement (PTV_EVAL D95 ≥ 95%) was satisfied in 44 clinical treatment plans and violated in one single lumen MammoSite® plan and four Contura® plans. In those five cases, PTV_EVAL coverage was compromised to reduce rib Dmax because the balloon was located proximal to the rib (< 5 mm); the average and maximum rib Dmax for these cases were 150.5% and 160.9% of the prescribed dose, respectively. Furthermore, two of those five patients had the Contura® balloon proximal to the skin as well, so the treatment plan was optimized to reduce skin Dmax to ≤ 125% of the prescribed dose (105% and 113% of the prescribed dose for these two Contura® MLB patients). For all five cases, the minimum target coverage requirement (V90 ≥ 90%) was satisfied. Skin Dmax was ≤ 125% of the prescribed dose for 47 patients and > 125% for two Contura® patients: one patient (skin Dmax of 137.5%) had a stringent dose constraint on normal breast tissue V200 (9.7 cc), and the other (skin Dmax of 129.4%) had a higher priority on target coverage than on skin Dmax in the volume optimization. Rib Dmax was ≤ 145% of the prescribed dose for 44 patients and > 145% for two single lumen MammoSite® patients and three Contura® patients. In the treatment planning for single lumen MammoSite® patients, the rib was not considered as an OAR. For the three Contura® plans, the PTV_EVAL coverage constraint (D95 ≥ 95%) had a higher priority than rib Dmax, and the highest rib Dmax was 156% of the prescribed dose. Normal breast tissue V150 and V200 were always < 50 cc (maximum of 44.5 cc) and < 10 cc (maximum of 9.7 cc), respectively. The descriptive statistics for the clinical treatment plans are summarized in Table 2 as the reference (Ref) group, using four parameters: average (Mean), standard deviation (SD), minimum (Min), and maximum (Max) values.
Skin and rib D max variation
To display skin and rib Dmax variation for the 8 simulation scenarios, the 49 patients were ranked by the value in the reference plan and categorized into two groups: a low dose group (25 patients) and a high dose group (24 patients). Skin and rib Dmax values increased in 400 simulation cases, decreased in 377, and were invariant in 7 (Table 1). In some patients, the skin and rib Dmax change was large. For instance, for one patient, -1 mm and -4 mm shifts increased skin Dmax by 9.1% and 41.9% of the prescribed dose, respectively; for another, +1 mm and +4 mm shifts increased rib Dmax by 9.3% and 40.2% of the prescribed dose, respectively. In contrast, the change was small in other patients: a -4 mm shift reduced skin Dmax by only 0.03% of the prescribed dose in one patient, and a +4 mm shift reduced rib Dmax by only 0.14% in another. Therefore, skin and rib Dmax variation is highly case-specific and independent of the direction and magnitude of the shift. However, the modulus of the change (|Δ(Skin Dmax)| and |Δ(Rib Dmax)|) showed a statistically significant difference among the 8 simulation groups (p < 0.0001) using a nonparametric repeated measures ANOVA (Friedman test). Hence, in general, larger source position uncertainty causes larger |Δ(Skin Dmax)| and |Δ(Rib Dmax)|. In addition, for each simulation, |Δ(Skin Dmax)| and |Δ(Rib Dmax)| were compared using a nonparametric test (Mann-Whitney test). All 8 p values were > 0.75, indicating no statistical difference in |Δ(Dmax)| between skin and rib.
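As an illustration of the test used here, the following sketch runs a Friedman test across the 8 shift scenarios on synthetic |Δ(Dmax)| values; the data are randomly generated stand-ins, not the study's measurements.

```python
import numpy as np
from scipy.stats import friedmanchisquare

rng = np.random.default_rng(0)

# Illustrative |delta(Dmax)| data (% of prescribed dose): rows = patients,
# columns = the 8 shift scenarios; variation grows with shift magnitude
shifts = [-4, -3, -2, -1, 1, 2, 3, 4]
abs_delta = np.abs(rng.normal(0.0, 1.0, (49, 8))) * np.abs(shifts)

stat, p = friedmanchisquare(*abs_delta.T)   # one sample per shift scenario
print(f"Friedman chi-square = {stat:.1f}, p = {p:.2g}")
```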
Furthermore, to investigate the significance of the dose variation difference between the low dose (25 patients) and high dose (24 patients) groups, |Δ(Value)| over the 8 simulations was averaged for each patient, and statistical analysis was performed between the two patient groups. A nonparametric test (Mann-Whitney) was used because the sample size was not sufficiently large and the data did not follow a normal distribution. Although the individual dose variation was highly case-specific, as seen in Figure 3, the statistical analysis (Table 3) demonstrated that the high dose group showed a higher average |Δ(Dmax)| than the low dose group for skin Dmax (p = 0.0003) and rib Dmax (p = 0.0004). For the combined skin and rib Dmax data, the statistical difference was more pronounced (p < 0.0001).
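Similarly, the between-group comparison can be sketched with SciPy's Mann-Whitney test on illustrative per-patient averages; the values below are fabricated stand-ins chosen only to match the reported group sizes.

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)

# Per-patient mean |delta(Dmax)| (% of prescribed dose), illustrative only:
# the high dose group tends to show larger variation than the low dose group
low_dose  = np.abs(rng.normal(1.0, 0.8, 25))   # 25 patients
high_dose = np.abs(rng.normal(3.0, 2.0, 24))   # 24 patients

stat, p = mannwhitneyu(low_dose, high_dose, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.0f}, p = {p:.2g}")
```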
Normal breast tissue dose (V 150 and V 200 ) variation
The normal breast tissue V150 change was case-specific: it either increased (58%) or decreased (32%) (Table 1) and was invariant in 10% of simulation cases. The variation of V150 was too small to be noticeable for all patients. For the two largest simulations (± 4 mm shift), the average |Δ(V150)| was 0.2 cc. Even in the worst scenario, the maximum deviation of V150 was less than 2 cc: 1.6 cc and 0.8 cc for the -4 mm and +4 mm shift scenarios, respectively.
The normal breast tissue V200 increased in the majority of simulations (92%, Table 1). The elevation grew gradually with the magnitude of positioning uncertainty, as shown in Figure 2. The average V200 was 6.2 cc in the reference plans and increased from 6.4 cc (± 1 mm) and 7.0 cc (± 2 mm) to 8.0 cc (± 3 mm) and 9.2 cc (± 4 mm). The average increase of V200 was as high as 2.9 cc and 3.2 cc for the -4 mm and +4 mm shift scenarios, respectively. The worst cases of the ≥ ± 2 mm simulations violated the V200 requirement (> 10 cc). A Friedman test performed among the 8 simulation groups and the reference group showed a statistically significant difference among groups (p < 0.0001).
Discussion
Due to source-positioning uncertainty, the prescribed-dose isodose surface, which originally conforms to the outer surface of the spherical shell, is shifted along the axis of the balloon either proximally or distally, depending on the direction and magnitude of the positioning inaccuracy. Hence, PTV_EVAL coverage (D95, V100, and V90 in Figure 1) deteriorates with any shift of the optimal dose distribution along the balloon axis. However, in some clinical cases, the position of the total treatment length (the sequential sum of possible dwell positions) inside the balloon was slightly off-center. In this study, the source stepping size was set to 5 mm, the same as the physical size of the 192Ir source. For those cases (42 out of 1176 cases in Table 1), a small shift along the balloon axis could increase target coverage (PTV_EVAL D95, V100, or V90): 39 cases with ± 1 mm shift and 3 cases with ± 2 mm shift.

Table 3. Statistical comparison between the low dose and high dose groups of skin and rib Dmax changes. For each patient, the modulus of the Dmax change was averaged over all 8 positioning uncertainty simulations. The 49 patients were ranked by the Dmax value of the reference plan and categorized into two groups (low and high Dmax).

In general, the 200% isodose line overlaps the balloon surface, and the normal breast tissue volume (V200) receiving 200% of the prescribed dose is limited to ≤ 10 cc. If the 200% isodose surface were spherical, it would always increase with a shift of the source positions. However, in a clinically difficult case where the skin and rib are both located proximal to the balloon, the optimal dose distribution must conform to the shape of the outer surface of the PTV_EVAL. The PTV_EVAL is then not a spherical shell, as both spherical caps on the skin and rib sides are excluded, resulting in ellipsoidal 200% isodose curves. In addition, asymmetry of the total treatment length inside the balloon can induce asymmetry of the 200% isodose lines relative to the balloon center. Because of these effects, the normal breast tissue V200 decreased due to source-position uncertainty in some simulation cases (26 out of 392 cases in Table 1).
The schematic diagrams in Figure 4 depict three representative clinical cases for the single lumen MammoSite® balloon applicator, depending on the minimal distances from the balloon to the skin/rib structures (skin spacing and rib spacing): (A) both spacings ≥ 0.7 cm; (B) either spacing (here, the skin spacing) < 0.7 cm; (C) both spacings < 0.7 cm. In Case (A), an optimal dose distribution can be obtained with eight available dwell positions, conforming to the outer surface of the spherical shell; skin and rib Dmax are below the prescribed dose because both spacings are > 1 cm. In Case (B), skin Dmax exceeds the prescribed dose due to the skin spacing of < 0.7 cm; it can be reduced below the prescribed dose if an MLB applicator is used. In Case (C), both skin and rib Dmax would exceed the prescribed dose due to skin and rib spacings of < 0.7 cm. Both Dmax values can be reduced with an MLB applicator, and the dose distribution may become ellipsoidal, conforming to the outer surface of the PTV_EVAL (grey color in Figure 4C). However, even though the MLB can reduce OAR doses with its multiple lumens, its dose shaping capability is highly limited if the orientation of balloon insertion is vertical to the OARs, as in Orient (V) in Figure 4B and 4C: all outer lumens are perpendicular to the skin and rib, and the minimal distances from each outer lumen to the OARs are the same. Therefore, to maximize the dose shaping capability of the MLB applicator, the best orientation of balloon insertion is parallel to the skin and rib, as in Orient (P) in Figure 4.
The deviation of skin and rib Dmax values due to the shift of source positions is highly dependent on the relative location of the skin and rib to the balloon. In brachytherapy, the inverse square law is the dominant factor determining dose: the farther a structure is from the balloon, the smaller its Dmax and, hence, the smaller its dose variation due to positioning uncertainty.

Figure 4 caption: A) The balloon is located more than 1 cm away from the volume of (skin - 5 mm) and the pectoralis muscles; the PTV_EVAL is a spherical shell of 1 cm thickness, denoted in grey. B) The skin spacing is less than 0.7 cm and the volume of (skin - 5 mm) is excluded from the spherical shell; the PTV_EVAL is a spherical shell excluding the cap on the skin side. C) The (rib + pectoralis muscle) spacing is also less than 0.7 cm and that volume is also excluded; the PTV_EVAL is a spherical shell excluding both caps on the skin and rib sides. In all diagrams, two extreme balloon insertion orientations are displayed: one vertical, "Orient (V)", and the other parallel, "Orient (P)", to the skin and rib.

In this study, another important factor affecting dose variation is the orientation of balloon insertion, because the source-positioning uncertainty occurs exclusively along the catheter(s). Examining the two extreme cases in Figure 4 elucidates the importance of the insertion orientation. If the orientation of balloon insertion is vertical to the skin/rib (Orient (V) in Figure 4), the optimal dose distribution conforming to the outer surface of the PTV_EVAL will be shifted either toward or away from the skin/rib structures; the minimum distances from the skin and rib to the source positions then change drastically, resulting in high variation of skin or rib Dmax. In contrast, if the orientation of balloon insertion is parallel to the skin/rib (Orient (P) in Figure 4), the shift of the optimal dose distribution along the catheter affects skin and rib Dmax only insignificantly, because both structures lie roughly parallel to the balloon catheter; the variation of the minimum source-to-structure distances is small, and the variation of skin/rib Dmax is therefore also expected to be small. Typically, the low dose group has smaller variation of skin and rib Dmax than the high dose group (p < 0.0001 in Table 3) due to the inverse square law, as seen in Figure 3 (left panel vs. right panel). However, the effect of insertion orientation is noticeable in some patients in Figure 3: the average |Δ(Skin Dmax)| exceeded 3% of the prescribed dose for 5 patients (5 solid lines in the left panel of Figure 3A) in the low dose group, while it was below 3% for 8 patients (8 dashed lines in the right panel of Figure 3A) in the high dose group. The same phenomenon was observed for rib Dmax in Figure 3B: the average |Δ(Rib Dmax)| was > 3% of the prescribed dose for 6 patients (6 solid lines in the left panel of Figure 3B) in the low dose group, while it was < 3% for 8 patients (8 dashed lines in the right panel of Figure 3B) in the high dose group.
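The geometric argument above can be made concrete with an inverse-square-only toy calculation (ignoring the radial dose function and anisotropy of the TG-43 formalism). The geometry is illustrative: a skin point 3 cm from the balloon center, and a 4 mm source shift along a catheter oriented either vertical (toward the skin point) or parallel to the skin.

```python
import numpy as np

def point_dose(source, point):
    """Relative dose from a point source, inverse-square term only
    (TG-43 radial dose function and anisotropy are ignored)."""
    return 1.0 / np.sum((np.asarray(point) - np.asarray(source)) ** 2)

skin_point = np.array([0.0, 0.0, 30.0])   # mm; 3 cm from balloon centre

for label, axis in [("vertical", np.array([0.0, 0.0, 1.0])),
                    ("parallel", np.array([1.0, 0.0, 0.0]))]:
    d0 = point_dose([0.0, 0.0, 0.0], skin_point)   # planned position
    d4 = point_dose(4.0 * axis, skin_point)        # +4 mm shift along catheter
    print(f"{label}: relative Dmax change = {100 * (d4 - d0) / d0:+.1f}%")
    # vertical: about +33%; parallel: about -1.7%
```

The large asymmetry between the two orientations mirrors the case-specific Dmax variation reported above.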
For the MLB applicator, the real clinical dosimetric variations due to source-positioning uncertainty should be smaller than the data in this study. Although all five Contura® lumens were shifted together here, in a real clinical case the positioning uncertainty occurs independently for each catheter. If the individual uncertainties occur in a random fashion, the shift directions can oppose one another among the lumens, and the resulting dose perturbation is smaller than reported here. The data in this study are thus limited to the extreme cases. Nevertheless, this study provides clinically meaningful information, namely a guideline on the range of maximum dose variation due to source-positioning uncertainty in HDR balloon brachytherapy of breast cancer.
In general, HDR brachytherapy is delivered as a hypofractionated regimen, and interfraction variation of the applicator shape and position relative to the patient anatomy has become an issue in 3D image-based HDR brachytherapy. Kim et al. [12] measured the interfraction change of the MammoSite® balloon applicator and evaluated its dosimetric impact for 19 patients. They concluded that interfraction variations were patient-specific and fraction-specific; although the average variation and its dosimetric impact were clinically insignificant, the maximum variation was not negligible, and the applicator shape and position should be verified prior to each fraction. In another study, Kim et al. [13] investigated rotation of the MLB applicator between fractions. Based on virtual simulation of device rotation in two representative clinical cases, they reported that even a device rotation as small as 30 degrees could negate the benefit of the MLB applicator if disregarded; hence, verification and correction of device rotation is essential prior to the delivery of each fraction. Recently, Kuo et al. [14] investigated geometric and internal uncertainty for the MLB applicator using 42 CT scans (one planning CT scan and 5 daily verification CT scans for each of 7 patients). They reported interfraction variation in balloon shape in the anterior-posterior and lateral directions, balloon volume, skin and rib spacing, and interfraction dosimetric variation of target coverage. In addition, each catheter was systematically shifted up to ± 4 mm, as in this study, and the data were similar: as the magnitude of shift increased, PTV_EVAL V90 decreased by 0.5% (± 1 mm), 1.7% (± 2 mm), 3.5% (± 3 mm), and 5.7% (± 4 mm). They concluded that a ± 2 mm tolerance for HDR quality assurance is clinically reasonable, although the maximum deviation should be avoided by verifying the applicator prior to each fraction.
The American Brachytherapy Society (ABS) recently published a consensus statement [15] for accelerated partial breast irradiation and provided clinical guidelines for appropriate patient selection. The dosimetry guidelines in this ABS consensus referred to the NSABP B-39/RTOG 0413 protocol for the HDR balloon applicator, and an update was recommended for newly emerging applicators. Hence, the dosimetric planning goals in this study followed the NSABP B-39/RTOG 0413 protocol for the single lumen MammoSite® applicator and the Contura® registry protocol for the Contura® applicator, in accordance with the ABS consensus. In addition, a guideline on clinical brachytherapy uncertainties was recently endorsed by the Groupe Européen de Curiethérapie and the European Society for Therapeutic Radiology and Oncology (GEC-ESTRO) and the AAPM, recommending "to present data on the analyzed parameters and also their influence on absorbed dose for clinically-relevant dose parameters" [16]. This study followed these recommendations: source-positioning uncertainty was simulated through an analyzed parameter (source position shift ranging from ± 1 mm to ± 4 mm), and its impact was evaluated through clinically relevant dosimetric changes in target coverage (D95, V100, and V90) and OAR dose (maximal point dose of skin and rib, and V150 and V200 of normal breast tissue). Therefore, the data from this study can serve as general guidelines on dosimetric change due to catheter-positioning uncertainty ranging from ± 1 mm to ± 4 mm in balloon HDR brachytherapy. In addition, special cases were emphasized where the OARs are located close to the balloon and/or the orientation of the balloon catheter is vertical to the OARs.
Conclusions
The tolerance of ± 2 mm recommended by AAPM TG 56 for catheter-positioning uncertainty is clinically acceptable in most clinical cases of HDR balloon brachytherapy of breast cancer. However, in a case where the dosimetry of the treatment plan is close to the dose limits of the clinical protocol, more caution should be exercised, because even a ± 1 mm positioning uncertainty can make the dosimetry violate the protocol limits. In particular, special attention should be paid to cases where the OARs are located proximal to the balloon and the orientation of the balloon catheter is vertical to the OARs.
Emergence of multiple fluconazole-resistant Candida parapsilosis sensu stricto clones with persistence and transmission in China
Abstract

Objectives: We explored the epidemiological and molecular characteristics of Candida parapsilosis sensu stricto isolates in China, and their mechanisms of azole resistance.

Methods: Azole susceptibilities of 2318 non-duplicate isolates were determined using CLSI broth microdilution. Isolates were genotyped by a microsatellite typing method. Molecular resistance mechanisms were also studied and functionally validated by CRISPR/Cas9-based genetic alterations.

Results: Fluconazole resistance occurred in 2.4% (n = 56) of isolates, and these isolates were more frequently distributed among ICU inpatients than susceptible isolates (48.2%, n = 27/56 versus 27.8%, n = 613/2208; P = 0.019). Microsatellite-genotyping analysis yielded 29 genotypes among the 56 fluconazole-resistant isolates, of which 10 genotypes, comprising 37 isolates, belonged to clusters that persisted and were transmitted in Chinese hospitals for 1–29 months. Clusters harbouring Erg11Y132F (5/10; 50%) were predominant in China. Among these, the second most dominant cluster, MT07, comprising seven isolates characteristically harbouring Erg11Y132F and Mrr1Q625K, showed one of the strongest associations with cross-resistance and high MICs of fluconazole (>256 mg/L) and voriconazole (2–8 mg/L), and caused transmission across two hospitals. Among the mutations tested, Mrr1Q625K led to the highest-level increase of fluconazole MIC (32-fold), while mutations located within or near the predicted transcription factor domain of Tac1 (D440Y, T492M and L518F) conferred cross-resistance to azoles.

Conclusions: This study is the first Chinese report of the persistence and transmission of multiple fluconazole-resistant C. parapsilosis sensu stricto clones harbouring Erg11Y132F, and the first demonstration that the mutations Erg11G307A, Mrr1Q625K, Tac1L263S, Tac1D440Y and Tac1T492M confer resistance to azoles.
Isolate collection, genotyping and azole susceptibility testing
The non-duplicate clinical invasive isolates studied were available from the China Hospital Invasive Fungal Surveillance Net (CHIF-NET 2013-2017 programme, from August 2012 to July 2017) and were identified by sequencing of the internal transcribed spacer. Azole susceptibility testing and microsatellite typing were performed as previously reported.13,14 BioNumerics software v7.6 was employed to examine the genetic relationships of genotypes. Singletons were defined as genotypes found in only one isolate. Endemic genotypes or clusters were defined as those with at least two isolates with identical allelic profiles.7
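As a sketch of the cluster/singleton definitions above, the following groups isolates by allelic profile; the isolate IDs and allele tuples are invented for illustration, and the data model is an assumption rather than the BioNumerics representation.

```python
from collections import defaultdict

def classify_genotypes(isolates):
    """Group isolates by microsatellite allelic profile and split the
    resulting genotypes into clusters (>= 2 isolates with identical
    profiles) and singletons, per the definitions above.

    `isolates` maps isolate ID -> tuple of allele sizes (toy model)."""
    by_profile = defaultdict(list)
    for isolate_id, profile in isolates.items():
        by_profile[profile].append(isolate_id)
    clusters   = {p: ids for p, ids in by_profile.items() if len(ids) >= 2}
    singletons = {p: ids for p, ids in by_profile.items() if len(ids) == 1}
    return clusters, singletons

demo = {"CPx1": (120, 133, 141), "CPx2": (120, 133, 141), "CPx3": (118, 135, 141)}
clusters, singletons = classify_genotypes(demo)
print(len(clusters), len(singletons))   # 1 cluster, 1 singleton
```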
Gene sequencing and construction of mutants
MRR1, TAC1, UPC2, ERG11 and ERG3 genes were sequenced and aligned as described previously.6 Single site-directed mutagenesis was performed using the pCP-tRNA plasmid-based system as described previously (Table S1, available as Supplementary data at JAC Online).14
Statistical analysis
Statistical analyses were performed with GraphPad Prism 8.4.2. Differences in MICs (as log2 MIC) of isolates from different sources were analysed by non-parametric Mann-Whitney tests.14 Chi-squared tests were used for categorical variables.
Clinical characteristics of fluconazole-resistant isolates
A total of 2318 episodes of invasive candidiasis caused by C. parapsilosis were submitted from 83 hospitals in 28 provinces in China. Regarding admission, 29.5% of patients were admitted to ICUs, 38.0% to surgical wards and 22.0% to medical wards. Isolates recovered from patients in ICUs showed a significantly higher fluconazole geometric mean (GM) MIC (0.671 mg/L) compared with those from surgical wards (0.569 mg/L; P < 0.05) and medical wards (0.548 mg/L; P < 0.001). Among all sample types, blood-derived isolates (62.9%) had the lowest susceptibility to fluconazole (GM 0.609 mg/L) (Figure S1).
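The GM MIC statistic and the log2-scale comparison used here can be sketched as follows; the MIC values are illustrative two-fold dilution data, not the study's measurements.

```python
import numpy as np
from scipy.stats import mannwhitneyu

def gm_mic(mics):
    """Geometric mean MIC (mg/L) via the mean of log2 MICs."""
    return 2.0 ** np.mean(np.log2(mics))

# Illustrative two-fold dilution series for two wards (mg/L)
icu_mics      = np.array([0.25, 0.5, 0.5, 1, 1, 2, 2])
surgical_mics = np.array([0.25, 0.25, 0.5, 0.5, 1, 1, 2])

print(f"GM MIC (ICU)      = {gm_mic(icu_mics):.3f} mg/L")
print(f"GM MIC (surgical) = {gm_mic(surgical_mics):.3f} mg/L")
stat, p = mannwhitneyu(np.log2(icu_mics), np.log2(surgical_mics))
print(f"Mann-Whitney p = {p:.2g}")
```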
Microsatellite-genotyping analysis yielded 29 genotypes among the 56 fluconazole-resistant isolates (Figure 1 and Table S2). Ten of these genotypes, comprising 37 isolates (66.1%), were considered to belong to clusters, which endemically persisted in China over periods of 1 to 29 months. The 28 isolates carrying Erg11 Y132F were scattered across 12 genotypes, comprising five clusters and seven singletons, and these isolates were more likely to form clusters than fluconazole-resistant isolates without Y132F (75%, n = 21/28 versus 59.3%, n = 16/27). Of these 10 clusters, 8 were restricted to a single hospital, while the second major cluster, MT07, which contained seven isolates characteristically harbouring Erg11 Y132F and Mrr1 Q625K and caused transmission across two hospitals, displayed the highest MICs of fluconazole (>256 mg/L) and voriconazole (2-8 mg/L). The MT34 clone was transmitted across three hospitals in two provinces (Figure 1 and Table S3).
GOF mutations conferring azole resistance
A total of 16 mutations were identified as potentially associated with elevated azole MICs (Figure 2a-c). Introduction of the Erg11 mutations Y132F, K143R or G307A into the susceptible background C. parapsilosis ATCC 22019 led to 16-, 4- and 4-fold increases of fluconazole MIC, respectively; conversely, 8-, 4- and 4-fold decreases were observed upon correcting these mutations in resistant clinical isolates. InterPro analysis demonstrated that the Q625K mutation falls within the fungal-specific transcription factor (TF) domain of Mrr1. Replacing the WT allele with MRR1 Q625K resulted in a maximum increase of 32-fold in fluconazole MIC and modest increases in the MICs of voriconazole (4-fold) and posaconazole (2-fold); restoration of this mutation in clinical isolate CP79 led to reductions of 32-, 8- and 4-fold in the fluconazole, voriconazole and posaconazole MICs, respectively. Introduction of the Tac1 mutations D440Y, T492M or L518F, which lie in or near the predicted TF domain, imparted increased cross-resistance to fluconazole (8-fold for all), posaconazole (2- to 4-fold) and itraconazole (2-fold for all), and a high-level increase of voriconazole MIC (8-fold for all), leading to intermediate susceptibility of ATCC 22019. Furthermore, a 4-fold elevation was observed upon introduction of the Tac1 L263S mutation into ATCC 22019, while rectifying the L263S mutation of CP574 restored its fluconazole profile from resistant (16 mg/L) to susceptible (1 mg/L) (Figure 2d, e and Table S4).
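A note on the fold-change arithmetic used throughout these results: on the two-fold dilution scale, an n-fold change corresponds to a shift of log2(n) dilution steps. A minimal sketch:

```python
import numpy as np

def fold_change(mic_after, mic_before):
    """Fold change between two MICs; on the two-fold dilution scale this
    equals 2 raised to the number of dilution steps between them."""
    return mic_after / mic_before

def dilution_steps(mic_after, mic_before):
    return np.log2(mic_after / mic_before)

# Illustrative numbers echoing the scale of the effects reported above
print(fold_change(32.0, 1.0), dilution_steps(32.0, 1.0))   # 32.0, 5.0
print(fold_change(1.0, 16.0), dilution_steps(1.0, 16.0))   # 0.0625, -4.0
```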
We speculate that Erg11 Y132F C. parapsilosis isolates have a heightened ability to adapt to the environment and host, a property that would facilitate nosocomial transmission. This claim is based in part on our study and other recent studies reporting that Y132F isolates tend to persist chronically in hospitals and to resist the action of disinfectants.3,6,16 Moreover, in-host adaptation has been demonstrated by the significantly higher mortality rates among patients infected with Y132F C. parapsilosis than among those infected with other isolates.1,6,16 However, the lower biofilm-formation capacity found among Y132F isolates is seemingly inconsistent with this view.6,9 Dedicated studies are required to elucidate the adaptive changes caused by drug resistance.
G307A is another Erg11 mutation that imparts fluconazole resistance. Previous studies indicated that G307, which is well conserved among fungi, lies in the catalytic site of Erg11 and interacts with fluconazole; thus, substitution at this position may interfere with fluconazole binding.17 The allelic substitution G307S has been shown to confer fluconazole resistance in both Candida albicans and Aspergillus fumigatus, consistent with our verification results for G307A in C. parapsilosis.17,18 Unlike activating mutations of Upc2 in C. albicans, the only mutation found in our C. parapsilosis isolates, D394N, is located outside the C-terminal domain. This residue has not been shown to influence ERG11 expression and therefore has no effect on azole susceptibility.17 Indeed, several Upc2 mutations have been identified in C. parapsilosis; however, no association has been found between them and Erg11 expression or azole resistance.6,19 In C. albicans and Candida glabrata, Cdr1 and Cdr2 were confirmed to extrude all azoles, while Mdr1 has only fluconazole as a substrate.20 Accordingly, we found that the area surrounding the predicted TF domain (amino acids 322-486) of the Tac1 protein may be the main region responsible for cross-resistance to azoles in C. parapsilosis. Mutations outside this region tend to be non-functional, except for L263S, which leads only to fluconazole resistance. Interestingly, our results suggested that GOF mutations in Mrr1, the transcription factor of Mdr1, affect multi-azole susceptibility in C. parapsilosis. This specificity is likely endowed by the C. parapsilosis species-specific regulatory function of Mrr1 on the ABC pump Cdr1B.10,15 This dual function may also further enhance the contribution of Mrr1 mutations to fluconazole resistance, making Mrr1 Q625K the mutation yielding the highest MIC increase of any tested in our study. However, these mutations could not fully account for azole resistance in our isolates; therefore, novel resistance-related genes or mechanisms remain to be explored in depth.
In conclusion, this study is the first report of clonal transmission of fluconazole-resistant C. parapsilosis harbouring Erg11 Y132F in China. We demonstrated for the first time that the mutations Erg11 G307A, Mrr1 Q625K, Tac1 L263S, Tac1 D440Y and Tac1 T492M confer resistance of C. parapsilosis to azoles, and we provided quantifiable data on the impact of these mutations on MICs, laying a crucial foundation for the development of molecular methods for rapidly detecting resistance.
Figure 1. Geographical distributions and relationships of microsatellite genotypes among C. parapsilosis isolates. (a) Map detailing the distribution of fluconazole-resistant isolates by province, and the separation time of the clone clusters. (b) Minimum spanning tree showing the relationships of 56 fluconazole-resistant isolates, 14 susceptible dose-dependent isolates and 21 susceptible isolates, obtained from microsatellite genotypes according to fluconazole-susceptibility profiles and partial mutations. (c) Relationships of isolates from different hospitals; only hospitals yielding at least two resistant isolates are labelled. FLC, fluconazole; R, resistant; SDD, susceptible dose-dependent. The grey outer circles indicate the clonal complex, defined as genotypes differing in only one locus.
Figure 2. Association among azole susceptibility, mutations and genotypes. (a) Overview of azole MICs, mutation types and microsatellite genotypes. (b) Venn diagram indicating the distribution of mutations in C. parapsilosis. (c) MIC distributions of mutation combinations for fluconazole and voriconazole; mutations in polymorphisms are not shown. (d) Functional verification of mutations identified as potentially associated with elevated azole MICs in our study. (e) Mutations specifically identified among fluconazole-resistant C. parapsilosis.2-4,6,10,12,15 Red lines indicate mutations functionally validated to confer fluconazole resistance by our study or others, yellow lines indicate mutations that confer SDD, and green lines indicate mutations without a resistance function. FLC, fluconazole; R, resistant; SDD, susceptible dose-dependent; I, intermediate; S, susceptible; Zn DB, Zn(II)2Cys6 DNA-binding domain; TF domain, fungal-specific transcription factor domain; aa, amino acid. *Heterozygous mutation; W-M, the WT azole-susceptible strain ATCC 22019 with the corresponding homozygous mutation introduced; M-W, the azole-resistant clinical isolate reverted to the WT base at the corresponding mutation site. R#, drug susceptibility changed from susceptible or WT to resistant or non-WT; I#, from susceptible to SDD or intermediate (W-M), or from resistant or non-WT to SDD or intermediate (M-W); S#, resistance or non-WT converted to susceptibility or WT; Sb, intermediate converted to susceptibility. a: heterozygous mutation in a clinical isolate reverted to the WT base.
The Role of Nuclear-Encoded Mitochondrial tRNA Charging Enzymes in Human Inherited Disease
Aminoacyl-tRNA synthetases (ARSs) are highly conserved essential enzymes that charge tRNA with cognate amino acids—the first step of protein synthesis. Of the 37 nuclear-encoded human ARS genes, 17 encode enzymes that are exclusively targeted to the mitochondria (mt-ARSs). Mutations in nuclear mt-ARS genes are associated with rare, recessive human diseases with a broad range of clinical phenotypes. While the hypothesized disease mechanism is a loss-of-function effect, there is significant clinical heterogeneity among patients that have mutations in different mt-ARS genes and also among patients that have mutations in the same mt-ARS gene. This observation suggests that additional factors are involved in disease etiology. In this review, we present our current understanding of diseases caused by mutations in the genes encoding mt-ARSs and propose explanations for the observed clinical heterogeneity.
Aminoacyl-tRNA Synthetases and the Mitochondria
Aminoacyl-tRNA synthetases (ARSs) are essential, highly conserved enzymes that ligate tRNA molecules to cognate amino acids, which is the first step of protein synthesis [1,2]. The human nuclear genome encodes 37 ARSs: 18 charge tRNA in the cytoplasm, 17 charge tRNA in the mitochondria, and 2 function in both compartments (specifically, glycyl-tRNA synthetase and lysyl-tRNA synthetase) by encoding two separate protein isoforms [1]. ARS-encoding genes are named by the single-letter code of the associated amino acid, followed by 'ARS' (e.g., AARS for alanyl-tRNA synthetase). Genes encoding ARSs that function specifically in the cytoplasm (or that encode bifunctional ARSs) are noted with a 1 (e.g., AARS1), while genes encoding ARSs that function exclusively in the mitochondria are noted with a 2 (e.g., AARS2).
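The naming convention just described can be captured in a few lines of code. This is a toy mapping for illustration only; the amino acid table is an excerpt, and authoritative gene symbols should be taken from official nomenclature resources.

```python
# Excerpt of amino acid one-letter codes (illustration only)
AA_ONE_LETTER = {"alanine": "A", "aspartate": "D", "glycine": "G",
                 "isoleucine": "I", "leucine": "L", "lysine": "K",
                 "serine": "S", "tyrosine": "Y"}

def ars_gene(amino_acid: str, compartment: str) -> str:
    """Build an ARS gene symbol: one-letter code + 'ARS' + suffix,
    where cytoplasmic (or bifunctional) -> 1 and mitochondrial -> 2."""
    suffix = {"cytoplasmic": "1", "mitochondrial": "2"}[compartment]
    return AA_ONE_LETTER[amino_acid.lower()] + "ARS" + suffix

print(ars_gene("alanine", "cytoplasmic"))     # AARS1
print(ars_gene("alanine", "mitochondrial"))   # AARS2
print(ars_gene("tyrosine", "mitochondrial"))  # YARS2
```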
To perform aminoacylation in the mitochondria, mitochondrial ARSs (mt-ARSs) must be transcribed in the nucleus, translated in the cytoplasm, and imported into the mitochondria (Figure 1A). Mt-ARSs and cytoplasmic ARSs function via a two-step reaction in which a specific amino acid is activated by the ARS using a molecule of ATP, resulting in an aminoacyl adenylate intermediate. Next, the ARS binds to the appropriate tRNA molecule, most often (but not always) via an anticodon recognition domain. Finally, the amino acid is transferred to the acceptor stem, and the charged tRNA is delivered to the protein synthesis machinery (Figure 1B) [3,4]; all of these steps are essential for enzyme function, although in certain cases the order of the steps differs. Of note, mitochondrial glutaminyl-tRNA molecules do not have a dedicated mt-ARS; rather, glutamine aminoacylation occurs via the transamidation of glutamic acid. Here, mitochondrial glutamyl-tRNA synthetase (EARS2) aminoacylates tRNA Gln as Glu-tRNA Gln. Next, the GatCAB complex (composed of three subunits encoded by QRSL1, GATB, and GATC) converts the glutamic acid into glutamine [5]. The primary function of the mitochondria, known as the "powerhouse of the cell", is to generate energy for cells via the production of ATP using oxidative phosphorylation [6]. This pathway uses FADH2 and NADH, generated by processing glucose through glycolysis and the tricarboxylic acid cycle, to generate ATP via a proton gradient created by the oxidative phosphorylation complexes [7]. The mitochondrial genome encodes thirteen proteins, all of which are components of this pathway and are essential for oxidative phosphorylation [6]. The mitochondrial genome also encodes ribosomal RNA subunits and a full set of transfer RNAs, which are charged by mt-ARSs [8,9]. Additionally, mitochondria have secondary functions, including (i) the generation of reactive oxygen species, (ii) the regulation of metabolites, (iii) iron metabolism and heme synthesis, (iv) the biosynthesis of pyrimidines and lipids, and (v) the regulation of the nuclear epigenome [10,11]. It is therefore interesting to consider that mutations in genes important for mitochondrial function may have impacts beyond affecting cellular respiration.
Human Inherited Diseases Associated with Mt-ARSs
Combined, mitochondrial diseases are the most common group of neuro-metabolic disorders [12]. Because mitochondria are dependent on both mitochondrial-and nuclearencoded genes, mitochondrial disease can be caused by mutations in the mitochondrial DNA or by mutations in the nuclear genome [13]. Mitochondrial DNA mutations are inherited maternally, and the associated diseases are often complicated by mitochondrial heteroplasmy, which arises due to the fact that an individual cell may have thousands of mitochondria, each containing 2-10 copies of mitochondrial DNA [13]. Heteroplasmy occurs when a cell has a mixed population of wild-type and mutant mitochondrial DNA, with more severe phenotypes typically associated with a higher percentage of mutant compared with wild-type [14]. Nuclear DNA encodes over 1000 mitochondrial-localized proteins [15], and while the majority of variants in nuclear-encoded mitochondrial genes are inherited in a recessive manner, there are some cases of dominantly inherited mitochondrial disease, such as paragangliomas associated with mutations in SDHC (succinate dehydrogenase complex subunit C) [13,16]. Additionally, some phenotypes can be inherited in both dominant and recessive fashions, such as optic atrophy caused by variants in SSBP1 (single-stranded DNA-binding protein 1) [13,17]. Mitochondrial disease often presents in tissues with high energy demands, including the central nervous system, the cardiovascular system, and the musculoskeletal system, among other tissues [18,19]. Mitochondrial disease is also often associated with diabetes, along with other endocrine disorders [20]. Overall, mitochondrial disease is highly heterogeneous, and clinical phenotypes vary widely depending on which gene is affected.
Given their essential role in the translation of mitochondrial-encoded proteins, it is not surprising that all 17 mt-ARSs have been implicated in human disease [21]. Biallelic variants in genes encoding mt-ARSs are associated with a broad range of clinical phenotypes affecting organ systems with high energy requirements (Table 1) [21]. Many mt-ARSs are associated with central nervous system phenotypes, including encephalopathies and leukoencephalopathies (e.g., DARS2 [22]) [23]. Another commonly affected tissue is the heart, and patients with recessive mt-ARS-associated disease often present with cardiomyopathy (e.g., AARS2 [24]). Clinical phenotypes are often gene- and variant-specific, and they are highly heterogeneous depending on which gene is mutated. Thus far, there have been no cases of dominantly inherited mt-ARS-related disease. It is hypothesized that mt-ARS-associated disease is caused by a loss-of-function effect that severely reduces enzyme function and therefore impairs mitochondrial protein synthesis; a total loss of function would be incompatible with life. However, the diverse roles of mitochondria raise the possibility that defects in mitochondrial translation caused by mt-ARS variants will affect not only oxidative phosphorylation but also secondary mitochondrial functions, causing additional stress on susceptible tissues.

Table 1. mt-ARS genes and associated clinical phenotypes. Acronyms not defined here (CAGSSS, HLASA, HUPRA, and MLASA) are defined in the body of the text.
GARS1: Charcot-Marie-Tooth Type 2 [65]; spinal muscular atrophy [65]; systemic mitochondrial disease, including cardiomyopathy [66]
KARS1: sensorineural hearing loss [67]; Charcot-Marie-Tooth disease, recessive intermediate [68]; optic neuropathy [69]; hypertrophic cardiomyopathy and mitochondrial complex deficiency [70]; microcephaly [71]; leukoencephalopathies [42]
GatCAB complex: lethal metabolic cardiomyopathy [72]; pediatric cardiomyopathy with early-onset brain disease [73]; tachypnea, hypertrophic cardiomyopathy, adrenal insufficiency, hearing loss, and combined respiratory chain complex deficiencies [70]

This review addresses outstanding questions related to the clinical heterogeneity of mt-ARS-associated human diseases. First, a simple impairment of mitochondrial protein synthesis does not explain the variability in clinical phenotypes observed between patients with mutations in different mt-ARSs. Second, the reduced function of a specific mt-ARS does not explain how different variants in that mt-ARS can lead to highly variable clinical phenotypes. Third, the clinical phenotypes associated with mt-ARSs do not directly align with those associated with variants in their respective mitochondrial tRNA genes. Finally, there is evidence that variants in mt-ARSs may signal downstream cellular stress response pathways, which may contribute to disease phenotypes. All of these observations indicate that mt-ARS-associated diseases may arise from multiple factors downstream of the mutated mt-ARS. Exploring these questions more deeply will provide a better understanding of how mt-ARS mutations cause human disease.
Clinical Heterogeneity among Patients with Mutations in Different Mt-ARSs
Since the prevailing hypothesis for the mechanism of mt-ARS-associated disease is a loss-of-function effect and, therefore, a downstream reduction in mitochondrial protein synthesis, one might expect mt-ARS-associated phenotypes to be similar regardless of which locus is mutated. However, some disease phenotypes appear to be specific to a particular mt-ARS and are not observed in patients with mutations in other mt-ARS genes. One example of an mt-ARS associated with a unique clinical phenotype is mitochondrial tyrosyl-tRNA synthetase (YARS2), the only mt-ARS associated with a syndrome characterized by myopathy, lactic acidosis, and sideroblastic anemia (MLASA), which can variably occur along with pancreatic insufficiency [64,74]. YARS2-associated MLASA is heterogeneous in terms of age of onset and severity; some patients experience infantile-onset MLASA that is fatal, while others experience adolescent-onset, progressive MLASA [75]. Another example of a highly specific mt-ARS phenotype is mitochondrial isoleucyl-tRNA synthetase (IARS2), which is associated with a condition characterized by cataracts, growth hormone deficiency, sensory neuropathy, sensorineural hearing loss, and skeletal dysplasia (CAGSSS) [37]. While CAGSSS is not the only phenotype associated with IARS2, other phenotypes are less common.
One possible explanation for these observations is that defects in a given mt-ARS differentially affect the translation of a specific subset of proteins due to the amino acid content [76]. The thirteen mitochondrial-encoded proteins all have different amino acid compositions; for example, MT-ATP6 has nearly three times the isoleucine content compared with that of MT-ATP8 (12.8% vs. 4.4% isoleucine, respectively). Tyrosine content in mitochondrial-encoded proteins ranges from 1% (MT-ATP6) to 6% (MT-ND6), and the most extreme example is valine content, which ranges from 1% (MT-ATP8) to nearly 18% (MT-ND6) [76]. One way to assess this would be to carefully examine and compare patients with mutations in mt-ARSs associated with high amino acid content in the mitochondrial proteome with patients with mutations in mt-ARSs associated with low amino-acid content. For example, the mitochondrial proteome consists of 17% leucine but only 1.6% arginine [76]; as a result, patients with pathogenic variants in LARS2 may be expected to have a more severe disease that affects a broader panel of tissues compared with those of patients with pathogenic variants in RARS2. A second possibility, which will be discussed below, is that certain mt-ARSs may have secondary functions; in this situation, the combined loss of protein synthesis and secondary function could result in distinct phenotypes.
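Computing amino acid content of this kind is straightforward; the sketch below counts residue frequency in a protein string. The demo sequence is invented, and a real analysis would use the 13 mtDNA-encoded protein sequences (e.g., MT-ATP6, MT-ATP8, MT-ND6) from a reference database.

```python
from collections import Counter

def aa_percent(protein_seq, residue):
    """Percentage of a given residue in a protein sequence
    (one-letter amino acid codes)."""
    counts = Counter(protein_seq.upper())
    return 100.0 * counts[residue.upper()] / len(protein_seq)

# Toy sequence standing in for a mitochondrial-encoded protein
demo = "MIILVVYLLIPATKV"
print(f"Ile content: {aa_percent(demo, 'I'):.1f}%")
print(f"Val content: {aa_percent(demo, 'V'):.1f}%")
```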
Clinical Heterogeneity among Patients with Mutations in the Same Mt-ARSs
In addition to clinical heterogeneity among patients with pathogenic variants in different mt-ARS loci, there are cases of diverse phenotypes associated with variants in the same mt-ARS. That is, certain variants in a given mt-ARS can lead to one clinical phenotype, while other variants can lead to a distinct second phenotype. One example of this is AARS2, or mitochondrial alanyl-tRNA synthetase. AARS2 has been associated both with leukoencephalopathy (often in combination with ovarian failure) and separately with hypertrophic cardiomyopathy [24]. These clinical phenotypes are seemingly nonoverlapping. That is, patients with AARS2-related cardiomyopathy have not been reported to have leukoencephalopathy, and those with leukoencephalopathy have not been reported to have cardiomyopathy; in a review of 48 patients, no patients had both cardiomyopathy and neurological conditions [26]. The age of onset of clinical phenotypes in AARS2 patients is also highly variable, ranging from infancy to over 40 years of age, and there does not seem to be an association between specific phenotypes and the age of onset [26].
Another gene associated with an interesting spectrum of clinical phenotypes is mitochondrial seryl-tRNA synthetase (SARS2). Patients with SARS2 variants present with (i) a progressive spastic paresis [53]; (ii) a syndrome characterized by hyperuricemia, pulmonary hypertension, renal failure in infancy, and alkalosis (HUPRA) that is typically lethal within the first few years of life [52]; or (iii) a syndrome that includes both neurological and HUPRA phenotypes [77][78][79]. Interestingly, HUPRA syndrome is exclusively associated with SARS2, providing another example of unique mt-ARS phenotype.
It is unclear why certain mutations in a given synthetase, such as AARS2 and SARS2, lead to clinically distinct phenotypes, especially when the hypothesized mechanism is reduced enzyme function; based on the common role in mitochondrial protein synthesis, one would expect that severely reducing the function of any mt-ARS would result in a similar clinical phenotype. One explanation for the above observations is that the disparate phenotypes are not actually clinically distinct, but rather that the reports are prone to ascertainment bias based on the expertise of the examining physician. For example, if a patient with SARS2 variants primarily sees a neurologist, HUPRA syndrome may be missed if the phenotype is subtle. This explanation would remain in line with a severe reduction of enzyme function if the effect of different genotypes on overall mt-ARS function varies. A related explanation is that different mutations, and therefore different genotypes, may have different effects on protein function; for example, some mt-ARS variants might affect tRNA recognition, while others might alter catalytic activity or mitochondrial localization, leading to a genotype-dependent spectrum of properly charged tRNA in the mitochondria. Alternatively, some mt-ARSs have an editing domain (such as AARS2) that deacylates incorrectly charged amino acids. Thus far, no patients with AARS2-associated, adult-onset leukoencephalopathy have variants in the editing domain, but such variants have been identified in patients with AARS2-associated, infant-onset cardiomyopathy, indicating that certain variants may differentially affect aminoacylation and/or editing [25,80]; interestingly, a variant in the editing domain might produce a phenotype similar to those of variants that increase the likelihood of a given mt-ARS charging the incorrect amino acid via an alternative mechanism (e.g., altering the structure of the amino acid binding pocket). Relatedly, it is possible that some variants result in stably expressed proteins, while others result in proteins that are degraded; in this case, the stable expression of a defective protein might allow some level of function that could modify the clinical phenotype.
Incongruence of Phenotypes Associated with Mt-ARSs and tRNA Pairs
Mutations in mitochondrial tRNAs are also associated with a broad range of human disease phenotypes [81]. Like other pathogenic mtDNA variants, mutations in mt-tRNA genes can display heteroplasmy, further complicating their effects on mitochondrial function, since wild-type and mutant copies can be present in each cell [81]. As a result, the ratio of functional to non-functional mitochondria might vary significantly between patients with the same mt-tRNA mutation, which could lead to differential phenotypic effects. Clinical phenotypes associated with mt-tRNA genes include mitochondrial myopathy, encephalopathy, and stroke-like episodes (MELAS); maternally inherited diabetes and deafness (MIDD); Leigh syndrome; epilepsy; cardiomyopathy; and ataxia [21,81]. Interestingly, the phenotypes associated with mutant mt-ARSs do not always correspond with the phenotypes associated with mutated mt-tRNAs for the same amino acid. In general, mutations in mt-tRNAs have a more global effect on tissues than mutations in mt-ARSs [21].
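To make the heteroplasmy point concrete, the toy simulation below illustrates how two tissues with the same mean mutant load can differ markedly in the fraction of cells crossing a biochemical threshold; the threshold value, the distribution, and all parameters are illustrative assumptions, not values from the cited studies.

```python
import random

# Toy model (assumptions only): per-cell mutant mt-tRNA load varies around a
# tissue mean, and translation is assumed to fail once the load crosses a
# hypothetical threshold. The same mean load can therefore impair very
# different fractions of cells depending on the cell-to-cell spread.

THRESHOLD = 0.80  # assumed mutant fraction above which translation fails
N_CELLS = 10_000

def fraction_impaired(mean_load: float, spread: float) -> float:
    """Fraction of simulated cells whose mutant load exceeds THRESHOLD."""
    impaired = 0
    for _ in range(N_CELLS):
        # Per-cell mutant load drawn around the tissue mean, clamped to [0, 1].
        load = min(1.0, max(0.0, random.gauss(mean_load, spread)))
        if load > THRESHOLD:
            impaired += 1
    return impaired / N_CELLS

random.seed(1)
for mean, spread in [(0.70, 0.05), (0.70, 0.15)]:
    print(f"mean load {mean:.2f}, spread {spread:.2f}: "
          f"{fraction_impaired(mean, spread):.1%} of cells impaired")
```

Under these toy parameters, the wider cell-to-cell spread leaves roughly ten times more cells above the threshold despite an identical mean mutant load, which is one way heteroplasmy could translate into differential phenotypic effects.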
One example of this incongruence is mitochondrial leucyl-tRNA synthetase (LARS2), which is associated with Perrault syndrome [40], a condition that affects the nervous system (leading to sensorineural hearing loss) and the ovaries (leading to premature ovarian failure). Indeed, these two tissue types are typically the only ones affected in patients with pathogenic LARS2 variants. LARS2 has also been associated with HLASA (hydrops, lactic acidosis, and sideroblastic anemia), another rare phenotype unique to LARS2 [41,43]. In contrast, mt-tRNA^Leu mutations are associated with a broader array of clinical phenotypes. Mt-tRNA^Leu was first linked to MELAS [12] but has since been associated with various conditions, including diabetes mellitus and deafness [82], Kearns-Sayre syndrome [83], cardiomyopathy [84], and renal disease [85]. YARS2 is another example of this incongruence; as previously discussed, YARS2 is only associated with the MLASA phenotype. Mt-tRNA^Tyr mutations, however, have been associated with exercise intolerance [86], chronic progressive external ophthalmoplegia (CPEO) with myopathy [87], and focal segmental glomerulosclerosis (FSGS) and dilated cardiomyopathy [88]. While exercise intolerance and myopathy are somewhat consistent with the effect on skeletal muscle shared with MLASA, CPEO and FSGS affect two distinct tissue types (the ocular system and kidneys, respectively) that are not affected by variants in YARS2.
There are two likely explanations for the observation that mutations in mt-tRNAs do not cause the same disease phenotypes as mutations in the corresponding mt-ARS. The first is that certain mutations in a given mt-ARS could lead to phenotypes similar to those associated with the corresponding mt-tRNA mutations, and that such patients simply have not yet been identified. Alternatively, it is possible that a lack of a particular charged mt-tRNA leads to different cellular effects than a deficit in the total amount of that mt-tRNA. For example, mutations in mt-tRNAs might not impact tRNA charging but might instead decrease tRNA binding to the ribosome or other translation factors, leading to a different phenotype than depletion of charged mt-tRNAs. In that case, some undefined mechanism may compensate for insufficient mt-ARS activity, or mt-tRNAs may have other cellular functions, even when uncharged, that are lost when the tRNAs themselves are mutated.
Potential Role of Non-Canonical Mt-ARS Functions in Disease Phenotypes
There is an increasing body of work suggesting that cytoplasmic and mitochondrial ARSs have additional cellular functions aside from aminoacylation [89]. For example, cytoplasmic threonyl-tRNA synthetase (TARS1) has documented roles in angiogenesis [90] and translation initiation [91], and cytoplasmic seryl-tRNA synthetase (SARS1) contributes to regulating angiogenesis [92]. Additionally, many synthetases have nuclear localization signals and play roles in transcriptional regulation [3]. Furthermore, many cytoplasmic synthetases also participate in the multi-synthetase complex, which includes nine synthetases and regulates canonical and non-canonical ARS functions [93,94].
The majority of described non-canonical functions have been for cytoplasmic synthetases; however, it is possible that mt-ARSs also have non-canonical functions. Evidence from experiments that use centrifugation to separate soluble and membrane mitochondrial fractions has shown that certain mt-ARSs (DARS2, RARS2, and the bifunctional KARS1) localize to distinct parts of the mitochondria, suggesting that they have non-canonical functions that are mitochondrial-compartment-specific [95]. Additionally, given that mitochondria perform functions aside from oxidative phosphorylation, it is possible that mt-ARSs contribute to these roles. For example, FARS2 and WARS2 have pro-angiogenic functions [96,97], and TARS2 is required for threonine-dependent mTORC1 activation [98]. Additionally, recent studies of the METTL8 protein, a methyltransferase that modifies mitochondrial tRNAs with 3-methylcytidine at position 32 (m3C32) on mt-tRNA^Thr and mt-tRNA^Ser(UCN), revealed an interaction with mitochondrial seryl-tRNA synthetase (SARS2) via immunoprecipitation of METTL8; interestingly, SARS2 was the only synthetase identified in these experiments, and the interaction was specific to METTL8 rather than to other methyltransferase proteins like METTL6 [99,100]. METTL8 is also part of a nuclear RNA-binding complex that may methylate mRNAs, but it has multiple alternatively spliced transcripts that coordinate its localization to the mitochondria for m3C32 modification [100]. It has been hypothesized that m3C32 modifications are necessary for proper tRNA folding, and there is evidence from overexpression experiments that the dosage of SARS2 can partially modulate the m3C32 modification activity of METTL8 [99,100]. While evidence for non-canonical functions has been described for only a fraction of the mt-ARSs, it is clear that they play essential roles in different cellular functions, and additional research is needed to determine whether other mt-ARSs have non-canonical functions that explain the clinical heterogeneity of mt-ARS-associated human disease.
Downstream Consequences of Mt-ARS Variants on Cellular Stress Responses
Reduced function of ARSs has been linked to cellular stress responses, specifically the integrated stress response (ISR) and the unfolded protein response (UPR), leading to the hypothesis that these pathways contribute to the clinical phenotypes associated with these ARSs. The ISR controls protein synthesis under stress conditions signaled from the endoplasmic reticulum and the cytoplasm [101]. In response to stress signals, the ISR represses global translation while specifically increasing the translation of mRNAs that are capable of responding to stress; if the cellular stress cannot be resolved, this process can trigger apoptosis [102]. The ISR can be activated by different kinases, depending on the type of stress; mTORC1 is activated during mitochondrial stress and signals the ISR, the mitochondrial UPR, and the one-carbon metabolism cycle [103]. The UPR responds to misfolded proteins and other stressors, such as oxidative stress and hypoxia, to maintain mitochondrial protein homeostasis by upregulating the transcription of mitochondrial chaperone proteins and proteases, while the one-carbon metabolism pathway regulates biosynthetic processes, including amino-acid homeostasis [104,105].
Variants in the bifunctional glycyl-tRNA synthetase (GARS1) have been implicated in activating the ISR, and knockdown of the ISR has been shown to modulate dominantly inherited GARS1-related phenotypes [106]. Mitochondrial ARSs have also been connected to cellular stress responses. Mitochondrial aspartyl-tRNA synthetase (DARS2) has been linked to the mitochondrial UPR, as demonstrated by studies in DARS2 conditional knockout mice [107]. The mutant mice developed cardiomyopathy, and a western blot analysis of the stress response transcription factors ATF5 and CHOP confirmed UPR upregulation [107]. Additionally, mice homozygous for a WARS2 mutation showed ISR upregulation in western blots for ATF4 [108]. In sum, it is possible that the induction of cellular stress responses contributes to the observed clinical phenotypes in mt-ARS-associated disease.
Interestingly, in the mouse studies mentioned above, DARS2-associated activation of the UPR was tissue-specific; according to western blot data, the UPR was strongly activated in cardiac tissue but not in skeletal muscle, despite a 60-80% decrease in mitochondrial oxidative phosphorylation complex activity as measured by in-gel activity assays [107]. Similarly, western blot data revealed that the ISR activation observed in the WARS2 mutant mice appeared heart-specific and was not observed in kidney, skeletal muscle, or liver tissue [108]. These data would indicate (a) that certain tissues are more affected by pathogenic mt-ARS variants and/or (b) that certain tissues more readily activate cellular stress response signaling. Both possibilities are consistent with the observation of tissue-specific clinical phenotypes for mt-ARSs. Because cellular stress responses are programmed to activate in instances of tRNA depletion, it is unsurprising that stress response activation would be observed in cases of mt-ARS-related disease. Additional stress response pathways, such as the heat shock response (HSR), which modulates cellular protein folding and degradation in response to stresses including exposure to oxidants, could also play a role in disease etiology [109]. Further investigation is necessary to determine which, if any, cellular stress responses are activated in each mt-ARS-related disease.
Remaining Questions on the Molecular Mechanisms of Mt-ARS-Associated Inherited Disease
Several questions need to be addressed to fully understand the locus, allelic, and clinical heterogeneity and the molecular mechanisms of mt-ARS-associated inherited diseases. While we know that mt-ARSs perform tRNA aminoacylation and, potentially, additional non-canonical functions (Figure 2A), we are still left with questions regarding the pathogenic mechanism(s) that lead to clinical phenotypes (Figure 2B) and how to approach therapeutic development. Addressing these and other questions will improve the ability of clinicians to provide accurate diagnoses and prognoses and to explore therapeutic options for affected patient populations.

[Figure 2 caption fragment: ...and/or impaired non-canonical functions, which reduce overall mitochondrial function and potentially activate cellular stress pathways.]
What Is the Full Range of Clinical Phenotypes Associated with Mt-ARS Disease?
As discussed throughout this review, the diseases associated with pathogenic mt-ARS variants display a wide range of clinical phenotypes, affecting the central nervous system, the cardiovascular system, the musculoskeletal system, and other systems [95]. However, despite a likely shared mechanism of reduced tRNA charging in the mitochondria, multiple observations suggest that additional factors are at play in determining patient phenotypes. These observations include the following: (1) clinical phenotypes are often mt-ARS-specific; (2) clinical phenotypes are often variant-and genotype-dependent for a given mt-ARS; and (3) the clinical phenotypes associated with mt-ARSs do not always match the clinical phenotypes associated with variants in corresponding tRNA genes. Thus, the spectrum of clinical phenotypes associated with mutations in mt-ARSs is likely to expand. As additional pathogenic variants are identified, patient phenotypes should be carefully assessed toward fully annotating the complete spectrum of clinical phenotypes associated with these genes. Broadening and carefully defining this spectrum will provide the basis for research on the mechanisms that underlie tissue-specific and tissue-predominant phenotypes.
How Do Locus and Allelic Heterogeneity Impact Clinical Heterogeneity?
Several examples were presented in this review where different mutations in the same mt-ARS cause distinct clinical phenotypes. One possibility that may explain this observation is that different genotypes leave different amounts of residual enzyme function, and that these functional differences dictate phenotype specificity and severity. To address this, careful biochemical and cellular studies are needed to quantify the precise effect of each mt-ARS mutation on tRNA charging and mitochondrial function. Furthermore, massively parallel mutagenesis studies [110] aimed at identifying all loss-of-function mutations in mt-ARSs (and at quantifying these loss-of-function effects) would expedite patient diagnosis and allow assessment of the effect of each allele and genotype on gene function.
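As a sketch of how such massively parallel (deep mutational scanning) data are commonly scored, the snippet below computes a per-variant log2 enrichment relative to wild type from pre- and post-selection read counts; all variant names and counts are hypothetical toy values, not data from [110].

```python
import math

# Deep-mutational-scanning-style functional score: the log2 change in a
# variant's frequency across selection, normalized to wild type. Scores
# near 0 are wild-type-like; strongly negative scores indicate loss of
# function. All counts here are hypothetical illustrations.

pre_counts  = {"WT": 50_000, "p.R263Q": 1_200, "p.L155P": 900, "p.A77V": 1_000}
post_counts = {"WT": 80_000, "p.R263Q": 1_500, "p.L155P":  60, "p.A77V": 1_700}

def functional_score(variant: str) -> float:
    """log2((post_variant/post_WT) / (pre_variant/pre_WT))."""
    pre_ratio  = pre_counts[variant]  / pre_counts["WT"]
    post_ratio = post_counts[variant] / post_counts["WT"]
    return math.log2(post_ratio / pre_ratio)

for v in ("p.R263Q", "p.L155P", "p.A77V"):
    print(f"{v}: score = {functional_score(v):+.2f}")
```

In this toy example, p.L155P scores strongly negative, the pattern expected for a clear loss-of-function allele, while the other two variants remain near wild type.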
What Additional Functions Do Mt-ARSs Have in the Mitochondria?
As noted, evidence is mounting for non-canonical functions of ARSs [89]. While much of this evidence concerns cytoplasmic ARSs, there is a growing body of work demonstrating that mt-ARSs play additional roles in the mitochondria (e.g., SARS2 [99]). This has significant bearing on the downstream consequences of mutations in any given mt-ARS; while loss of function of any mt-ARS would affect mitochondrial protein synthesis, it may also affect mt-ARS-specific non-canonical functions if the amino-acid residues impacted are important for those functions. For example, loss of SARS2 would cause defects in both mitochondrial protein synthesis and m3C32 tRNA modification, whereas loss of function in another mt-ARS would likely leave m3C32 tRNA modification intact. Such observations could tease apart mt-ARS-specific clinical phenotypes and genotype-phenotype correlations. It is possible that this non-canonical role of SARS2, for example, contributes to the uniqueness of the HUPRA syndrome phenotype; given that HUPRA syndrome has only been associated with SARS2, a loss of SARS2 function may not only lead to defects in mitochondrial translation due to a lack of charged mt-tRNA^Ser but also to a loss of m3C32 on both mt-tRNA^Ser(UCN) and mt-tRNA^Thr. Relatedly, it is possible that certain variants affect only canonical aminoacylation activity and not non-canonical functions, and vice versa, which could contribute to variant-specific phenotypes. Multiple mt-ARSs contain protein domains that are potentially unrelated to canonical functions (e.g., DARS2 has a bacterial extension [111], and SARS2 and VARS2 contain uncharacterized C-terminal sequences [21]), and these domains are good candidates for identifying non-canonical functions. Thus far, the majority of mt-ARS variants tested have demonstrated loss-of-function effects; pathogenic variants that preserve aminoacylation function may also point toward effects on non-canonical functions. Overall, studies to identify potential secondary functions of mt-ARSs will be essential for fully understanding disease mechanisms.
How Do Pathogenic Mt-ARS Variants Affect Cellular Physiology?
Downregulating cytoplasmic and mitochondrial translation has well-defined negative effects on cell biology. For example, cellular stress pathways, including the ISR [106,108,112] and UPR [107], are activated in an attempt to combat these translation defects, and if the defects are not resolved, apoptosis ensues. Thus far, stress response signaling has not been identified in all cases of mt-ARS-related disease. However, given that severely reduced function of any mt-ARS would potentially lead to, for example, a buildup of uncharged tRNA in the mitochondria, activation of a cellular stress response would be expected.
It is also reasonable to hypothesize that other cell signaling pathways could be activated in the context of these pathogenic variants, especially when considering potential secondary functions of mt-ARSs. For example, tRNA modifications play a role in managing cellular stress, and mitochondrial tRNA-derived fragments (tRFs), which are small non-coding RNAs that are often regulated by tRNA modifications, also regulate cellular stress pathways [113]. If mt-ARSs such as SARS2 play a role in tRNA modification, any regulatory pathways managed by such modifications could be disrupted when the synthetase is mutated.
How Do We Develop Therapeutics for Patients with Mt-ARS-Related Diseases?
Current therapeutic approaches for mt-ARS-associated diseases include treatments for general mitochondrial disease and/or the management of specific phenotypes; for example, in a case of SARS2-related HUPRA syndrome, the patient was treated with sildenafil for pulmonary hypertension, allopurinol for hyperuricemia, and α-lipoic acid and coenzyme Q10 for mitochondrial oxidative phosphorylation deficiencies [114]. While these drugs are treating the symptoms of HUPRA syndrome, they are not directly addressing the pathogenic mt-ARS variants. Amino acid supplementation has been used in cases of cytoplasmic ARS-related disease, as there is some evidence that supplementing the amino acid charged by the defective tRNA can improve clinical phenotypes [115]. Thus, it is possible that a similar approach could effectively treat patients with mt-ARS-associated disease. Additionally, in cases where at least one splice-site variant is involved in disease pathogenesis (e.g., DARS2), screens could be performed to identify chemical compounds that alter splicing patterns to support wild-type splicing [116]. In terms of future therapeutics, it is first important to determine exactly how each synthetase (and each variant within each synthetase) causes disease in order to optimize the development of effective treatments.
It is also important to understand how each mutation and genotype affect downstream pathways, which may then be leveraged to develop therapeutics. For example, inhibiting the ISR in a GARS1-associated dominant disease reverses the phenotype in mouse models heterozygous for pathogenic GARS1 variants (missense and in-frame deletion mutations) [106]. It remains to be seen if this is applicable to humans, applicable to all GARS1 variants, and/or applicable to mutations in other synthetases. However, a better understanding of the relationship between defects in mt-ARSs and cellular stress responses could reveal promising therapeutic avenues.
Summary and Concluding Remarks
The literature on mt-ARS biology and related genetic diseases is growing rapidly. We are gaining a broader understanding of the complicated relationship between mt-ARSs and disease, which indicates that pathogenic mechanisms go beyond a "simple" loss-of-function effect. Additionally, emerging evidence suggests that mt-ARSs have non-canonical functions beyond tRNA charging. Thus, to fully understand the etiologies of mt-ARS-associated diseases, the following questions must be addressed: (1) What is the full range of clinical phenotypes associated with mt-ARS disease? (2) How do locus and allelic heterogeneity impact clinical heterogeneity? (3) What additional functions do mt-ARSs have in the mitochondria? (4) How do pathogenic mt-ARS variants affect cellular physiology? and (5) How do we develop therapeutics for patients with mt-ARS-related diseases? Addressing these questions will improve our understanding of mt-ARS-associated disease, improve mt-ARS patient diagnosis and prognosis, and broaden our understanding of the function of mt-ARSs and mitochondrial biology.
Postpartum women’s experiences of social and healthcare professional support during the COVID-19 pandemic: A recurrent cross-sectional thematic analysis
Problem: Disrupted access to social and healthcare professional support during the COVID-19 pandemic has had an adverse effect on maternal mental health.
Background: Motherhood is a key life transition which increases vulnerability to experiencing negative affect.
Aim: To explore UK women's postnatal experiences of social and healthcare professional support during the COVID-19 pandemic.
Methods: Semi-structured interviews were conducted with 12 women approximately 30 days after initial social distancing guidelines were imposed (T1), and with a separate 12 women approximately 30 days after the initial easing of social distancing restrictions (T2). Recurrent cross-sectional thematic analysis was conducted in NVivo 12.
Findings: T1 themes were 'Motherhood has been an isolating experience' (exacerbated loneliness due to diminished support accessibility) and 'Everything is under lock and key' (confusion, alienation, and anxiety regarding disrupted face-to-face healthcare checks). T2 themes were 'Disrupted healthcare professional support' (feeling burdensome, abandoned, and frustrated by virtual healthcare) and 'Easing restrictions are bittersweet' (conflict between enhanced emotional wellbeing and sadness regarding lost postnatal time).
Discussion: Respondents at both timepoints were adversely affected by restricted access to informal (family and friends) and formal (healthcare professional) support, which was not sufficiently bridged virtually. Additionally, the prospect of attending face-to-face appointments was anxiety-provoking and perceived as contradicting social distancing guidance. Prohibition of family from maternity wards was also salient and distressing for T2, but not T1, respondents.
Conclusion: Healthcare professionals should encourage maternal help-seeking and provide timely access to mental health services. Improving access to informal and formal face-to-face support is essential to protecting maternal and infant wellbeing.
National lockdown restrictions have disrupted access to in-person healthcare professional support and dissipated access to sources of structural and social support, a gap which has not been sufficiently bridged by technology. Transitioning to new motherhood amidst the COVID-19 pandemic poses unique perinatal stressors which have negatively impacted birth experience, mother-infant bonding, parenting confidence, and satisfaction with healthcare professional and social support. Such disruption has also contributed towards the experience of parental exhaustion and elevated levels of emotional distress. However, little is known qualitatively about the impact that distinct phases of social distancing restrictions have had on UK mothers.
What this paper adds
This rapid-response research has provided an in-depth understanding of specific psychological, social, and community-level factors which may account for the heightened levels of maternal emotional distress observed during the COVID-19 pandemic. Findings evidence a sustained negative impact of social distancing restrictions on the accessibility and quality of healthcare professional and social support, which has had concerning postpartum consequences, e.g., delays in labour progression and help-seeking avoidance. Findings from the current study have potential applications for revised policy and practice, with an aim to better support maternal wellbeing during the remainder of this, and in future, health crises.
Introduction
New motherhood is a major life course transition which affects various domains of a woman's life and health [1]. Clinical diagnoses of distress, such as anxiety and/or depression, have elevated prevalence in the early postnatal period [2], which may, in part, be attributed to the major life course transitions related to new motherhood [3]. Maternal emotional distress has adverse short- and long-term effects on maternal and infant outcomes, such as a greater risk of complications during labour [4] and decreased development of receptive language and gross motor skills in the first year of an infant's life [5]. Worryingly, a woman's heightened vulnerability towards mental distress may be exacerbated by stressors induced by the Coronavirus [SARS-CoV-2] or 'COVID-19' pandemic [6].
COVID-19 is a novel respiratory disease which was declared a public health emergency of international concern on 30 January 2020 [7]. Although not at increased risk of contracting COVID-19, women in the third trimester of pregnancy and women in the early postnatal period are at greater risk of negative outcomes if they contract the virus, when compared with nulliparous adults younger than 65 years old [8]. Due to growing concerns about COVID-19 transmission and mortality, the UK Government imposed a national lockdown on 23 March 2020 [9].
National lockdown restrictions in the UK involved prohibiting the public from leaving their homes unless for the following purpose(s): shopping for necessities, one form of exercise per day (alone, or with members of the same household), medical necessity, and essential travel for work [9]. UK national lockdown restrictions have also disrupted perinatal access to instrumental and relational support services [10]. Direct lockdown-induced disruptions to perinatal support have included: limited access to in-person health and support services and reduced breastfeeding support from healthcare professionals [11]; discontinued parenting support groups [12]; and reduced access to support from maternal social networks [13]. COVID-19 related stressors may have exacerbated postnatal vulnerabilities to experience emotional distress [14].
Indeed, COVID-19 has had a detrimental effect on maternal mental health outcomes. There is a growing evidence base to suggest that COVID-19 related disruption has been associated with elevated levels of postnatal anxiety and depression, compared with pre-COVID prevalence of mental distress [15]. Fallon et al. [16] examined the psychosocial experiences of new mothers amidst the COVID-19 pandemic, using a large on-line survey. The prevalence of negative social changes consequent on COVID-19 was apparent for relationship satisfaction with one's partner (45%), perceived social support (56%), and satisfaction with healthcare (38%). Currently, there is little existing qualitative literature which has sought to explore women's experiences of social and healthcare professional support during the COVID-19 pandemic, most of which has focused only on the initial UK lockdown restrictions. Qualitative research can offer richer insight into which disruptions, due to imposed social distancing restrictions, have been most impactful on maternal emotional wellbeing. As such, the current study aims to explore postnatal women's experiences of social and healthcare professional support during different phases of COVID-19 related national lockdown restrictions in the UK, using in-depth, recurrent cross-sectional thematic analysis.
Respondents
The current qualitative study was nested within a larger, quantitative study exploring psychosocial experiences of new motherhood during COVID-19 [16]. Respondents who took part in Fallon et al. [16] were debriefed and re-directed to a separate Qualtrics survey. Here, eligible mothers were asked if they would be happy to take part in an audio recorded interview study, so that researchers could gain deeper insight into their experiences of motherhood during different phases of national lockdown restrictions. Eligible mothers were instructed to leave the box blank if they did not wish to take part, or to provide an email address and/or contact telephone number if they were happy to be contacted with more information [LJ].
Potential respondents were selected via a random number generator due to oversubscribed interest (221 and 207 expressions of interest at T1 and at T2, respectively). Those selected were approached and given more information about the current study [LJ]. With verbal consent the potential respondent was then emailed a study information sheet and an anonymous Qualtrics link to provide electronic consent [LJ]. After providing consent, respondents were contacted again to organise a convenient time for interviewing [LJ]. Verbal consent was taken before commencing the interview to ensure that the mother was still happy with their involvement in the current study.
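A minimal sketch of this selection step is shown below; the pool sizes come from the figures reported above, while the ID format and random seed are hypothetical stand-ins.

```python
import random

# Select 12 interviewees per timepoint from the oversubscribed volunteer
# pools described above (221 expressions of interest at T1, 207 at T2).
# IDs and the seed are hypothetical; sampling is without replacement.

random.seed(2020)  # fixed seed so the draw can be reproduced

t1_pool = [f"T1-{i:03d}" for i in range(1, 222)]  # 221 volunteers
t2_pool = [f"T2-{i:03d}" for i in range(1, 208)]  # 207 volunteers

t1_selected = random.sample(t1_pool, k=12)
t2_selected = random.sample(t2_pool, k=12)

print("T1 invitees:", sorted(t1_selected))
print("T2 invitees:", sorted(t2_selected))
```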
Eligibility criteria were consistent for Fallon et al. [16] and for the current study. Eligibility criteria included having given birth to a live infant within the past three months, being over 18 years of age, English speaking, and currently residing in the UK. The last criterion was due to cross-country differences in lockdown restrictions [17]. Attrition rate was 14% and 20% at T1 and T2, respectively. Reasons for drop-out included: lack of available time (1 respondent), failed to attend arranged interview and did not respond to a follow-up e-mail (2 respondents), and failed to respond to 2 separate attempts at e-mail contact, spaced one week apart (2 respondents).
A total of 24 respondents were recruited: 12 at Timepoint 1 (T1; data collection completed 20 May 2020) and a different group of 12 at Timepoint 2 (T2; data collection completed 16 July 2020). T1 respondents were aged between 28 and 41 years (mean age = 33.17 years), and infant age ranged from 2 to 13 weeks (mean = 7.25 weeks). All T1 respondents were married. See Table 1 for T1 demographic information. T2 respondents were aged between 28 and 41 years (mean age = 34.67 years), and infant age ranged from 6 to 14 weeks (mean = 10.5 weeks). All T2 respondents were married. See Table 2 for T2 demographic information.
Ethics
Ethical approvals were sought from and granted by the University of Liverpool Research Ethics Committee on 07 April 2020 (ref: IPHS/7630).
Methods
Timepoint One (T1) interviews commenced approximately thirty days after the introduction of initial social distancing restrictions (23 March 2020; [9]), and Timepoint Two (T2) interviews commenced approximately thirty days after the first easing of social distancing restrictions (11 May 2020; [9]). Individual semi-structured interviews [18] were conducted via telephone or video call [LJ]. Interviews lasted between 30 and 120 min (mean = 53.5 min). All respondents were reimbursed £10 and debriefed approximately one day after being interviewed.
Interview schedules were created with collaborators who had expertise in the field of perinatal mental health [JH, LDP, VF, SAS]. Topics of conversation had a chronological structure so as to conduct an in-depth exploration of experiences through different phases of national lockdown restrictions. T1 interviews involved thinking about how life was before COVID-19, the time around the interview, the future, and general opinions about COVID-19. T2 interviews involved thinking about how life was at the start of lockdown restrictions being implemented on 23 March 2020, the time around the interview, the future, and general opinions about COVID-19. Specifically, each period covered in the topic guide included items on the quality of emotional, informational, and instrumental support received from friends and family, healthcare professionals, and the mother's wider community (local and national government/health services generally). See Supplementary materials for T1 and T2 topic guides.
Audio recordings were transcribed, then uploaded to and analysed in NVivo 12 [LJ]. Transcripts were analysed using thematic analysis [19], which involved six stages: familiarisation with transcripts, generation of initial codes, identification of themes, reviewing themes, defining themes, and report writing [19]. All authors were responsible for refining and identifying themes, following an inductive and consultative approach [20,21].
Analysis followed an adapted, recurrent cross-sectional thematic approach, so that comparisons could be made across timepoints [22]. This adapted approach consisted of two steps: thematic analysis was conducted for T1 and for T2 independently; comparisons between findings from the independent timepoints were then discussed and interpreted in relation to the literature [22]. Data saturation was achieved after analysing eight (T1) and seven (T2) transcripts, determined as the point whereby adding new transcripts did not lead to the identification of new themes [23]. Still, recruitment continued until twelve respondents had been interviewed at each timepoint to ensure that data saturation had been reached, even if presumed to have been achieved earlier [24]. Themes are outlined and discussed in detail with support from only the most illustrative quotations, accompanied by their associated timepoint (i.e., T1 for timepoint 1, and T2 for timepoint 2).

[Notes to Tables 1 and 2: Occupation categories were taken from the ONS [54]; information regarding UK educational levels was taken from Gov.uk [52,53].]
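As an illustration of the saturation stopping rule described above, the sketch below flags saturation at the first transcript that contributes no previously unseen codes; the per-transcript code sets are hypothetical stand-ins for the NVivo codebook.

```python
# Operationalising the data-saturation rule described above: saturation is
# reached at the first transcript that adds no new codes to the cumulative
# codebook. The code sets below are hypothetical illustrations.

transcripts = [
    {"isolation", "lost_time"},            # transcript 1: two new codes
    {"isolation", "virtual_care"},         # transcript 2: one new code
    {"virtual_care", "anxiety_hospital"},  # transcript 3: one new code
    {"lost_time", "anxiety_hospital"},     # transcript 4: nothing new
    {"isolation"},                         # analysed to confirm saturation
]

def saturation_point(code_sets) -> int:
    """1-based index of the first transcript contributing no new codes."""
    seen: set[str] = set()
    for i, codes in enumerate(code_sets, start=1):
        if i > 1 and not (codes - seen):
            return i
        seen |= codes
    return len(code_sets)  # saturation not reached in the available data

print("Saturation reached at transcript", saturation_point(transcripts))
```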
Results
A thematic analysis of the timepoint 1 dataset (n = 12), generated two main themes, with two and three sub-themes, respectively. For the timepoint 2 dataset (n = 12), thematic analysis also generated two main themes, again with two and three sub-themes, respectively. Please see Table 3 for a summary of generated themes and sub-themes, split by interview timepoint.
Timepoint 1: theme 1 - motherhood has been an isolating experience
Most respondents felt that their postnatal experiences had been significantly more isolating and difficult to manage than they speculated they would have been in the absence of social distancing restrictions. Respondents noted that these exacerbated difficulties were due to COVID-19 related disruption to emotional and practical support from maternal social networks. Respondents expressed deep sadness and disappointment at being unable to share their life transition through face-to-face interactions with family, friends, and other new mothers.
Diminished support from family and friends
Most respondents felt the initial lockdown was extremely isolating, due to restrictions on between-household socialising, and fears of spreading and contracting COVID-19: "You haven't got any friends or family that can necessarily come into your home and support you in case, they, you know, also contract it. Carrying it. It's kind of like, one of them, rock and a hard place erm, situation. So, erm. Yeah. It is isolation." (Respondent 1, T1).
This was especially difficult during the early postnatal period whereby women would have otherwise been receiving much needed emotional and practical support from friends and family during this transition to parenthood: "I think week two to four was peak tiredness and then that's the point where I'd really loved either my mum, my mother-in-law, or my own family to sort of step in and be able to help out a bit more." (Respondent 10, T1).
In some cases, this led to frustrations around the perceived unfairness of social distancing restrictions: "The hardest thing in general like, I mean, particularly with my mum, she's been self-isolating. We've been self-isolating…I can't see why we can't see family." (Respondent 11, T1).
One respondent's family and friends lived in a different country, meaning the respondent's support network had decided to temporarily travel to the UK to self-isolate with the respondent during the initial lockdown, helping with practical and emotional needs during the early postnatal period. This respondent was content with the support she had received during the initial lockdown, due to her unusual situation. However, she experienced much anxiety at the thought of family returning home when lockdown restrictions eased, which would result in the loss of her temporarily established support system.
Lost postnatal experience
For women who had had their babies during the initial set of lockdown restrictions, it was not uncommon to express sadness concerning the lost opportunity for extended family to bond with the new baby, and the inability to share infant milestones with family. Indeed, as the lockdown restrictions continued, women found it increasingly difficult not having access to their social support networks: "This is what I'm finding very difficult, is the not being able to see friends and even the family…just other people…I get really sad thinking how the year was supposed to be really good for us." (Respondent 9, T1).
Although technology has been a useful resource in keeping women connected with loved ones, there are limits on the level of intimacy that can be achieved through distanced communication: "We have a very active WhatsApp chat erm and yeah I mean I get a lot of stuff from the internet but it's not-it's not quite the same." (Respondent 10, T1).
Timepoint 1: theme 2 - everything is under lock and key
Frequently changing policies regarding face-to-face healthcare checks were a source of confusion and alienation for interviewees. Respondents also frequently experienced anxieties about attending hospital and GP appointments, which appeared contradictory to national advice for new mothers to 'shield'. Other negative effects of COVID-19 on the quality of healthcare professional support included restrictions on time spent at practices and the need to rely on virtual healthcare. The impact of COVID-19 on healthcare access resulted in many respondents being left with unanswered questions and seeking sources of self-reassurance.
What is 'essential'?
The majority of respondents mentioned a lack of clarity, and associated feelings of distress and frustration, regarding the six- and eight-week postnatal check-ups. Lack of consistency and clarity from healthcare professionals was a source of anxiety and confusion. There was also a sense of apprehension around help-seeking among respondents, expressed as fears that one's concerns were non-essential.
Into the lion's den
Contradictory advice, whereby mothers were told to adhere to national social distancing guidelines whilst also being invited to attend hospital appointments, where the risk of contracting COVID-19 was perceived to be higher, was a source of confusion and distress: "You're told you're vulnerable and you have to isolate but then you still have to go to hospitals or, you know, health facilities for your appointments, which don't feel as safe...it just sends a bit of a mixed message you know? Oh gosh, I'm having to go to the lion's den to have this appointment." (Respondent 9, T1).
There was a general concern about the safety of perinatal appointments being held at hospitals; for one respondent, the associated anxieties had a significant impact in delaying labour contractions. These anxieties also resulted in attempting to delay a planned labour induction in the hope that the mother would be able to have a homebirth instead of being admitted to hospital: "I ended up saying [to healthcare team] well you know, "I don't want to be induced on that particular day, can we postpone this by a week?" so hopefully within a week [baby] would make an entrance all by himself." (Respondent 5, T1).
For those interviewed who did not need to stay in hospital for long after giving birth, this was a great source of relief: "I was just apprehensive seeing how it'd [hospital] become because I just wanted to be home. But luckily, I was discharged the same day...So, I was happy with that, not having to stay in hospital for long." (Respondent 7, T1).
Deprived of care and feeling distant
Several respondents spoke of feeling that available healthcare professional support was time-restricted: "[Midwives] want to get you off the phone as soon as possible 'cause they have such a high number of people they've got to deal with erm so you do feel a little bit rushed." (Respondent 1, T1).
Interviewed women were also dissatisfied with virtual healthcare, which sometimes appeared more like a 'tick-box exercise' than genuine concern for mother or infant: "I did receive a new-born check over the phone er… the GP just rang up and said, "Is he eating okay?" "Any problems?" and I said no. And she said, "Okay". And it just felt quite like…what is the point in that?"
Respondents missed having face-to-face contact with healthcare professionals, and frequently spoke of the importance of having the opportunity to build rapport and to ask questions which they otherwise feared did not warrant medical attention. Women who felt that they often went without the support and reassurance needed in the early postnatal period often resorted to reliance on physical indicators and self-reassurance that infant health was okay.
Timepoint 2: theme 1 - disrupted healthcare professional support
COVID-19 related restrictions on access to healthcare professional support led to T2 respondents feeling burdensome and abandoned. Extra reliance was thus placed on sourcing information from the internet. For women who did source information online, virtual healthcare was perceived as an insufficient replacement for essential face-to-face healthcare appointments, in that virtual care failed to meet maternal and infant wellbeing needs. Consequently, respondents feared the potential consequences of discontinued face-to-face healthcare professional support for infant and maternal wellbeing. Respondents felt considerations had not been made regarding the unique needs of new mothers, such as allowing partners to be present at essential healthcare appointments and during labour. However, mothers also noted the unprecedented pressures which healthcare professionals were under, and appreciated attempts made to extend care where possible.
Diminished care, distress, and desertion
Many respondents at T2 received very little healthcare professional support in the early postnatal period, which exacerbated feelings of loneliness: "Not too much support for people after you've had a baby, really. After you've been discharged from the midwife and the health visitors, that's kind of it. You're on your own." (Respondent 13, T2).
Respondents who felt ill-supported and burdensome about help-seeking were left to find information themselves from internet sources. For such women, concerns were raised about potential exposure to misinformation:
"I didn't have any contact details for health visitors. I didn't feel like I could go to them, either. Just felt like I was being intrusive. So I just used Google all the time, which is good and bad [laughter] because there's a lot of diagnosis things on there that might not be useful." (Respondent 22, T2).
Virtual healthcare was insufficient in meeting postnatal needs and was often perceived as more of a 'tick box' obligation than genuine quality care:
"I've spoken to a health visitor a couple times and it was just like [imaging speaking to health visitor], "I know you've gotta tick a box, but that is really pointless. Wasting my time and yours." (Respondent 19, T2).
Worries were raised concerning potential implications of missed face-to-face healthcare visits to infant safety:
"We had no home visits at all from any health professionals which [sigh] is okay, but you do worry about the fact that the baby's environments aren't being checked…obviously we know it's okay but [laughter] they [health visitors] don't." (Respondent 20, T2).
Another distressing experience for women at T2 was the exclusion of partners from maternity suites, which was perceived as incredibly isolating:
"Obviously you haven't got your husband or his family [on the ward], and it's literally like being in a little prison cell." (Respondent 17, T2).
This was particularly distressing for a respondent who had experienced a previous miscarriage: "We had a couple of scans as well, which my husband wasn't allowed to come along to. Erm and we've had some pregnancy losses in the past, so that was, that was quite difficult, not having that support there." (Respondent 20, T2).
They're doing the best they can
Despite dissatisfaction with quality and availability of healthcare professional support, respondents also recognised the unprecedented circumstances of the pandemic: "I understand they're [healthcare professionals]…trying to protect us and that kind of thing, so I think the things that I would've wanted I think couldn't have been possible." (Respondent 23, T2).
A respondent, whose infant needed to stay in hospital for eight days after birth, found it invaluable that healthcare professionals acted of their own volition to extend support beyond social distancing restrictions: "When [baby] went in for her operation she was in for eight days, you know, she had a six-hour operation, it was quite scary and yet me and my husband weren't allowed to visit her…the staff were great and often turned a blind eye when we were together." (Respondent 22, T2).
Healthcare professionals were praised for their empathic attempts to bridge the gap created by enforced social distancing restrictions on face-to-face services with virtual healthcare: "When the GP prescribed me the surgery, she spent half an hour on the phone to me. She went above and beyond, really, and spoke about her own experiences as a mother. Said she'd been through similar, gave me some websites to look at. So, it's people really acting on their own volition." (Respondent 14, T2).
Timepoint 2: theme 2 - easing restrictions are bittersweet
Eased social distancing restrictions enhanced maternal emotional wellbeing through renewed independence. The ability to interact with loved ones and the re-opening of schools re-instated maternal autonomy and relieved parenting pressures. On the other hand, respondents also grieved a lost maternity period, far removed from the expectations, formed during pregnancy, of sharing infant development milestones with loved ones. To maintain social connections during an otherwise lonely transition to new motherhood, technology was found to be invaluable, but feelings towards it were mixed: mothers were appreciative of having the ability to maintain intimacy with friends and family, but also noted the limitations of technology in maintaining the degree of social connectedness achievable with face-to-face interactions.
Easing restrictions and renewed normality
Initial easing of social distancing restrictions in May 2020 was an invaluable source of improved maternal wellbeing, which was attributed to renewed independence: "We can start going into people's houses now, and that's sweet…things are massively improved…I've got freedom back a little bit more, not to the to the same extent that I would like, but it's certainly erm so much better." (Respondent 15, T2).
Gratitude concerning renewed small freedoms was a source of improved maternal emotional wellbeing for T2 respondents: "Even things like when we [husband and I] were getting our coffees, our little takeaway coffees, we were so grateful for that….we're going to the zoo for the first time on Saturday and we're just really excited." (Respondent 21, T2).
Newly introduced guidelines that allowed lone inhabitants to 'bubble' with another household, and the easing of social distancing restrictions, were both sources of great relief for T2 respondents: "With the support bubble, my brother's on his own so he has been able to come here and stay with us…I have family coming over this weekend… that'll be really nice to see them. So, thank God it's eased a bit. Yeah." (Respondent 24, T2).
Another guideline change which was perceived as an invaluable emotional aid for T2 respondents included the re-opening of schools: "The schools, you know, even though it's just one day a week, they've made…that's all made a really big difference, psychologically. It feels like there's less pressure on you [laughter]" (Respondent 20, T2).
Lost time with Baby
There was a commonly reported sense of sadness in connection with family and friends having lost irretrievable, precious time with their infants: "There's so many things that I feel like we've missed out on in terms of, you know, him [baby] meeting his family and…other than immediate family, no one's even met him, who would have by now…and at this point in time we don't know when they will either." (Respondent 16, T2).
Despite the experienced joy and happiness in being able to see family face-to-face at T2, there was an accompanying, conflicting sense of loss in the realisation that true postnatal 'normality' had not yet been achieved: "My mam came to my back door, and it was heart-breaking. She just had to look at the baby through the window." (Respondent 14, T2).
Disappointment was also felt in the lost time which could have been spent attending parenting classes and interacting with other mothers: "I did NCT [National Childbirth Trust, a charity which provides antenatal classes] so I've been able to do the face-to-face sessions about halfway through and what was really frustrating, and has impacted me now, is towards the end when the pandemic kind of started, my last sort of interaction won't have face-to-face." (Respondent 15, T2).
Other sources of difficulty included lack of parenting support from social networks that would have been accessible in the absence of COVID-19 imposed social distancing restrictions: "Hardest was sort of my parents, who are in their 70's, was them not having the role that they want to have with the baby, because my mum is so hands on, you know? She…was the childcare." (Respondent 19, T2).
Technology: a necessary evil
For many respondents, technology was invaluable in allowing mothers to remain connected with loved ones while face-to-face contact was restricted. Virtual parenting classes were also identified as more difficult to navigate, and less socially engaging, than face-to-face parenting classes: "They've [parenting groups] all been trying to do things on-line, but it just isn't the same. You've gotta be there. It's about the social interaction… and to be honest, I don't really think baby groups are for babies, they're for the mums." (Respondent 21, T2).
Discussion
The current study used recurrent cross-sectional thematic analysis to explore women's experiences of social and healthcare professional support during different phases of the COVID-19 pandemic.
Social support during the COVID-19 pandemic
A source of notable disappointment and grief for respondents at both timepoints was the closure of parenting groups. Parenting groups provide invaluable social and informational resources, which have been associated with improved emotional wellbeing outcomes [25]. Current findings suggest that lockdown restrictions on the accessibility of parenting groups [12] have had an adverse effect on maternal mental health. Strikingly, respondents at both timepoints noted that on-line parenting support groups were an insufficient alternative to face-to-face interactions. The salience of this view at both timepoints, with no improvement in thoughts or feelings over time, emphasises the ineffectiveness of technology in attenuating maternal feelings of isolation and frustration under imposed lockdown restrictions. Although the use of technology has been an important source of support for mothers during the UK national lockdown [11], the re-establishment of face-to-face parenting support groups would seem to be imperative for improving postnatal emotional wellbeing.
T2 respondents found disruptions to healthcare professional support, such as the exclusion of partners and family from maternity wards, exceptionally isolating (see Table 3). Exclusion of social support networks from maternity wards was not identified by T1 respondents, which may have been due to the timing of interviews: three respondents had given birth before social distancing restrictions were implemented, one respondent had had a home birth, and two respondents had been able to have partners present at some hospital appointments prior to social distancing restrictions being implemented. Rapid-response research conducted during the initial phase of the COVID-19 pandemic found that labouring alone increased perceived isolation and frustration [26]. Allowing mothers to be accompanied by a support person throughout all perinatal healthcare appointments, not just during active labour [27], is a promising opportunity to improve maternal emotional wellbeing and satisfaction with healthcare professional support.
Healthcare professional support during the COVID-19 pandemic
At both timepoints, virtual healthcare was perceived as an impersonal 'check box' exercise. Reconfigured healthcare guidance has consisted of flow diagrams [28] and bullet points [29] for healthcare professionals to follow during perinatal mental health and physical check-ups. Evidence from respondent accounts suggests that this skeleton care is overly reductionist and inefficient in supporting postnatal emotional and physical concerns. After prioritising the acute safety of the public (based on careful evaluation of the potential risk of increasing infection rates, balanced against the counteracting effects of vaccination uptake [30]), priority should be placed on reinstating all essential face-to-face perinatal healthcare appointments in hospital and home settings, with an aim to protect infant and maternal wellbeing.
For T2 respondents, insufficient healthcare professional support led to increased reliance on on-line resources to address postpartum questions. Respondents who had sourced information online were concerned about potential exposure to misinformation and false information, and were consequently worried about possible negative impacts on maternal and infant wellbeing. Previous literature has also found that on-line informational resources often contain false information and misinformation [31], which may result in serious infant welfare and maternal mental health concerns going unaddressed, or being unnecessarily minimised or exaggerated. Given the increased reliance on technology for support during the COVID-19 pandemic [11,32], it is essential for healthcare professionals to direct mothers to reputable on-line resources between face-to-face visitations, so as to mitigate the risk of acquiring inaccurate guidance.
Most respondents found attending routine hospital appointments anxiety-provoking. For one respondent, the thought of being transferred to hospital was so anxiety-inducing that labour contractions were slowed. Higher scores on general and pregnancy-specific measures of anxiety are both predictive of greater use of pain relief and a greater need for medical intervention during labour [33]. This is concerning because increased use of medical interventions during birth is related to poorer infant health outcomes [34]. Considering the elevated prevalence of postnatal anxiety observed during the pandemic [16], it is important for healthcare professionals to encourage mothers to reach out about medical and emotional wellbeing concerns, initiate face-to-face conversations about mental health issues, ensure sufficient accessibility of mental health services, and ensure provisions are in place (e.g., wearing Personal Protective Equipment) to reassure mothers about attending essential face-to-face appointments during the remainder of the global COVID-19 pandemic [10,32].
Maternal anxieties concerning COVID-19 are well founded, given the relatively higher risk of COVID-19 related mortality when compared with seasonal influenza [35]. However, this has troubling consequences: the pandemic has seen a reduction in the number of non-COVID-19 related emergency room admissions due to fears of contracting the virus [36]. Such help-seeking avoidance may have adverse downstream health consequences [37]. Notably, caregivers face additional practical barriers to attending healthcare appointments (e.g., work, school) which may compound health anxieties and further reduce appointment attendance [38]. Digital interventions are one possible solution showing utility: they have been effective in reducing postnatal anxiety around parenting practices and improving infant health outcomes pre-pandemic [39]. Future research should aim to examine the feasibility and acceptability of psycho-educational interventions to help reduce maternal anxiety and to dissipate misconceptions about attending essential hospital appointments through the remainder of the global COVID-19 pandemic.
For T1 respondents, lack of information and clarity regarding the six-to-eight-week postnatal check-ups was a source of anxiety and frustration. In contrast, T2 respondents did not discuss lack of clarity in communications regarding the six-to-eight-week health check, which may be an indication that the re-prioritisation of the face-to-face six-to-eight-week checks in April 2020 [40] was effective in supporting postnatal concerns. The six-to-eight-week check allows GPs to assess maternal mental and physical wellbeing after birth and to check infant development and health [40,41]. Inadequate and ineffective communication from healthcare professionals has been linked with dissatisfaction with support and negative emotional outcomes in other domains of postnatal research, e.g., within an infant feeding context [42]. Current findings suggest that lack of clarity surrounding face-to-face health checks during initial lockdown restrictions led to ineffective support for mothers and exacerbated feelings of anxiety and frustration.
Moreover, T2 respondents talked about healthcare professionals acting of their own volition to provide support above and beyond national restrictions. Certainly, social distancing restrictions on healthcare services have been in direct contrast with the moral values and preferred practice of maternity staff (Horsch et al., 2020). Such dissonance between preferred practice and imposed restrictions may have contributed towards the increased prevalence of emotional distress observed among obstetrics and gynaecology employees during the COVID-19 pandemic [43]. Prioritisation of personalised face-to-face care is therefore fundamental for satisfaction with support among both mothers and maternity staff [44,45].
T1 respondents felt that face-to-face health visitation was essential for building the rapport necessary to confide in healthcare professionals about emotional wellbeing difficulties, and T2 respondents feared the potential dangers for mother and infant of missed face-to-face health visitation. The purpose of health visitation is to ensure that the infant's environment is safe, to check on maternal emotional wellbeing, and to assess the baby for conditions which may require further evaluation, e.g., yellow palms and soles as an indicator of potential jaundice [46]. Such home visitations are responsible for an 18% reduced risk of perinatal mortality [46]. Reduced access to in-person healthcare (Horsch et al., 2020), consequently, risks detrimental impacts on maternal and infant wellbeing. Maternal mental health has suffered substantially due to COVID-19 related stressors [6,14,16]. It is therefore concerning that respondents in the current study felt inhibited from seeking support due to the limitations of virtual healthcare arrangements [32]. Essential face-to-face healthcare visitation during the immediate postnatal period should therefore be re-prioritised in this, and similar, crises.
Strengths, limitations, and future directions
The current study offers analyses of data from rapid research in response to the COVID-19 pandemic, providing in-depth insights into the psychological, social, and community factors which may have contributed towards the heightened levels of maternal emotional distress identified in recent quantitative investigations of maternal mental health during the COVID-19 pandemic [16]. Findings from the current study have potential applications in revising policy and practice, with an aim to support maternal wellbeing more effectively during the remainder of this health crisis, and in future crises. Data for the current study were collected in alignment with changing social distancing restrictions [9], which allowed for more accurate recall of lived experiences during different phases of the COVID-19 pandemic. A homogeneous sample of respondents, well matched by age, educational status, and occupation, was recruited. This allowed greater transferability of the study findings.
A limitation of this study is that place of birth (private hospital, NHS hospital, midwifery-led unit, home birth) was not routinely recorded as part of the interviews, meaning that our findings concern the birth experience in general and cannot be linked to place of birth. Although a geographically diverse sample of women was recruited, another limitation of the current study is that participant ethnicity was not routinely recorded. Within the current sample, one participant self-disclosed as being of Black ethnicity and one participant self-disclosed as being of Asian ethnicity. Literature suggests that, pertaining to the COVID-19 pandemic, women from Black, Asian, and Ethnic minority backgrounds experience more adverse emotional wellbeing outcomes [47] and worse health outcomes from contracting the disease [48]. Recent literature shows that Black women frequently perceive COVID-19 guidance as confusing and untrustworthy [49]. Additionally, evidence suggests that there has been an increase in stigma and anti-Asian discrimination during the COVID-19 pandemic due to misplaced blame for the outbreak, which is likely to have had a negative impact on mental health [50]. Future research should therefore seek to explore the psychosocial experiences of mothers from Black, Asian, and Ethnic minority backgrounds, to identify and address ethnicity-specific barriers to support accessibility during this, and similar, crises.
Conclusion
The current study used a recurrent cross-sectional thematic analysis to explore postnatal experiences of social and healthcare professional support during the COVID-19 pandemic, in a UK population of women. Regarding social support, recommendations are made to allow mothers the opportunity to self-isolate with one other major support partner, and to prioritise the re-opening of parental support groups. For healthcare professional support, recommendations are made to prioritise face-to-face healthcare visitation, to improve clarity and consistency of communication regarding changing social distancing restrictions, to allow a support person to attend all necessary hospital appointments, and for healthcare professionals to actively encourage mothers to engage in help-seeking behaviour. Future research should aim to explore the experiences of mothers from Black, Asian, and Ethnic minority backgrounds, and to examine the acceptability and feasibility of a psychoeducation intervention in reducing maternal anxiety concerning the attendance of essential face-to-face hospital appointments during this, and similar, crises.
Author agreement
This article is the authors' original work and has not received prior publication, nor is it under consideration for publication elsewhere. All authors have seen and approved the manuscript being submitted and agree to abide by the copyright terms and conditions of Elsevier and the Australian College of Midwives.
Ethical statement
Ethical approvals were sought from and granted by the
Reinforcement Learning Based Dynamic Function Splitting in Disaggregated Green Open RANs
With the growing momentum around Open RAN (O-RAN) initiatives, performing dynamic Function Splitting (FS) in disaggregated and virtualized Radio Access Networks (vRANs), in an efficient way, is becoming highly important. An equally important efficiency demand is emerging from the energy consumption dimension of the RAN hardware and software. Supplying the RAN with Renewable Energy Sources (RESs) promises to boost the energy-efficiency. Yet, FS in such a dynamic setting, calls for intelligent mechanisms that can adapt to the varying conditions of the RES supply and the traffic load on the mobile network. In this paper, we propose a reinforcement learning (RL)-based dynamic function splitting (RLDFS) technique that decides on the function splits in an O-RAN to make the best use of RES supply and minimize operator costs. We also formulate an operational expenditure minimization problem. We evaluate the performance of the proposed approach on a real data set of solar irradiation and traffic rate variations. Our results show that the proposed RLDFS method makes effective use of RES and reduces the cost of an MNO. We also investigate the impact of the size of solar panels and batteries which may guide MNOs to decide on proper RES and battery sizing for their networks.
I. INTRODUCTION
The ideas of virtualized RANs (vRANs) and functional splits date back to Small Cell Forum's studies on small cell virtualization [1]. Accordingly, vRAN mainly aims to disaggregate Baseband Unit (BBU) and remote radio head (RRU) functionalities and to allow softwarized network functions of BBUs to be hosted on commercial off-the-shelf (COTS) hardware. The new Open Radio Access Network (O-RAN) architecture aims to define open interfaces in disaggregated vRANs such that equipment from multiple vendors can be interoperable [2]. Within the O-RAN architecture, horizontal and vertical splits allow network functions to be handled by either the hardware at the edge cloud, known as the Distributed Unit (DU), or the hardware at the central cloud, i.e., the Centralized Unit (CU).
In O-RAN, function splitting (FS) can be either between physical and virtualized resources, or between DU and CU, yielding many possibilities. Yet, the full potential of FS arises from dynamic splits, where network functions are placed based on varying feedback from the network. The traditional user Quality of Service (QoS) is certainly central to decision making in dynamic FS. For instance, the traffic type, whether it is enhanced mobile broadband (eMBB) or ultra-reliable low-latency communication (URLLC), impacts the FS decision. In addition, energy efficiency plays a key role in a mobile network operator's (MNO) decision-making process due to the tremendous growth in carbon footprint and cost in relation to densification and data demand [3]. Using renewable energy sources (RESs) in telco clouds as an alternative to on-grid energy is a promising approach to reduce the excessive energy consumption from the grid. On the other hand, renewable energy has two critical drawbacks. First, MNOs must store this energy in a battery that has limited capacity; due to the increasing CapEx of RESs and batteries, MNOs need to optimize energy usage [4]. Second, RES output is intermittent and the supply has some level of uncertainty [5]. Therefore, under the variability of the RES supply and of the wireless environment, optimal functional splitting in disaggregated green virtualized RANs introduces a great degree of complexity, which can be best addressed by machine learning techniques, in particular reinforcement learning-based methods [6].
This paper focuses on a novel system model in which the functions are split between a CU and several DUs while using RES as an alternative energy source for the telco cloud. Figure 1 presents this architecture. We propose a reinforcement learning-based (RL) dynamic function splitting (RLDFS) approach and implement the Q-learning and SARSA algorithms. Our motivation for choosing these algorithms originates from their lightweight implementation and their ability to learn dynamically; thus, the studied network can conveniently adapt to continuous changes in traffic and solar radiation. We experiment with various traffic load densities and real solar energy data collected across different seasons to see the impact of seasonal changes in four globally separate geographical areas, which have significantly diverse solar radiation distributions. Our results show that the proposed solution significantly reduces the OpEx of an MNO under various solar radiation and traffic load conditions.
The remainder of this paper is organized as follows. Section 2 summarizes the related work. We define the system model and the cost optimization problem in Section 3. In Section 4, we explain the RLDFS technique and present its performance in Section 5. Section 6 concludes the paper.
II. RELATED WORK
Temesgene et al. proposed the first studies that merge the FS approach with RES usage in a RAN. The authors detail the energy consumption in virtual small cells (vSCs) and implement reinforcement learning (RL) methods to optimize the FS decisions between the vSCs and a macro base station [6], [7]. Meanwhile, Wang et al. propose a heuristic solution for an architecture in which the functions are split between the RRHs and BBUs [8]. In addition, Ko et al. focus on a similar architecture and formulate a constrained Markov decision process model [9]. Nevertheless, all of these studies aim to split the functions between the RRHs and a BBU center.
Larsen et al. provide a broad survey of both static and dynamic FS methods; it is also among the first research works that consider the O-RAN architecture [10]. Furthermore, Alabbasi et al. introduce a study called Hybrid Cloud RAN, in which they aim to jointly minimize the energy and bandwidth consumption in their RAN with a constrained programming solver [11]. Their results highlight the importance of FS decisions for delay and energy consumption minimization. However, their study does not consider RES usage in the RAN.
In our previous study, we provided a mixed-integer programming solver and heuristic solutions for this joint problem [3]. Although the present study builds on that paper and also aims to minimize OpEx, the previous research considered a hybrid C-RAN architecture and included a bin-packing problem arising from function-to-micro-data-center assignment decisions. In contrast to prior work, in this paper we decide the function splits based on traffic type (URLLC/eMBB) tuples, and we focus on a novel O-RAN architecture. Also, we choose two RL techniques, Q-learning and SARSA, as solution techniques.
III. SYSTEM MODEL
We consider a virtualized RAN environment where network functions can be split dynamically between the CU and the DUs, based on the network conditions. This allows for more flexibility than the fixed functional split options provided by 3GPP. Figure 1 illustrates a three-tier vRAN model that employs a radio unit (RU), DUs, and a CU, in addition to being green by having solar power generation capability at the DUs and the CU.
We may classify the network functions into two groups: the user-related functions (URFs) dedicated to specific user data, and the cell-related functions (CRFs) that process the multiplexed URFs in the physical layer [12]. The top tier, the CU, has energy-efficient digital units that execute only URFs. The CU has direct fiber-optic links to the middle-level DUs (r ∈ R). DUs are responsible for both URFs and CRFs. There are two reasons to prevent handling the CRFs at the CU level. First, processing the CRFs at this level needs huge bandwidth allocation on the fiber-optic links [13]. Second, operating these functions at the DUs provides relaxation for the stringent delay requirements [3]. Finally, a set of RUs is connected to their corresponding DU with a point-to-point millimeter-wave or dedicated fiber link [11]. In this architecture, the MNO can deploy the RUs near their corresponding DU to keep the capital expenditure (CapEx) at more economical rates. Furthermore, as DUs are located closer to the users, they provide a latency advantage over CUs for certain types of traffic.
Deciding the optimum splits for the network functions (f ∈ F) is the fundamental goal in this system. The split decisions should be dynamically made for a set of time intervals in a day (t ∈ T) for each traffic type (i ∈ I), such as eMBB or URLLC. The other significant decision problem is related to using two different energy sources at the DU and CU. The first energy source, a solar panel, reduces the OpEx of the MNO with renewable energy; the second one, on-grid energy, acts as a reliable energy source in the case of insufficient green energy. Before formulating the relations between these two types of energy sources, we describe the total energy consumption in the DU ($E^{DU}_{rt}$) and the CU ($E^{CU}_{t}$), determined by eqs. (1) and (2), respectively:

$$E^{DU}_{rt} = E^{DU}_{S} + \sum_{i \in I} \sum_{f \in F} U_{rit}\, a_{ritf}\, E^{DU}_{D} \qquad (1)$$

$$E^{CU}_{t} = E^{CU}_{S} + \sum_{r \in R} \sum_{i \in I} \sum_{f \in F} U_{rit}\, (1 - a_{ritf})\, E^{CU}_{D} \qquad (2)$$

Here, $E^{DU}_{S}$ is the static energy consumption of a DU due to the cooling system and the other idle-mode energy consumption, which does not change with the function split decisions. A CU also has a static energy consumption ($E^{CU}_{S}$) due to its idle-mode energy consumption. Besides, the dynamic energy consumption has three main components. The first component, $U_{rit}$, is the traffic load of traffic type i at DU r in time slot t. The second component, $a_{ritf}$, is a binary decision variable that equals one if URF f of traffic type i is executed at DU r in time slot t. The last one is the dynamic energy coefficient, represented as $E^{DU}_{D}$ and $E^{CU}_{D}$ for the DU and CU, respectively. Lastly, the energy consumption of the CU (eq. (2)) depends on the functions that are not processed in the DUs ($1 - a_{ritf}$) and the traffic loads of these functions ($U_{rit}$). Thus, it is crucial to optimize the FS decisions to reduce the overall energy consumption.
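To make the model concrete, the following minimal Python sketch evaluates eqs. (1) and (2) for a given split decision; all array shapes and coefficient values are illustrative assumptions, not figures from this paper.

```python
import numpy as np

# Illustrative evaluation of eqs. (1) and (2); all numbers are assumptions.
R, I, F = 20, 2, 4            # DUs, traffic types, splittable URFs (assumed sizes)
E_DU_S, E_CU_S = 50.0, 200.0  # static consumption per DU / CU (assumed units)
E_DU_D, E_CU_D = 2.0, 1.0     # dynamic coefficients (CU assumed more efficient)

def energy(U, a):
    """U: (R, I) traffic loads U_rit for one slot; a: (R, I, F) binary split
    decisions a_ritf (1 = URF f of type i runs at DU r). Returns the per-DU
    energies E_DU (shape (R,)) and the total CU energy E_CU (scalar)."""
    E_DU = E_DU_S + E_DU_D * np.einsum('ri,rif->r', U, a)      # eq. (1)
    E_CU = E_CU_S + E_CU_D * np.einsum('ri,rif->', U, 1 - a)   # eq. (2)
    return E_DU, E_CU

# Note: a feasible `a` must also satisfy the single-break constraint of eq. (4).
```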
The relation between the two types of energy sources can be formulated with the following OpEx minimization problem, as in eq. (3). The OpEx of the system is the overall on-grid electricity bill of the CU and the DUs. Note that the energy consumption of the RUs and the cost of solar generation, in terms of investment, are not included in the OpEx calculation, because these costs do not impact the dynamic FS problem.
In this model, we consider three fundamental problems in reducing the energy bill of an MNO. First, we have to reduce the total energy consumption through efficient splitting of the URFs. Second, we have to promote the usage of solar energy ($p^{CU}_{t}$ and $p^{DU}_{rt}$) instead of on-grid energy; hence, the amount of surplus solar energy needs to be minimized and, in relation, the capacities of the solar generation and the batteries need to be selected such that the cost of the MNO is minimized. Lastly, we have to consider the variation of the electricity prices over a day period ($c^{E}_{t}$) and use renewable energy during peak electricity rates.
Minimize:

$$\sum_{t \in T} c^{E}_{t} \left[ \left( E^{CU}_{t} - p^{CU}_{t} \right) + \sum_{r \in R} \left( E^{DU}_{rt} - p^{DU}_{rt} \right) \right] \qquad (3)$$

Subject to:

$$\left( 1 - a_{ritf_x} \right) \sum_{\substack{f_y \in F \\ f_y \ge f_x}} a_{ritf_y} = 0, \quad \forall f_x \in F,\ \forall i \in I,\ \forall r \in R \qquad (4)$$

Here, eq. (4) guarantees that the URF chain is broken only once between the DUs and the CU. Note that each constraint should be satisfied for all time intervals (∀t ∈ T). Assume that function $f_x$ is executed at the CU; then $a_{ritf_x}$ equals zero. After that, the remaining upper-layer functions $f_y$ should also be completed at the CU ($a_{ritf_y} = 0$); otherwise, the product on the left side of the equation would be different from zero.
Equations (5) to (10) regulate the energy usage by the servers in the CU and the DUs. The first two determine the renewable energy in the batteries of the solar generators that sit beside either a DU or the CU:

$$b^{CU}_{t} = b^{CU}_{t-1} + \omega^{CU} G^{CU}_{t} - p^{CU}_{t} \qquad (5)$$

$$b^{DU}_{rt} = b^{DU}_{r(t-1)} + \omega^{DU}_{r} G^{DU}_{rt} - p^{DU}_{rt} \qquad (6)$$

For the CU side (eq. (5)), the current battery energy ($b^{CU}_{t}$) depends on the battery's remaining energy from the previous time interval ($b^{CU}_{t-1}$). This value increases with the size of the solar panel at the CU ($\omega^{CU}$) and the renewable energy generated by a panel of unit size ($G^{CU}_{t}$). On the other hand, the energy in the CU battery decreases by the consumed green energy ($p^{CU}_{t}$). The calculation is similar for the DU side, as given by eq. (6). Meanwhile, the battery capacities ($\beta^{CU}$ and $\beta^{DU}_{r}$) limit the maximum stored renewable energy, i.e., $b^{CU}_{t} \le \beta^{CU}$ and $b^{DU}_{rt} \le \beta^{DU}_{r}$, as stated by eqs. (7) and (8) for the CU and the DUs, respectively. Lastly, the consumed green energy should be lower than the total energy consumption in a given time interval, i.e., $p^{CU}_{t} \le E^{CU}_{t}$ and $p^{DU}_{rt} \le E^{DU}_{rt}$, which is guaranteed by eqs. (9) and (10). Solving this optimization model for each arriving user traffic, and under varying solar conditions, is not practical. Instead, we propose to use reinforcement learning to allow the network to adapt to dynamic conditions.
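As a small illustration of the battery bookkeeping in eqs. (5)-(10), the sketch below applies one slot of the update; the function name, units, and the surplus-is-lost behaviour are assumptions consistent with the "unstored energy" discussed in Section V.

```python
def battery_step(b_prev, omega, G_t, p_t, beta):
    """One-slot battery update per eqs. (5)-(8): charge by the harvested
    energy omega * G_t, discharge by the consumed green energy p_t, and cap
    the level at the capacity beta. Any surplus above beta is lost
    ("unstored" energy); p_t is assumed to also respect eqs. (9)-(10)."""
    b = b_prev + omega * G_t - p_t
    return min(max(b, 0.0), beta)

# Example: a 5 kWh battery at 3.2 kWh, panel factor 2.0, irradiance 0.9,
# 1.5 kWh consumed -> new level 3.5 kWh (all values hypothetical).
print(battery_step(3.2, 2.0, 0.9, 1.5, 5.0))
```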
IV. REINFORCEMENT LEARNING BASED DYNAMIC FUNCTION SPLITTING (RLDFS)
In our model, we use a multi-agent system where the DUs and the CU act as independent agents. The state of a DU agent is given by the tuple $S^{DU}_{t} = \{b^{DU}_{rt}, U_{rit}, t_D\}$, which includes the remaining energy in the DU's battery, the traffic load on this DU, and the time of the day ($t_D = t \bmod T_D$). The state of the CU agent is given by $S^{CU}_{t} = \{b^{CU}_{t}, \sum_{r \in R} U_{rit}, t_D\}$, which includes the remaining energy in the CU's battery, the total traffic load on the network, and the time of the day, respectively. There are two main benefits of including the time of the day as a state parameter. First, we want to combine the impact of solar irradiance and the electricity price tariffs. Second, we want to reduce the state space to improve convergence; otherwise, we would have to specify individual states for different solar data and electricity prices. Combining this approach with the multi-agent system, we limit the state space to a reasonable size, which yields fast convergence rates for the optimal functional split decisions that need to be determined on an hours timescale. The OpEx minimization problem has two important decision variables. The actions of the DU agents are given by the tuple $A^{DU}_{t} = \{a_{ritf}, p^{DU}_{rt}\}$. Meanwhile, the CU agent needs to decide only its renewable energy consumption ($p^{CU}_{t}$); in a real-world scenario, the CU automatically processes the URFs not chosen to be processed at the DUs. Finally, the reward is calculated by eq. (11), in which $O_t$ is the OpEx in time slot t, $\psi$ is the normalization factor, and $\tau$ is the window size.
We solve the OpEx minimization problem with two RL approaches: Q-learning, represented by eq. (12), and SARSA, represented by eq. (13):

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \right] \qquad (12)$$

$$Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \left[ R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \right] \qquad (13)$$

The main difference between these two methods lies in the calculation of the next Q-value: the first one bootstraps with the action that provides the maximum Q-value, while the second one directly takes the next action ($A_{t+1}$) into account when updating the Q-table [14]. Beyond that, the essential details of implementing an RL solution are the choice of the states $S_t$, the actions $A_t$, and the reward calculation $R_{t+1}$.
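The following tabular sketch illustrates the two update rules of eqs. (12) and (13); it is a generic textbook illustration, not the authors' implementation, and the hyperparameter values are assumed. In RLDFS, the state would be the discretized tuple described above and the reward the windowed OpEx-based quantity of eq. (11).

```python
import random
from collections import defaultdict

# Generic tabular sketch of eqs. (12)-(13); hyperparameters are assumed values.
alpha, gamma, epsilon = 0.1, 0.9, 0.1
Q = defaultdict(float)  # Q[(state, action)] -> estimated long-term reward

def epsilon_greedy(state, actions):
    """Pick a random action with probability epsilon, else the greedy one."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_learning_update(s, a, r, s_next, actions):
    # Eq. (12): off-policy target uses the maximum next Q-value.
    target = r + gamma * max(Q[(s_next, b)] for b in actions)
    Q[(s, a)] += alpha * (target - Q[(s, a)])

def sarsa_update(s, a, r, s_next, a_next):
    # Eq. (13): on-policy target uses the action actually taken next.
    target = r + gamma * Q[(s_next, a_next)]
    Q[(s, a)] += alpha * (target - Q[(s, a)])
```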
V. PERFORMANCE EVALUATION
The evaluation setting of our primary use case is shown in Figure 2. We have 1 CU and 20 DUs, each of which has one solar panel and a battery to store harvested renewable energy. There are 10,000 users serviced by each DU. These users generate two types of traffic, i.e., URLLC and eMBB. The URLLC traffic demands low latency; thus, it is processed directly on the DUs [15]. On the other hand, the functions of the eMBB traffic, which creates a ten times larger load than the URLLC traffic, are split between the DUs and the CU.
The users have a daily sinusoidal-shaped traffic load described by eq. (14), in which ϕ is a random value between 3π/4 and 7π/4 that defines the peak hour of the traffic profile, ν = 7 determines the slope of the traffic profile, and n(t) is a random term that produces fluctuations in this traffic profile [16] (a sketch of such a generator is given below). We also apply a yearly load variation to understand the impact of the distinction between the seasons (Figure 3). In addition, we generate different peak hours for each DU to reflect distinctive zones in a city, such as residential, industrial, or shopping areas [4], [17]. Thus, we simulate both temporal and spatial variations of the traffic load in the region of a city.
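A possible generator for such a profile is sketched below; since eq. (14) itself is not reproduced here, the exact functional form (a raised sinusoid sharpened by the exponent ν) is an assumption, with only the ϕ range, ν = 7, and the fluctuation term n(t) taken from the text.

```python
import numpy as np

def traffic_profile(T=24, nu=7, peak=1.0, noise_std=0.05, seed=None):
    """Daily sinusoidal-shaped load with a random peak-hour shift phi in
    [3*pi/4, 7*pi/4] and additive fluctuation n(t). The functional form is
    an assumption: a raised sinusoid sharpened by the exponent nu."""
    rng = np.random.default_rng(seed)
    phi = rng.uniform(3 * np.pi / 4, 7 * np.pi / 4)
    t = np.arange(T)
    base = 0.5 * (1.0 + np.sin(2 * np.pi * t / T + phi))
    load = peak * base ** nu + rng.normal(0.0, noise_std, T)
    return np.clip(load, 0.0, None)  # loads cannot be negative
```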
The PVWatts application calculates the green energy generated ($G_{rt}$) by a solar panel [18]. We use the solar radiation data of four different cities (Stockholm, Istanbul, Cairo, Jakarta) with distinct distributions over a year period; thus, we can investigate the effect of seasonal change in our model. The other energy consumption parameters are given in Table I. The electricity price values are from the Republic of Turkey Energy Market Regulatory Authority (EPDK) variable electricity tariff regulation, with different price policies according to the time of the day, and are calculated based on exchange rates as of September 2020 [19]. Table II details the reinforcement learning parameters. The idea of choosing a 48-hour window for cost minimization is to make decisions that consider the daily variance of traffic loads, harvested energy, and electricity prices; we also want the decisions to extend across consecutive days. We evaluate our RL methods in comparison with two approaches. The first one, called distributed-RAN (D-RAN), processes both URLLC and eMBB packets at the DUs. The second one, centralized-RAN (C-RAN), handles URLLC packets at the DUs so as not to violate delay requirements and transfers eMBB packets to the CU for cost-efficiency. Figure 4 shows the performance of the RLDFS-QL and RLDFS-Sarsa methods for different cities (solar radiation distributions) and traffic rates. These methods outperform the D-RAN and C-RAN approaches in all four cities and under varying traffic intensities (low, medium, high). Besides, our methods perform better with increasing solar radiation rates. Figure 5a provides more insight into the results in Figure 4 by considering the city of Jakarta and a medium traffic load. The proposed RLDFS-Sarsa algorithm can consume a higher amount of renewable energy at noontime; thus, it is the method with the lowest unstored energy in Figure 5b. Meanwhile, the RLDFS-QL algorithm can shift renewable energy usage according to the traffic load and the electricity tariffs. C-RAN fails to use renewable energy efficiently owing to not migrating eMBB packets to the DUs in the case of lower URLLC traffic loads on the DUs. We further investigate the impact of RES and battery sizing, which is shown in Figure 6. As observed, the RL-based methods have lower costs than the other techniques, and their performance improves with larger solar panels and batteries. Adaptation to a larger amount of renewable energy is the main reason for this outcome.
VI. CONCLUSION
In this paper, we introduce a novel RAN concept that combines energy-efficiency with virtualization and will be applicable to future O-RAN deployments. We propose a reinforcement learning-based technique with two different approaches (Q-learning and SARSA) to make dynamic function split decisions among the DUs and the CU. We also formulate an OpEx minimization problem. Our results show that the RL-based solutions make the best use of renewable energy and are cost-efficient. Finally, we present findings on the impact of different solar panel and battery sizes on this network model, which helps to evaluate the feasibility of using RES for an MNO. As future work, we plan to investigate the optimal RES and battery sizing for an MNO.
Rapid Assessment of Di(2-ethylhexyl) Phthalate Migration from Consumer PVC Products
Poly(vinyl chloride) (PVC) is widely used to produce various consumer goods, including food packaging, toys for children, building materials, and cosmetic products. However, despite their widespread use, phthalate plasticizers have been identified as endocrine disruptors, which cause adverse health effects, thus leading to increasing concerns regarding their migration from PVC products to the environment. This study proposed a method for rapidly measuring the migration of phthalates, particularly di(2-ethylhexyl) phthalate (DEHP), from PVC products to commonly encountered liquids. The release of DEHP under various conditions, including exposure to aqueous and organic solvents, different temperatures, and household microwaves, was investigated. The amount of DEHP released from both laboratory-produced PVC films and commercially available PVC products was measured to elucidate the potential risks associated with its real-world applications. Furthermore, tests were performed to evaluate cytotoxicity using estrogen-dependent and -independent cancer cell lines. The results revealed a dose-dependent impact on estrogen-dependent cells, thus emphasizing the potential health implications of phthalate release. This comprehensive study provides valuable insights into the migration patterns of DEHP from PVC products and forms a basis for further research on the safety of PVC and plasticizers.
Introduction
Poly(vinyl chloride) (PVC) is the most produced polymer after polyethylene and polypropylene [1]. It is widely employed to fabricate consumer products, including food and beverage packages, children's toys, plastic bags, automobile interiors, building materials, furnishings (e.g., wallpaper, vinyl flooring, and furniture upholstery), and cosmetic products [2]. However, owing to the inherent rigidity of PVC, plasticizers are typically incorporated to confer flexibility and elasticity for specific applications [3]. More than 3 million tons of plasticizers, particularly phthalate plasticizers, are produced annually globally [4].
Despite their widespread use, PVC and its associated phthalate plasticizers have garnered considerable attention owing to their associated health and environmental risks [5]. Phthalate plasticizers, a significant component that makes PVCs flexible, have been identified as endocrine disruptors, which affect the endocannabinoid system and are directly linked to metabolic syndrome and tissue damage [6][7][8]. Di(2-ethylhexyl) phthalate (DEHP), the most commonly used phthalate ester plasticizer, interacts with estrogen receptor alpha and interferes with the normal hormonal balance, leading to estrogenic effects in the body [9]. These plasticizers can be released into the environment from various PVC products, thus posing a potential threat to human health through inhalation, ingestion, and skin contact [10][11][12][13][14][15]. Their non-covalent attachment to PVC facilitates easy migration, leading to recent efforts to explore the covalent attachment of phthalates to PVC [16]. Studies on DEHP have highlighted its adverse effects, including anti-androgenic effects at high doses (405 mg/kg/day) and subtle effects at lower doses (15 mg/kg/day) [17]. The Food and Drug Administration (FDA) further emphasizes the risks associated with oral exposure to DEHP during gestation (100-200 mg/kg/day), which include neural tube defects, skeletal and cardiovascular malformations, developmental delays, and intrauterine death [18]. However, a recent announcement by the European Food Safety Authority establishes significantly lower limits, setting a tolerable daily intake (TDI) of 50 µg/kg based on the potential for fetal testosterone depression and a TDI of 150 µg/kg based on effects on the liver [19]. DEHP is also found in PVC medical devices, which are subject to scrutiny from European authorities [20], resulting in strict controls over its use.
In addition to health concerns, awareness of the environmental impact of phthalate-based plasticizers originating from PVC products is increasing. Further, as the detection of microplastics in living environments has become more prevalent [21,22], the potential migration of phthalate-based plasticizers from PVC items has raised additional alarms. The harmful nature of fine plastics contributes to the complexity of this issue [23].
Despite these concerns, studies assessing the plasticizer quantity in PVC products in various living environments remain limited. Further, current methods for evaluating the plasticizer content rely on physical and chemical analyses of the melted PVC products to determine the remaining plasticizer content in the solution [24]. However, these methods lack the specificity required to ascertain whether phthalates migrate from consumer products into the body. In response to these challenges, the present study aims to overcome this lack of information by developing a method to rapidly measure the migration of phthalates from PVC products into liquid components commonly used in daily life. By subjecting phthalate-containing products to conditions encountered in living environments and assessing the amount of phthalate leaching, our aim is to identify conditions under which phthalates migrate readily. This research seeks to provide a quick and straightforward method, employing simple equipment such as high-performance liquid chromatography (HPLC), to determine the extent of DEHP leaching from a PVC product in a living environment, thus elucidating the potential risks associated with the use of PVC and its plasticizers in various applications.
Reagents and Materials
Extra-pure-grade DEHP was obtained from Samchun Chemical (Pyeongtaek, Republic of Korea). HPLC-grade n-hexane and glacial acetic acid were purchased from Merck (Darmstadt, Germany). Phosphate-buffered saline (PBS), Dulbecco's modified Eagle's medium (DMEM), RPMI 1640 medium, and fetal bovine serum (FBS) were procured from Corning Cellgro (Manassas, VA, USA). All reagents and buffer solutions were prepared in glass vials and apparatuses to prevent contamination with phthalates.
Preparation of Standard PVC Film
The resin suspension, sourced from Hanwha Chemical (Yeosu, Republic of Korea), was used as the base material for the standard PVC film. To enhance flexibility, DEHP was incorporated into the resin at a ratio of 60 parts of DEHP per 100 parts of PVC. The resulting blend underwent a thorough melting process using a twin-screw extruder. Subsequently, the extruded resin was pelletized and washed to eliminate surface dust and impurities. Initially, the granulated pellets were immersed in a 0.5% non-toxic mild soap solution and stirred thoroughly for 3 min. Following this, the pellets underwent 5 min of washing with running tap water, followed by washing with distilled water for an additional 10 min. Subsequently, the samples were treated with HPLC-grade methanol for 15 s and then dried in an oven at 50 °C for 30 min. The cleaned pellets were then shaped into a film (20 mm × 10 mm × 0.4 mm) using a steel mold operated as a hot press at 170 °C. The molded samples were promptly quenched in a water bath to room temperature. Subsequently, the samples underwent a secondary washing as described above. The molded and rinsed PVC films were then employed for leaching experiments. Furthermore, all glassware used in this study underwent thorough cleaning using a tetrahydrofuran-methanol mixture before use.
Migration of DEHP from PVC Films into Liquids
A two-pronged approach was adopted to investigate the release of DEHP from the PVC films. First, a PVC film produced in the laboratory and designated as the control group served as a benchmark for comparative analysis. In addition, various PVC products procured from a local market were subjected to DEHP elution tests. PVC films were cut into pieces (5 mm × 5 mm, 1 g per piece). Subsequently, various stimulants were used to facilitate DEHP release. Distilled water, saline (PBS), hydrochloric acid (pH 1), sodium hydroxide solution (pH 13), olive oil, ethanol, and acetone were used as representative solvents possibly in contact with PVC products. The samples were submerged in the stimulants (5 mL) for varying exposure times and temperatures. After the removal of the PVC samples, the solutions were preserved in glass vials for subsequent analysis.
High-Performance Liquid Chromatography (HPLC)
HPLC analysis was performed using a Waters HPLC system (Waters Breeze 1525, Etten-Leur, The Netherlands) equipped with a binary pump, an autosampler (Waters 2707), and an ultraviolet-visible detector. Chromatographic separation was achieved using a Symmetry C18 column (150 mm × 4.6 mm; particle size = 5 µm), with a mobile phase consisting of a mixture of 40% methanol and 60% acetonitrile. In each analysis, a sample volume of 10 µL was injected into the HPLC system. The flow rate was maintained at 0.6 mL/min, and all eluents were monitored at 228 nm. All experiments were conducted three times, and the presented data correspond to the average of three replicates. Standard deviation is not shown due to its negligible impact.
Calibration Curve of DEHP for HPLC
DEHP (0.786 mg/mL) was dissolved in acetonitrile (Merck, Darmstadt, Germany) to prepare a 1000 ppm stock solution. Subsequently, the stock solution was diluted to generate a series of standard solutions of varying concentrations: 0, 50, 100, 200, and 500 ppm. A comprehensive calibration curve was constructed for all of these concentrations.
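As an illustration of how such a calibration is applied, the sketch below fits the line and inverts it to quantify an unknown sample; the peak-area values are hypothetical placeholders, with only the 0-500 ppm standard range taken from the text.

```python
import numpy as np

# Illustrative calibration workflow; the peak areas are made-up placeholders,
# with only the 0-500 ppm standard range taken from the text.
conc = np.array([0.0, 50.0, 100.0, 200.0, 500.0])         # ppm
peak_area = np.array([0.0, 1.0e5, 2.1e5, 4.0e5, 1.02e6])  # assumed HPLC areas

slope, intercept = np.polyfit(conc, peak_area, 1)         # linear regression
r = np.corrcoef(conc, peak_area)[0, 1]                    # correlation coefficient

def area_to_ppm(area):
    """Invert the calibration line to quantify DEHP in an unknown sample."""
    return (area - intercept) / slope

print(f"r = {r:.4f}, area 3.0e5 -> {area_to_ppm(3.0e5):.1f} ppm")
```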
Cell Culture and Cytotoxicity Evaluation
MCF-7 and MDA-MB-231 cells were obtained from the Korean Cell Line Bank (Seoul, Republic of Korea). MCF-7 cells were cultured in DMEM supplemented with 5% FBS, whereas MDA-MB-231 cells were cultured in RPMI 1640 medium with 10% FBS. The cells were incubated at 37 °C in a 5% CO2 atmosphere. Since DEHP was not soluble in the media, it was initially solubilized in ethanol and then further diluted with the media; the resulting concentration of ethanol in the media was 0.1%. All of the samples were filtered through a 0.22 µm filter, and the filtered samples were introduced to MCF-7 and MDA-MB-231 cells, which had been cultured to approximately 20% confluence in 96-well tissue culture plates. After 48 h of incubation, the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) assay was performed according to the manufacturer's instructions (Sigma-Aldrich, St. Louis, MO, USA). Cells were also cultured in media containing only 0.1% ethanol and used as a control (n = 4).
Statistical Analysis
For group comparisons, one-way analysis of variance (ANOVA) followed by Tukey's post hoc test was performed using IBM SPSS version 19. Statistical significance was determined at a p-value of less than 0.05 for all tests.
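For readers without SPSS, the same analysis can be sketched in Python as follows; the viability values are placeholders, not study data.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Equivalent of the reported SPSS workflow (one-way ANOVA + Tukey HSD,
# alpha = 0.05); the viability values below are placeholders, not study data.
control = np.array([1.00, 0.98, 1.03, 0.99])
dehp_1nM = np.array([1.28, 1.31, 1.25, 1.36])
dehp_10nM = np.array([1.85, 1.92, 1.88, 1.95])

f_stat, p_val = stats.f_oneway(control, dehp_1nM, dehp_10nM)
if p_val < 0.05:  # follow up only when the omnibus test is significant
    values = np.concatenate([control, dehp_1nM, dehp_10nM])
    groups = ['control'] * 4 + ['1 nM'] * 4 + ['10 nM'] * 4
    print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```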
Results and Discussion
The objective of this study was to provide a standard experimental method to determine the amount of DEHP that migrated from the PVC products. Therefore, first, a calibration curve was prepared by plotting the peak area determined from the chromatogram vs. the DEHP concentration in the range of 0-500 ppm (Figure 1). Next, linear regression was performed, and the correlation coefficient was determined to be 0.9985, thus suggesting a strong relationship between the peak area and DEHP concentration obtained from the HPLC analysis.
The solutions used to stimulate the release of DEHP from the PVC products were chosen based on conditions commonly used in daily life. Note that most foods contain water and/or edible oil. Additionally, acidic solutions, such as vinegar, and alcoholic substances used in various alcoholic beverages are edible solutions. Further, inedible solutions, such as alkaline solutions used in various detergents and acetone used to remove nail polish, are commonly used in living environments.
First, the amount of DEHP released from PVC films produced in the laboratory as standard samples was measured. The aforementioned aqueous solutions were applied to the PVC films under various temperature conditions (−20, 4, 25, and 37 °C), along with an extremely harsh condition realized by an autoclave (121 °C). As listed in Table 1, elution of DEHP from the PVC films was not detected in any of the collected aqueous samples, even when the films were exposed to high temperatures. This finding clearly indicates that DEHP is lipophilic [25].
By contrast, substantial amounts of DEHP were eluted into the organic solvents, depending on the experimental conditions (Table 2). In particular, regarding PVC films exposed to 90% ethanol, DEHP was not detected for 24 h; however, 4.17 ppm and 11.8 ppm of DEHP were eluted from the samples after exposure for 72 h and 1 week, respectively. Exposure to 100% ethanol for 24 h did not yield detectable amounts of DEHP, whereas 11.5 ppm was detected after 72 h of exposure. Further, DEHP release in 100% ethanol was much faster than that in 90% ethanol, thus indicating that higher ethanol concentrations yield faster DEHP release from the PVC films.
No significant DEHP release was observed when the films were exposed to olive oil for 1 week at 25 °C; however, it was detected at 121 °C, though at a much lower level than with ethanol exposure. Because autoclaving is not a condition commonly encountered in daily life, instead of direct autoclaving, the sample was heated in olive oil using a household microwave for 15 s, which yielded a result similar to autoclaving. Interestingly, 69.0 ppm of DEHP was detected after 24 h of exposure to acetone. Note that acetone tends to dissolve PVC films; thus, the experiment was performed only within 24 h to monitor the release from the films and not that from the complete dissolution of the film. Evidently, DEHP elution from the PVC films produced in the laboratory was much higher when organic solvents were used compared with aqueous solutions, even under harsh conditions. Further, DEHP release in organic solvents was much higher with longer exposure times and higher temperatures.

Next, the amount of DEHP released from the PVC consumer products used in daily life was determined. Various commercially available products, including PVC packaging materials, were purchased from local markets, and the release of DEHP from these products was tested. A protective sheet is a versatile film commonly employed to safeguard surfaces, including kitchen tables and wooden furniture. Evidently, the PVC products did not release DEHP when exposed to aqueous solutions at 37 °C for 24 h (Table 3). The samples heated by the microwave at 700 W for 15 s also did not significantly elute DEHP into aqueous solutions, except under strongly alkaline conditions: a protective sheet heated in the microwave released substantial amounts of DEHP under strongly alkaline conditions (pH 13). Thus, microwave exposure could be useful for the rapid testing of whether PVC products can release plasticizers under alkaline conditions. The PVC products used as a protective sheet and book cover roll released substantial amounts of DEHP into olive oil, 90% ethanol, and acetone, even at 37 °C. Surprisingly, the samples heated for 15 s in the microwave exhibited increased DEHP release (Table 3). Essentially, the PVC films purchased from the local market released more plasticizers than those prepared in our laboratory. This difference could be attributed to the use of high-purity raw materials and the production of a limited quantity of film in the laboratory. It is crucial to acknowledge that the amount of plasticizer eluted may vary depending on the purity of the resin and the intricacies of the production process.

Given that phthalate plasticizers bind to estrogen receptors and mimic estrogen action, the presence of this type of endocrine disruptor can be confirmed in estrogen-dependent and estrogen-independent cancer cell lines [9,26]. Note that MCF-7 cells are estrogen-dependent, whereas MDA-MB-231 cells are estrogen-independent [26]. In brief, the PVC products were immersed in 90% ethanol for 24 h, and the eluted plasticizer was used to test the viability of the MCF-7 and MDA-MB-231 cells. Evidently, treatment with standard DEHP or plasticizer eluted from book cover rolls did not affect the viability of estrogen-independent MDA-MB-231 cells. However, standard DEHP and the eluted plasticizer increased the proliferation of estrogen-dependent MCF-7 cells in a dose-dependent manner (Figure 2). The number of MCF-7 cells treated with the eluted DEHP increased by 1.3 and 1.9 times for concentrations of 1 nM and 10 nM, respectively, compared to the control group.
Plasticizers are the most popular plastic additives for enhancing the flexibility and processability of materials; in particular, approximately 90% of them are used in PVC applications [27]. Despite being integral to PVC production globally, phthalate plasticizers face legal restrictions in toys and food packaging in numerous countries owing to heightened environmental awareness and growing social pressure. Thus, alternative plasticizers that meet environmental criteria without compromising the end properties of the products must be developed [28]. This study revealed a notable discrepancy in the amount of plasticizer eluted from consumer PVC products sourced from local markets compared with PVC films fabricated in the laboratory. This variance underscores the potential impact of resin purity and the production process on plasticizer release. The methodological approach employed herein enabled the swift and thorough exploration of DEHP migration under diverse conditions, thus offering insights into the complexities of plasticizer release from PVC products.
Conclusions
In this study, the migration of phthalates, specifically DEHP, from PVC products was found to depend on various environmental conditions. A comprehensive evaluation of the laboratory-produced PVC films and commercially available PVC products revealed distinct patterns of DEHP release, thus emphasizing the role of exposure time, temperature, and solvent type in the migration process. Importantly, the potential health risks associated with phthalate release, particularly in estrogen-dependent cell lines, were highlighted. The methodology reported herein provides a rapid and effective means of assessing DEHP migration under diverse conditions, thus offering insights into plasticizer release from consumer PVC products compared with laboratory-produced films. These findings contribute to the evaluation of the safety of PVC and its plasticizers, essentially highlighting the variability in plasticizer release depending on the source and production process of PVC products. As regulatory scrutiny of phthalates intensifies, this study may provide valuable information to consumers regarding the potential risks associated with the use of PVC in everyday products.
Figure 2. Effects of DEHP eluted from book cover rolls immersed in 90% ethanol on the viability of MCF-7 and MDA-MB-231 cells cultured for 48 h. Standard DEHP solutions were also used for comparison, and cells cultured in media only served as a control. The number of cells was counted after 48 h of culture, and cell viability was determined by comparison to the number of control cells (mean ± standard deviation, n = 4, * p < 0.05 versus control).
Table 1. Amounts of DEHP released from PVC films produced in the laboratory and treated with aqueous solutions under various temperatures and times.
a Not detected (below detection limit). b Autoclaved.
Table 2. Amounts of DEHP released from PVC films produced in the laboratory and treated with organic solvents under various conditions of temperature and time.
Table 3. Amounts of DEHP released from various consumer PVC products depending on various stimulants and treatment conditions (unit: ppm).
a Microwave used (700 W). b Not detected (below detection limit). c Products were partially melted.
Walking to your right music: a randomized controlled trial on the novel use of treadmill plus music in Parkinson’s disease
Background Rhythmic Auditory Stimulation (RAS) can compensate for the loss of automatic and rhythmic movements in patients with idiopathic Parkinson’s disease (PD). However, the neurophysiological mechanisms underlying the effects of RAS are still poorly understood. We aimed at identifying which mechanisms sustain gait improvement in a cohort of patients with PD who practiced RAS gait training. Methods We enrolled 50 patients with PD who were randomly assigned to two different modalities of treadmill gait training using GaitTrainer3 with and without RAS (non_RAS) during an 8-week training program. We measured clinical, kinematic, and electrophysiological effects of both the gait trainings. Results We found a greater improvement in Functional Gait Assessment (p < 0.001), Tinetti Falls Efficacy Scale (p < 0.001), Unified Parkinson Disease Rating Scale (p = 0.001), and overall gait quality index (p < 0.001) following RAS than non_RAS training. In addition, the RAS gait training induced a stronger EEG power increase within the sensorimotor rhythms related to specific periods of the gait cycle, and a greater improvement of fronto-centroparietal/temporal electrode connectivity than the non_RAS gait training. Conclusions The findings of our study suggest that the usefulness of cueing strategies during gait training consists of a reshape of sensorimotor rhythms and fronto-centroparietal/temporal connectivity. Restoring the internal timing mechanisms that generate and control motor rhythmicity, thus improving gait performance, likely depends on a contribution of the cerebellum. Finally, identifying these mechanisms is crucial to create patient-tailored, RAS-based rehabilitative approaches in PD. Trial registration NCT03434496. Registered 15 February 2018, retrospectively registered.
Introduction
The loss of automaticity and rhythmicity of movements in patients with idiopathic Parkinson's disease (PD) has been correlated with the presence of different gait abnormalities, including shuffling steps, gait initiation failure, and freezing of gait [33], which all make gait rehabilitation challenging in these patients [12]. The loss of automaticity and rhythmicity may depend on the impairment of the cerebral mechanisms that generate a regular walking rhythm [28], possibly because of deficient dopamine levels within the cortical-striatal locomotor network [23,24,60]. Indeed, humans synchronize their movements with external rhythmic cues through an innate internal timing process (i.e., rhythmic entrainment) [81]. This process involves different frontoparietal networks, including auditory, premotor, and motor areas [4,8,37], which are connected across complex basal ganglia (BG)-thalamo-cortical and cerebello-thalamo-cortical motor networks, as suggested by connectivity studies showing abnormalities in neural activity and connectivity within frontoparietal networks in patients with PD [4,27,46,62,63,76,79]. Therefore, gait rehabilitation in patients with PD is aimed at restoring the cerebral mechanisms that generate a regular walking rhythm. These patients have been provided with a walking treadmill equipped with rhythmic auditory stimulation (RAS) to improve gait parameters by harnessing the innate internal timing process (i.e., rhythmic entrainment) through external cues [49,51,52,59,69,81,82,85]. Treadmill walking alone has been found to furnish lasting, positive effects on different gait parameters, probably by affecting specific neuroplasticity mechanisms within complex cortical-BG-cerebellar networks [64]. However, auditory cues significantly improve gait parameters [37], probably by providing an external rhythm that bypasses the internal rhythm deficit [49,54] through engaging complex frontoparietal connections based on complex cortico-BG-cerebellar loops [9]. This could compensate for any failure in the mechanisms controlling automatic and rhythmic movement generation [54]. By coupling steps with external auditory cues, it could be possible to form a rhythmic gait by entraining movement patterns, i.e., via frequency locking between two oscillating bodies [49,51,52,84], to support the generation of better gait patterns; rhythmic entrainment; the engagement of automatic timing systems; the planning, performing, and learning of movements; the acquisition of temporal skills; and an increase in motivation [81][82][83][84].
Nonetheless, the neurophysiological mechanisms by which coupling steps with external auditory cues improves gait remain partially unclear [4,13,37]. A better understanding of these mechanisms would allow clinicians to tailor neurologic music therapy-based rehabilitative approaches to the individual patient (i.e., adapt the approach to the underlying neurophysiological basis) to improve the patients' ability to generate a regular walking rhythm [78].
Investigating the changes (increase or decrease) in gait cycle-related, frequency-band-specific electroencephalography (EEG) power (namely, event-related desynchronization (ERD) and synchronization (ERS)) [65,66] and in gait cycle-related, frequency-band-specific coherence (namely, task-related coherence, TRCoh) [19,48] induced by RAS gait training could offer useful information. In fact, the former approach may furnish information on the ongoing activities related to the motor process characteristics coded in the sensorimotor areas, including its kinematics (speeds) and kinetics (motor loads) [17,55]. The latter approach offers useful information on the sensorimotor events related to the dynamic coupling between different brain areas (including the frontal and sensorimotor regions) [18,48], and is thus an indicator of the network activity related to gait cycle generation. Moreover, using EEG is advantageous for capturing gait cycle-related dynamics, as this tool is applicable in a mobile setup and provides good temporal resolution of brain activity. Therefore, ERS, ERD, and TRCoh data could be important for analysing the mechanisms of brain function recovery, as previously shown in post-stroke recovery [10,91].
The aim of our study was to evaluate the efficacy of treadmill gait training combined with RAS in terms of mobility, balance, and gait parameters, correlating EEG changes with behavioral (gait) changes to identify the putative neurophysiological basis underlying gait improvement. To this end, we evaluated α (8-12 Hz) and β (13-28 Hz) frequency range changes in power (as estimated by time-frequency analysis) and coherence (as estimated by TRCoh) within the frontal, centroparietal, and temporal areas induced by treadmill gait training (GaitTrainer3; Biodex, Shirley, NY, US) with and without RAS in a group of patients with PD. We focused our analysis on the α and β rhythms because these are thought to be markers of the progression of the disease, of patients' responses to physiotherapy (including gait training), and of the effects of levodopa on motor symptoms [6,7,18,47,70]; they thus offer potentially useful information concerning gait impairment and responsiveness to treatment in patients with PD.
Trial design
Patients were enrolled in a parallel-group, randomized clinical trial and randomly allocated into either the RAS treadmill group or the non_RAS treadmill gait training group. Regardless of group allocation, all patients were provided with a daily training program consisting of 45 min of conventional overground gait training, 45 min of activities of daily living training and reaching activities in occupational therapy, 45 min of biomechanical training of both the upper and lower limbs, 30 min of speech therapy, and 30 min of rest distributed between the sessions (for a total of 195 min). Then, the individuals were provided with a further 30 min of RAS or non_RAS treadmill training, depending on the group assignment. The daily training program was practiced once a day at the same time of day (from 9 am to 1 pm), five times per week, for eight consecutive weeks. RAS and non_RAS treadmill sessions were performed individually in the same location and supervised by physiotherapists with 2 years of training in RAS. Three to four patients were supervised by each RAS-trained physiotherapist throughout the training period. The subjects were in a clinically ON phase when provided with the training, as per the UPDRS.
Participants
Fifty of the 67 in-patients attending the Robotic Neurorehabilitation Unit of our Institute with a diagnosis of idiopathic Parkinson's disease (according to the UK Brain Bank diagnostic criteria) were rated as eligible to be enrolled in this randomized, assessor-blinded, parallel-group study. The inclusion criteria were as follows: (i) Hoehn and Yahr stage between II and III, Mini-Mental State Examination score > 23, and normal executive function tests [2,41,43,50]; and (ii) no changes in antiparkinsonian drug treatment in the previous 6 months. The exclusion criteria included a history of neoplasms; severe cardiovascular, respiratory, visual, auditory, or musculoskeletal disease; other neurological conditions; and neurologic music therapy in the last 3 months. The clinical-demographic characteristics are reported in Table 1. This study was approved by our local Ethics Committee and retrospectively registered on 15 February 2018 in ClinicalTrials.gov under no. NCT03434496 (https://clinicaltrials.gov/ct2/show/NCT03434496). All participants gave written informed consent to study participation and data publication before enrollment.
Intervention
GaitTrainer3 is a platform that integrates treadmill gait training and RAS. The device is equipped with an instrumented deck that issues acoustic cues to set the exact tempo and rhythm during gait training, and with real-time visual biofeedback that prompts patients to follow their target gait pattern. The device provides online feedback, including step length, speed, and symmetry, to encourage patient progress and monitor performance. Patient footfalls were compared in real time, step by step, to the desired footfalls and documented in a histogram.
Patients were required to walk along with the music "angel elsewhere", which reaches a target tempo of ~120 bpm. The song was presented with its lyrics, and the beat was emphasized with a superimposed salient high-pitch bell sound. The patients were first trained to synchronize their footsteps to the beat of the music, which was adapted to their baseline gait performance; that is, the beat frequency of the RAS (namely, the beat rate of the music) was individually adjusted for each patient starting from the patient's best cadence (gait frequency and stride length). Then, the beat frequency was progressively increased up to the target beat frequency (120 bpm) over the first three to five sessions. This frequency was then maintained for the remaining part of the RAS training. We adopted this intermediate target frequency and RAS setup because it has been shown that using a beat frequency not based on the patient's baseline cadence can worsen step length and gait cadence, especially when the frequency is set too low (60-90 bpm) or too high (> 150 bpm) [42]. Moreover, RAS tasks that are not provided with the explicit instruction to synchronize the walking pace with the beat, that adopt freely chosen music (i.e., not controlled for meter, rhythm, or rate), or that are combined with other cues (e.g., tactile stimuli) can negatively affect gait performance, perhaps because the patient's attention is diverted to additional tasks irrelevant to walking [42].
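As an illustration of the cueing schedule described above, the hypothetical sketch below computes a per-session beat rate that ramps linearly from a patient's baseline cadence to the 120 bpm target over the first sessions and then holds it. The function name, the linear ramp, and the session counts are illustrative assumptions, not details taken from the study protocol.

```python
# Hypothetical sketch of the RAS beat-frequency ramp; parameters are
# illustrative assumptions, not values from the study protocol.

def ras_beat_schedule(baseline_cadence_bpm: float,
                      target_bpm: float = 120.0,
                      n_ramp_sessions: int = 4,
                      n_total_sessions: int = 40) -> list:
    """Linearly increase the RAS beat rate from the patient's baseline
    cadence to the target tempo over the first ramp sessions, then hold."""
    step = (target_bpm - baseline_cadence_bpm) / n_ramp_sessions
    schedule = [baseline_cadence_bpm + step * s
                for s in range(1, n_ramp_sessions + 1)]
    schedule += [target_bpm] * (n_total_sessions - n_ramp_sessions)
    return schedule

# Example: a patient with a baseline cadence of 96 steps/min
print(ras_beat_schedule(96.0)[:6])  # [102.0, 108.0, 114.0, 120.0, 120.0, 120.0]
```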
Outcomes
Outcome measures were assessed before (TPRE) and after (TPOST) the rehabilitation training was complete. The primary endpoint with respect to the clinical efficacy of gait training was the achievement of the minimal clinically important difference (MCID) in the Functional Gait Assessment (FGA) (at least 4 points). As secondary outcomes, we assessed the gait cycle-related brain oscillation changes (α and β ERS/ERD magnitude changes) recorded from the frontal, centroparietal, and temporal pooled electrodes, and the α and β TRCoh recorded from electrode-group pairs, which have been proposed to correlate with the progression of the disease, the response to physiotherapy, and levodopa administration [6,7,18]; they therefore offer potentially useful information concerning gait impairment and responsiveness to treatment in patients with PD. Furthermore, we calculated the results of the UPDRS, the Berg Balance Scale (BBS), the Tinetti Falls Efficacy Scale (FES), the 10-m walking test (10MWT), the timed up-and-go test (TUG), and the gait quality index (GQI) derived from a gait analysis sensor. During the 8-week training period, the patients were asked not to undertake other gait training regimens. The experimenters and those who analyzed the data (different from the first experimenters) were blind to patient allocation. The patient flow procedure is summarized in Fig. 1.
EEG recording and analysis
Brain activity (EEG; μV) was continuously recorded for 10 min while the patient was walking on the GaitTrainer3 in the non_RAS modality, usually 5-10 min after the session started. We used a Brain-Quick System (Micromed; Mogliano Veneto, Italy) equipped with a standard 19-electrode headset. EEG recording occurred during the third to fifth session (depending on when a target gait of 120 bpm was reached) and during the last gait training session. Patients were prohibited from drinking coffee, smoking, and changing their bedtime during the 3 days prior to EEG recording. This was easily checked, as the participants were in-patients. EEG signals were sampled at 512 Hz, band-pass filtered between 1 and 200 Hz using zero-phase finite impulse response (FIR) filters (a high-order filter, order = 7500, to minimize drifts, and a low-order filter, order = 36), referenced to Cz, and notch-filtered at 50 Hz (FIR notch filter, order = 3302) to remove power line noise. Impedances were constantly kept below 5 kΩ for the entire duration of the experiment and data collection. An electro-oculogram (EOG) with a bipolar montage was also collected. Data were pre-processed using EEGLAB. EEG recordings were first visually inspected to identify and remove data affected by prominent artefacts across all recording channels. Then, the data were re-filtered between 8 and 40 Hz, re-referenced to the common average reference, and decomposed into neural and artefactual components using independent component analysis (ICA) with the Infomax algorithm [15].
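The study performed this preprocessing in EEGLAB; as a minimal, hedged sketch of an equivalent chain, the MNE-Python code below reproduces the main steps (band-pass and notch filtering, Cz and then common-average referencing, and Infomax ICA). The file name, EOG channel name, and component count are assumptions for illustration only.

```python
# Illustrative MNE-Python equivalent of the preprocessing chain above;
# file and channel names are hypothetical.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("pd_gait_raw.fif", preload=True)  # hypothetical file

# Zero-phase FIR band-pass (1-200 Hz) and 50 Hz notch, as in the paper
raw.filter(l_freq=1.0, h_freq=200.0, fir_design="firwin", phase="zero")
raw.notch_filter(freqs=50.0)
raw.set_eeg_reference(ref_channels=["Cz"])  # initial Cz reference

# Re-filter 8-40 Hz and re-reference to the common average before ICA
raw.filter(l_freq=8.0, h_freq=40.0)
raw.set_eeg_reference("average")

# Infomax ICA to separate neural from artefactual components
ica = ICA(n_components=15, method="infomax", random_state=0)
ica.fit(raw)
eog_inds, _ = ica.find_bads_eog(raw, ch_name="EOG")  # bipolar EOG channel
ica.exclude = eog_inds
raw_clean = ica.apply(raw.copy())
```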
Continuous data were then segmented into epochs starting from one right heel strike (HS) and ending at the next, so as to capture a complete stride (right, left, and right HS, in order), yielding 428 ± 25 epochs. EEG segmentation was based on data synchronized with the important time points (i.e., start, heel strikes, and end) furnished by a wireless inertial sensor (GSensor, BTS Bioengineering; Milan, Italy) and used to extrapolate gait phase data. The single-trial spectrograms were then time-warped over trials using a linear interpolation function, with the gait data used as milestones for realigning the EEG signals' time axes (i.e., the time points of the right, left, and right HS were warped to 0, 50, and 100% of the gait cycle, respectively) [25,29,67,72,73,88].
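A minimal sketch of the time-warping step, assuming fixed-length stride epochs and a known left heel-strike index: each channel is linearly resampled so that the right, left, and right heel strikes fall at 0, 50, and 100% of a common gait-cycle axis. Function and variable names are illustrative.

```python
# Sketch of linear time-warping of one stride epoch onto a common
# gait-cycle axis; array shapes are illustrative assumptions.
import numpy as np

def time_warp_stride(epoch: np.ndarray, left_hs: int, n_out: int = 200) -> np.ndarray:
    """epoch: (n_channels, n_samples) EEG for one stride, starting at a right
    HS and ending at the next right HS; left_hs: sample index of the left HS."""
    n = epoch.shape[1]
    half = n_out // 2
    # Warp 0..left_hs onto 0-50% and left_hs..end onto 50-100% of the cycle
    first = np.linspace(0, left_hs, half, endpoint=False)
    second = np.linspace(left_hs, n - 1, n_out - half)
    warped_idx = np.concatenate([first, second])
    t = np.arange(n)
    return np.vstack([np.interp(warped_idx, t, ch) for ch in epoch])

# Example: a 19-channel stride of 600 samples with the left HS at sample 310
warped = time_warp_stride(np.random.randn(19, 600), left_hs=310)
print(warped.shape)  # (19, 200)
```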
To assess whether the RAS-induced changes in gait performance resulted in ERS/ERD strength variations in the α and β frequency ranges, we performed a time-frequency analysis related to the phases of the gait cycle [40]. To calculate ERS/ERD as a function of time, we employed a sinusoidal wavelet transform in which the data window length depended inversely on the frequency, providing a better compromise between time and frequency resolution [75]. ERD was defined as the percentage decrease, and ERS as the percentage increase, of power in a specific frequency band relative to the baseline period.
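A hedged sketch of this computation is given below: Morlet wavelets (whose window length shrinks as frequency grows) yield per-frequency power across the stride, which is then expressed as a percentage change from an assumed baseline window; negative values correspond to ERD and positive values to ERS. Epoch shapes, the baseline window, and the cycle counts are assumptions for illustration.

```python
# Sketch of gait-cycle-related ERS/ERD via Morlet wavelets; shapes and
# the baseline window are illustrative assumptions.
import numpy as np
from mne.time_frequency import tfr_array_morlet

sfreq = 512.0
strides = np.random.randn(400, 19, 640)   # (epochs, channels, samples), ~1.25-s strides
freqs = np.arange(8.0, 29.0)              # alpha (8-12 Hz) and beta (13-28 Hz)

# Morlet wavelets: window length shrinks as frequency grows (n_cycles / freq)
power = tfr_array_morlet(strides, sfreq=sfreq, freqs=freqs,
                         n_cycles=freqs / 2.0, output="power")
power = power.mean(axis=0)                # average over strides -> (19, 21, 640)

baseline = power[..., :64].mean(axis=-1, keepdims=True)  # assumed 125-ms reference
ers_erd = 100.0 * (power - baseline) / baseline          # % change: ERD < 0 < ERS
```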
The time-frequency coherence (i.e., the relationship between two non-stationary processes) was computed in terms of TRCoh to investigate inter-regional connectivity during gait (that is, the oscillatory aspects of inter-regional brain activation). TRCoh refers to the steady-state changes in functional connectivity associated with continuous tasks, that is, the ongoing sequential movements of the lower limbs rather than the phasic changes associated with single limb movements across the gait cycle (as is done to compute ERS/ERD). Furthermore, the TRCoh approach eliminates coherences that are not task-related (e.g., those due to volume conduction or driven by the reference, which are equally present during both activation and rest conditions) [19,48,68]. Specifically, we computed the coherence for all possible pooled-electrode pairs for the α and β bands. Coherence values were calculated for each frequency bin as a complex correlation coefficient based on the value of the cross-spectrum for the pooled-electrode pair at a given frequency bin and the values of the autospectra for each electrode pool of the pair. Coherence was then obtained by squaring the magnitude of the complex correlation coefficient (ranging from 0 to 1). Coherences for each frequency bin were summed and divided by the number of frequency bins. Finally, TRCoh was obtained by subtracting the coherence values obtained during rest from those obtained during the corresponding activation conditions. Therefore, positive values indicated TRCoh magnitude increments, whereas negative values indicated TRCoh magnitude decrements.
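The sketch below illustrates this computation under stated assumptions: magnitude-squared coherence (the squared cross-spectrum magnitude divided by the product of the autospectra) is computed per frequency bin for a pooled-electrode pair, averaged across the bins of a band, and the rest-condition value is subtracted from the walking-condition value. The signals here are random placeholders.

```python
# Sketch of the TRCoh computation; pooled-electrode signals are random
# placeholders, and nperseg is an illustrative choice.
import numpy as np
from scipy.signal import coherence

fs = 512.0
rng = np.random.default_rng(0)
frontal_walk, parietal_walk = rng.standard_normal((2, 60 * int(fs)))
frontal_rest, parietal_rest = rng.standard_normal((2, 60 * int(fs)))

def band_coherence(x, y, fs, band=(13.0, 28.0)):
    f, cxy = coherence(x, y, fs=fs, nperseg=1024)  # |Pxy|^2 / (Pxx * Pyy)
    mask = (f >= band[0]) & (f <= band[1])
    return cxy[mask].mean()                         # average over band bins

trcoh_beta = (band_coherence(frontal_walk, parietal_walk, fs)
              - band_coherence(frontal_rest, parietal_rest, fs))
# positive -> task-related coherence increase; negative -> decrease
print(f"beta fronto-parietal TRCoh: {trcoh_beta:+.3f}")
```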
Gait data analysis
A single wireless inertial sensor (GSensor, BTS Bioengineering; Milan, Italy) was fixed to the subject's waist with a semi-elastic belt so as to cover the L4-L5 inter-vertebral space. Gait data were recorded while the patient was walking on the GaitTrainer3 in the non_RAS mode at an individually adapted step cadence, simultaneously with the EEG recording, in two successive runs of 30 s each. The sensor provided acceleration data along the antero-posterior, medio-lateral, and superior-inferior orthogonal axes, which were transmitted to a PC via Bluetooth and analyzed using dedicated software (BTS G-STUDIO). This analysis furnished the gait phase data, including gait speed, step cadence, stride length, gait cycle duration, stance/swing ratio, and the GQI (an overall gait performance score reflecting the grand average of the gait parameters, with an approximate 60:40% distribution of stance:swing phases). All of the parameters were measured before and after gait training.
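As an illustration of how step cadence can be derived from such a trunk-mounted accelerometer, the sketch below detects heel-strike peaks in a simulated antero-posterior acceleration trace and converts the inter-peak intervals to steps per minute. The sampling rate, peak thresholds, and signal model are assumptions, not the specifications of the BTS G-STUDIO pipeline.

```python
# Sketch of cadence estimation from trunk acceleration; sampling rate,
# thresholds, and the simulated signal are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

fs = 100.0                                   # assumed sensor sampling rate (Hz)
t = np.arange(0, 30.0, 1.0 / fs)
acc_ap = np.sin(2 * np.pi * 1.8 * t) + 0.1 * np.random.randn(t.size)  # ~108 steps/min

peaks, _ = find_peaks(acc_ap, height=0.5, distance=int(0.3 * fs))
step_times = np.diff(t[peaks])               # inter-step intervals (s)
cadence_spm = 60.0 / step_times.mean()       # steps per minute
print(f"cadence: {cadence_spm:.1f} steps/min")
```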
Sample size
For the power analysis, we considered the effect of RAS on the FGA as the primary outcome measure at the end of the rehabilitation period. The FGA is a validated measurement of gait-related activities, balance, and gait ability; it has been shown to have good construct validity in patients with PD, to correlate moderately-to-strongly with other balance and gait appraisals, and to predict falls within the subsequent 6 months [90]. We had to modify the outcomes of the protocol (as originally registered) before starting patient recruitment, as we found that the pre-planned endpoints were not sufficient for our purposes according to the evidence from former trials and reviews. According to our experience and data in the literature [5], the required sample size was 25 patients per arm to detect a pre-to-post-treatment MCID in the composite primary outcome (i.e., a difference of at least 4 points, with a standard deviation between 20 and 25% for each group, a two-sided 95% confidence interval, a power of 80%, and a possible drop-out rate of 10%) [3].
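A hedged reconstruction of this calculation is sketched below. Because the absolute FGA standard deviation behind the "20-25%" figure is not reported here, the sketch assumes an SD of 5 points, so that the 4-point MCID corresponds to a standardized effect size of 0.8; under those assumptions the result will not exactly reproduce the study's figure of 25 per arm.

```python
# Hedged sample-size reconstruction; the SD of 5 points is an assumption
# for illustration, so the output need not match the published figure.
import math
from statsmodels.stats.power import TTestIndPower

mcid, assumed_sd = 4.0, 5.0
d = mcid / assumed_sd                                    # Cohen's d = 0.8
n_per_arm = TTestIndPower().solve_power(effect_size=d, alpha=0.05,
                                        power=0.80, alternative="two-sided")
n_with_dropout = math.ceil(n_per_arm / (1 - 0.10))       # inflate for 10% drop-out
print(f"n per arm: {n_per_arm:.1f} -> {n_with_dropout} after drop-out inflation")
```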
Randomization and blinding
Patients were randomly allocated into either the RAS treadmill or the non_RAS treadmill gait training group at a 1:1 allocation ratio. For randomization, sealed envelopes were prepared in advance and marked on the inside with a "+" (RAS treadmill) or a "−" (non_RAS treadmill) by a deputy experimenter (who was not involved in patient management or data analyses). The experimenters who managed the data were blinded to the patients' allocation.
Statistical methods
Normality of the data was assessed using the Shapiro-Wilk test and homogeneity of variance using the Levene test; baseline differences between groups were also assessed. For descriptive purposes, the outcome measures were compared within and between the two groups using the independent-sample t-test or Fisher's test. As we employed an intention-to-treat analysis, we included every subject who was randomized, according to the randomized treatment assignment. For the main analysis (gait training-induced changes), we employed a repeated-measures analysis of variance (ANOVA) with each outcome measure as the dependent variable and group (two levels) and time (two levels) as factors. The factor electrode-pair (six levels) was added for the EEG data analysis. The intraclass correlation coefficients for reliability, their confidence limits, and the effect sizes for clinical outcomes are also provided. Statistical significance was set at p < 0.05, and post-hoc paired t-tests with Bonferroni correction were used. A Spearman correlation test was employed to estimate the correlations between significant EEG changes and gait (behavioral) changes.
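A minimal sketch of this pipeline is given below, assuming a long-format table with columns subject, group, time, and score; pingouin's mixed ANOVA stands in for the repeated-measures group × time model, and the column names, file name, and comparison count are illustrative assumptions.

```python
# Sketch of the statistical pipeline; the data file and column names are
# hypothetical, and the Bonferroni factor assumes two comparisons.
import pandas as pd
import pingouin as pg
from scipy import stats

df = pd.read_csv("outcomes_long.csv")                 # hypothetical file

# Normality and homogeneity of variance at baseline
base = df[df["time"] == "TPRE"]
print(stats.shapiro(base["score"]))
print(stats.levene(base.loc[base["group"] == "RAS", "score"],
                   base.loc[base["group"] == "non_RAS", "score"]))

# Group (between) x time (within) repeated-measures ANOVA
print(pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group"))

# Bonferroni-corrected post-hoc paired t-tests within each group
for g, sub in df.groupby("group"):
    wide = sub.pivot(index="subject", columns="time", values="score")
    t, p = stats.ttest_rel(wide["TPRE"], wide["TPOST"])
    print(g, t, min(p * 2, 1.0))                      # correct for 2 comparisons

# Spearman correlation between EEG and gait change scores would follow:
# rho, p = stats.spearmanr(delta_trcoh, delta_gqi)
```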
Baseline
There were no significant clinical-demographic differences between the two groups (25 patients each) at baseline (Tables 1 and 2). Additionally, there were no significant EEG or gait differences between the groups (all p > 0.1). Indeed, both groups showed a weak GQI paralleled by weak fronto-centroparietal α/β-ERS during double support in the stance phase, centroparietal α/β-ERD during single support in the stance phase, and frontal β-ERD during single support in the swing phase of the gait cycle. Furthermore, we detected low TRCoh values within the β fronto-centroparietal, β temporal, and α fronto-temporal paths.
Clinical outcomes
All patients completed training without reporting any side effects, and none of the patients withdrew from any treatment session, as assessed by the RAS-trained physiotherapists. As we employed an intent-to-treat analysis, we included every subject who was randomized according to the randomized treatment assignment.
Discussion
Our data indicate that RAS training offers additional advantages in terms of overall gait quality, balance, and number and length of strides compared to non_RAS training, as reported in the literature. This finding is important from a rehabilitative perspective, given that poor gait in patients with PD is characterized by an increase in the number of steps [14,20,69,77]. On the other hand, RAS training was not superior to non_RAS training concerning the improvement in gait speed, turning, and stride duration, as formerly reported (Miller et al., 1996; [14,20,43,69,77]), suggesting that these improvements were driven by the rehabilitative program itself rather than by cueing. However, improving gait speed and turning is an important target in PD rehabilitation [14,20], so this finding also represents an important rehabilitative endpoint [14,20,69,77]. The novelty of our study is that we used EEG to reveal putative neurophysiological mechanisms explaining the greater strength of RAS training (using GaitTrainer3) in obtaining clinical improvement compared to an equivalent dose of non_RAS training.
Some previous EEG studies have characterized cortical oscillations related to gait in patients with PD; a few have provided robust data on EEG power, and even fewer have explored functional connectivity [78]. Overall, there is some evidence that cortical activity abnormally increases during gait in patients with PD, likely representing a cortical compensation phenomenon reflecting subcortical (basal ganglia and cerebellum) dysfunction [78]. Unfortunately, there is a paucity of correlation analyses between cortical and behavioral outcomes, and those available have mainly employed functional imaging. In particular, it has been shown that gait impairment is correlated with the deterioration of a fronto-centroparietal network beyond the BG, the level of cortical activity, the increased activity of the prefrontal cortex, and the cortical timing metrics [22,30-32].
However, more data are available from functional neuroimaging studies than from EEG approaches [4,37,78], and there are no significant EEG data related to the aftereffects of RAS gait training. We found that the RAS training-induced gait improvement depended on a stronger entrainment of fronto-centroparietal and fronto-temporal electrode connectivity than was required by non_RAS training, as suggested by the significant correlation between the changes in connectivity measures and the behavioral (gait) indices, and by the greater modulation of frontal and centroparietal α and β power related to specific parts of the gait cycle.
The changes in beta-range connectivity that occurred as part of RAS training were the most important contributors to the observed clinical improvement (as per the clinical-behavioral correlation analysis) and are likely to depend on associative plasticity between the acoustic cues and walking [4,26,56,81,82,92]. Hence, the external pacing cues used in treadmill walking may interact with the mechanical pacing of footfalls on the running belt. Indeed, patients had to walk while synchronizing their footsteps to the salient beats of the music, thus leading to audiomotor integration phenomena mediated through fronto-temporal and fronto-centroparietal pathways ([26,81,82]; Yeterian and Pandya, 1998; [4,56]). This likely allowed the generation of a more physiological and rhythmic gait by integrating implicit and explicit timing mechanisms to compensate for the deterioration of internal pacing [61]. Moreover, the external cueing modality we adopted harnessed implicit timing, which is mostly intact in PD, thus still allowing automatic timing [23]. Finally, the greater fronto-temporal connectivity observed following RAS than non_RAS training is likely to depend on the modulation of β-oscillations among a wide network of auditory, motor, and associative cortices by auditory cueing, thus promoting motor activation patterns [4,23]. Sensorimotor rhythms are finely tuned during gait training and represent timely, selective top-down control from the cortex to subcortical structures (and then to the muscles); they thus serve as a strong promoter of the motoric status quo and as a controller of gait stability and adaptations, sensory processing of the lower limbs, visuomotor integration, and speed, depending on the current motor scenario [34,45]. It has been reported that the spatiotemporal extent of alpha and beta synchronization within fronto-centroparietal and fronto-temporal electrodes is inappropriately increased and its reactivity diminished in PD, mainly owing to BG impairment, reduced dopamine release, and intrinsic cortical excitability abnormalities [23,24,34,36,45,60]. This rhythm deterioration is strongly correlated with the clinical picture at baseline and reflects the inability of patients with PD to modulate their gait cycle according to walking demands, as also suggested by the clinical improvement observed following levodopa treatment and deep brain stimulation (DBS) reported in the literature [34,45]. Therefore, the strong spatiotemporal changes in sensorimotor rhythms observed across the gait cycle (and thus the clinical improvement) obtained by coupling music and gait training may depend on the precise modulation of dopamine release by internal and external timing mechanisms triggered by music, as these allow the fine-tuning of gait cycle parameters according to the motor scenario and motor task demand in a way resembling levodopa and DBS [34,36,45]. However, we can only speculate on the neurophysiological similarities between the effects of music, levodopa, and DBS, as music is less discriminating, targeting the particularly extensive beta synchrony while leaving the other periods undisturbed, compared to levodopa and DBS [34,45]. Nonetheless, it has been reported in healthy participants that the presentation of RAS significantly improved finger-tapping task performance, leading to significantly reduced DA responses in the left ventral striatum [36].
Thus, the potential role of RAS in modulating DA responses should be confirmed in patients with PD, considering the role of dopamine in the enhancement of motor control in PD and the consequent implications for neurorehabilitation.
It has been proposed that cerebello-thalamo-cortical motor networks could compensate for the detrimental changes in BG-thalamo-cortical motor network functions related to internal timing processing [16,63,74,87]. Indeed, there is evidence that temporal rhythmic auditory information may assist compensatory mechanisms through network-level effects, reflected in increased interaction between auditory and executive networks that in turn modulate activity in cortico-cerebellar networks [4]. We hypothesize that the cerebellum contributes to mediating the greater fronto-centroparietal and fronto-temporal electrode connectivity (and the clinical-kinematic improvement) following RAS training than non_RAS training. It has been shown that rhythmic cerebellar stimulation by means of oscillatory transcranial currents delivered at frequencies resembling an intrinsic musical tempo largely shapes frontoparietal connectivity and the sensorimotor rhythms related to the fine regulation of gait parameters [57,58]. Therefore, it is likely that the cerebellum contributes to internal timing mechanisms when properly stimulated by rhythmic external cues, or at least by acoustic cues. Nonetheless, the involvement of the cerebellum in RAS needs to be further studied to better characterize its neurophysiological basis, including which cue typology is required and the rhythm specificity. In fact, over-activation of the cerebellum may worsen gait, as suggested by studies of non-invasive cerebellar stimulation in PD [53].

Another main finding of our study is that α frequency range fronto-temporal connectivity was involved only in the RAS group. This functional connectivity is strongly linked to cognitive performance in PD, as it deteriorates in parallel with cognitive decline [34]. Moreover, the potentiation of fronto-temporal connectivity is also important in motor and cognitive rehabilitation [22]. Given that α deterioration is a marker of the degeneration of the ascending diffuse projection systems that control attention [34], a key advantage of using music as an external cue is that it increases the attention level, as reflected by the low variability of the outcome measures following RAS and the consequent improvement in patient participation and performance.
Limitations
The main limitation of our study is the lack of a follow-up period. However, it has been shown that patients with PD who are provided with cued gait training do not retain the obtained clinical improvement after 3 months [13]. This probably depends on the progression of neurodegeneration and on impaired implicit learning in patients with PD. It is likely that retention could be promoted by long-term, less intensive, home-based rehabilitation; future investigations are needed to verify this issue. Further, future work should include an examination of the EEG changes that occur during over-ground walking and not just walking on a treadmill. Moreover, our time-frequency analysis focused only on the alpha and beta frequency bands; the roles of the other frequency bands deserve further investigation.
Another limitation is that the patients received extended daily rehabilitation training for 8 weeks. Therefore, the observed changes may not reflect only the differences between RAS and non_RAS training. This issue also deserves further investigation with different control groups.
We found declines in fronto-parietal connectivity at baseline, whereas it has previously been reported that patients with PD show an increase in cortico-cortical functional connectivity (which may reflect a compensatory mechanism to overcome motor-cognitive limitations). However, this over-connectivity was found to be limited to the early stages of the disease, whereas our patients had a disease duration of approximately 10 years.
Finally, it would be interesting to test whether the effect of RAS on patients' gait parameters depends on whether the patient appreciates the music, in comparison to the effect of a single musical piece chosen identically for all patients.
Conclusion
Our data suggest that RAS may be a useful add-on gait rehabilitation strategy in PD, as auditory cueing can specifically target motor cortical beta frequency range synchrony during steady-state treadmill walking in patients with PD. This modulation sustained the greater clinical improvement following RAS gait training than non_RAS gait training. This extensive oscillatory recruitment may represent a bypass of the damaged internal pacing circuitry by a broader network encompassing the cerebellum and different cortical areas. Therefore, the brain could recalibrate its internal pacing mechanisms by harnessing the rich sensorimotor feedback signals provided by the music-gait coupling.
Obtaining a better understanding of the neurophysiological mechanisms underlying the cortical control of cued gait in patients with PD may provide us with information that would allow us to design interventions targeting such cortical mechanisms using, e.g., transcranial magnetic stimulation, transcranial alternating current stimulation or, as in our study, cueing strategies. Targeting the functional connectivities along fronto-centroparietal/temporal electrodes and the α and β rhythms related to specific parts of the gait cycle may be an important issue in the motor rehabilitation of patients with PD when aiming to mitigate walking disturbances in these patients. In other words, identifying the neurophysiological mechanisms underlying RAS-induced gait improvement may help clinicians to develop patient-tailored rehabilitative approaches based on the selective impact of cues on gait parameters, thus making gait training highly individualized and optimizing its efficacy.
The Human Mucosal Mycobiome and Fungal Community Interactions
With the advent of high-throughput sequencing techniques, the astonishing extent and complexity of the microbial communities that reside within and upon us have begun to become clear. Moreover, with advances in computing and modelling methods, we are now beginning to grasp just how dynamic our interactions with these communities are. The diversity of these communities and of their interactions, both within the community and with us, depends on a multitude of factors, both microbial- and host-mediated. Importantly, it is becoming clear that shifts in the makeup of these communities, or in their responses, are linked to different disease states. Although much of the work to define these interactions and links has investigated bacterial communities, there has recently been significant growth in the body of knowledge indicating that shifts in the host fungal communities (mycobiome) are also intimately linked to disease status. In this review, we will explore these associations, along with the interactions between fungal communities and their human and microbial habitat, and discuss the future applications of systems biology in determining their role in disease status.
Introduction
With the global burden of fungal diseases rising, researchers have begun to turn to next-generation sequencing (NGS) technology to investigate the role fungi play in the spectrum of human health and disease. At the forefront of this advancement is the "Superorganism" hypothesis, where humans are considered to be complex organisms made up of numerous mutually independent smaller organisms (i.e., bacteria, fungi, viruses, archaea) and their genomes. This group of microbial cells and their genomes are collectively referred to as the human microbiota and microbiome, respectively. Over the past decade, the bacterial portion of the microbiome has been well characterized in a number of health and disease states of man, including Type 2 diabetes [1-3], liver cirrhosis [4], colon cancer [5], rheumatoid arthritis [6], and inflammatory bowel disease [7-9]. In contrast, research into the mycobiome (the fungal proportion of the microbiome) has received less attention, such that the field of mycobiome research is still in its infancy.
There are currently several common challenges facing microbiome and mycobiome researchers. First, irrespective of their biomass, fungi account for a relatively small percentage of the human microbiome compared to their bacterial counterparts [10,11]. Second, similar to what we have seen with bacteria, the isolation of nucleic acids from fungal cells can be problematic, and often requires a combination of enzymatic, chemical and mechanical lysis steps [12]. Third, the ability to discriminate between fungal taxa is influenced by sequencing primer choice and, finally, curated databases for taxonomic assignment and/or the annotation of fungal genomes are lacking or are incomplete [13,14]. It is against this backdrop that a number of authors have begun to unravel the mystery of the human mycobiome.
Akin to the microbiome, the human mycobiome has been shown to play an integral role in the pathology of health and disease in man [15]. In fact, changes to the mycobiome have been shown to play vital roles in the modulation of the host immune response [16,17], disease progression [16], the maintenance of microbial population structures [18], as well as metabolic functioning of the host [18].
This review aims to explore the current status of human mucosal mycobiome research, focusing on the gastrointestinal tract.
Studying the Mycobiome
The advancements we have seen in high-throughput NGS technology over the past decade have dramatically changed the landscape against which we study the mycobiome. From traditional, culture-based methodologies, we have moved towards the use of amplicon-based technologies that target fungal-specific house-keeping genes, which allow researchers to identify both cultivatable and non-cultivatable fungal species in a wealth of environmental samples. These fungal house-keeping genes are situated within the fungal ribosomal RNA (rRNA) gene cluster, and include the 18S rRNA, 5.8S rRNA and 28S rRNA genes, as well as the internal transcribed spacer regions (ITS1 and ITS2) [19]. Similar to what has been seen with the 16S rRNA gene in amplicon-based bacterial microbiome studies [20], there is currently a lack of consensus between authors regarding which genetic target offers the best level of taxonomic and phylogenetic resolution [13], and as such several alternative primer sets exist that target different regions of these fungal genes (Cui et al. [19] give a good overview of the different fungal rDNA primers used in mycobiome studies to date). Confounding this issue in mycobiome studies is the lack of completely sequenced and annotated fungal genomes that can be used for taxonomic identification. Current fungal rRNA databases routinely used to assign fungal taxonomy in microbiome studies include UNITE [21] for ITS, SILVA [22] for fungal 18S and 28S rRNA genes, and RDP [23] for fungal 28S rRNA genes.
Unlike the field of microbiome research, mycobiome studies tend not to use shotgun metagenomic sequencing approaches. As metagenomic approaches simultaneously sequence all of the genetic material within a sample (host, bacterial, fungal, archaeal, etc.), they have the potential to generate both taxonomic and functional information. However, this technique relies on considerable computational power and is limited by the inclusion of bacterial, fungal and archaeal genes in reference catalogs [24]. In fact, in a current metagenomic reference catalog used for studying gut microbial populations [24,25], only 0.1% of the 3.3 million reference genes were reported to be of eukaryotic origin [24]. Until we overcome the limitation posed by the lack of fungal reference genes in these catalogs, the true potential of mycobiome research using metagenomic approaches cannot be fully realized.
Mucosal Mycobiomes in Health and Disease
There is mounting evidence linking the host's mucosal microbiomes to the modulation of host immunity. One's ability to untangle the complex interactions between the microbiota, mycobiota and immune response at a given body site begins with developing an understanding of which microbes frequently call these mucosal niches home. A summary of our current knowledge of the mycobiota and microbiota that colonize the oral cavity and the lower gastrointestinal tract (GIT) in states of health is given in Figure 1.

(Figure 1. Mycobiota and microbiota colonizing the oral cavity [26] and gastrointestinal tract [27] of study participants.)
The Oral Mycobiome
The concept of a "core healthy oral mycobiome" was introduced in 2010 by Ghannoum and colleagues when they characterized the oral mycobiome of 20 healthy adults [28]. In this study, interrogation of ITS1F/ITS2 sequences identified a total of 85 fungal genera within the oral cavity, 11 of which related to non-culturable fungal genera [28]. Although the exact number of fungal genera in the oral cavity varied between participants (range 5-39), a core set of genera was identified in the oral cavities of more than 20 percent of study participants: Candida (75%), Cladosporium (60%), Aureobasidium (50%), Aspergillus (35%), Fusarium (30%), and Cryptococcus (20%). The high prevalence of Candida in the oral cavity is consistent with previous culture-based studies, and subsequent molecular studies confirmed the high prevalence of Candida spp. within the oral cavity, reporting Candida albicans, Candida parapsilosis and Candida dubliniensis as the most abundant oral Candida species [26,29,30].
The constituents of the "core healthy oral mycobiome" were refined in 2014, when Dupuy and colleagues identified only eight of the key oral mycobiome genera originally classified by Ghannoum et al. in their healthy saliva samples [29]. This highlights that, although a healthy core oral mycobiome may exist, the overall abundance and diversity of fungal taxa may be somewhat individualised. One of the most interesting aspects of this study was the report of a relatively high abundance (13-96%) of Malassezia within the oral cavity of the entire study cohort, which is in contrast to previous studies that failed to identify Malassezia spp. at all [28,29]. Although subsequent molecular studies are yet to confirm the reports of Malassezia within the oral cavity of man, its presence can be logically explained. First, Malassezia is a common skin commensal that has been isolated from the nares [31] and respiratory tract [32] of man; thus, its presence in the oral cavity is not unexpected. Secondly, as Malassezia has a relatively robust cell wall structure, the choice of cell lysis methodology may significantly affect the ability to isolate Malassezia DNA, resulting in a subsequent underestimation of fungal abundance [12,29]. In light of this, it is important to consider the differences in the DNA extraction processes used in the two studies. In fact, both studies used the same FAST DNA Spin Kit for DNA isolation; however, Dupuy et al. modified the protocol to include a robust mix of ceramic and zirconia beads to facilitate mechanical digestion, and also tripled the duration of the homogenization step [29].
The importance of bacterial-fungal and fungal-fungal interactions in the homeostasis of oral health was recently highlighted in individuals with and without HIV [26]. In this study, the authors concurrently profiled the microbiome and mycobiome in the oral cavity (Figure 1) of 24 subjects and identified a number of significant fungal-fungal correlations in individuals with and without HIV. Although both Candida and Penicillium were isolated from the oral cavity of all individuals, significant differences in the overall mycobiome profiles were identified between the health and disease states [26]. For example, Alternaria, Epicoccum and Trichosporon were only found in HIV-positive patients, whilst Pichia, Cladosporium and Fusarium were associated with health. In contrast, assessments of the microbial populations identified a stable oral microbiome between the two groups, predominated by Streptococcus and Prevotella. When the authors evaluated the bacterial-fungal relationships in this dataset, they identified a number of significant correlations, including a significant negative correlation between the abundance of Rothia and Cladosporium in the oral cavity of healthy individuals, although no mechanistic justification for this correlation has been given. Interestingly, the authors go on to identify an antagonistic effect between the oral fungal genera Candida and Pichia, such that a relative increase in Pichia colonisation was associated with a reduction in the abundance of Candida [26]. These findings highlight the importance of elucidating the role of bacterial-fungal and fungal-fungal interactions in microbiome and mycobiome community structures, as well as in health and disease.
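The sketch below illustrates the kind of cross-kingdom correlation analysis reported in that study: Spearman correlations between fungal and bacterial genus abundances across subjects. The abundance tables here are random placeholders, so the output will not reproduce the published correlations.

```python
# Sketch of cross-kingdom Spearman correlation analysis; the abundance
# values are random placeholders, not real study data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n_subjects = 24
fungi = {"Candida": rng.random(n_subjects), "Pichia": rng.random(n_subjects)}
bacteria = {"Rothia": rng.random(n_subjects),
            "Streptococcus": rng.random(n_subjects)}

for f_name, f_abund in fungi.items():
    for b_name, b_abund in bacteria.items():
        rho, p = spearmanr(f_abund, b_abund)
        print(f"{f_name} vs {b_name}: rho = {rho:+.2f}, p = {p:.3f}")
# A significant negative rho (e.g., Rothia vs Cladosporium in the study)
# would indicate the kind of antagonism discussed above.
```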
The Gut Mycobiome
Perhaps the most widely studied fungal niche in humans is the gastrointestinal tract. The higher burden of fungal cells in the gut compared to other body niches, along with the wealth of data linking the gut microbiome to systemic inflammation, makes the gut mycobiome an important area of study. Numerous authors have begun to unravel the role of the mycobiome in gut health [27] and disease, including inflammatory bowel disease (IBD) [8,9,33], obesity [34], and inflammation [16,17,35].
Molecular studies of the gut mycobiome have identified that healthy stools contain fungal genera belonging predominantly to either the Ascomycota or Basidiomycota taxa [27,33]. Furthermore, these studies report a rich and diverse fungal community within the GIT of healthy individuals, which is predominated by Candida, Saccharomyces, Trichosporon and Cladosporium [16,27,36].
In 2016, Mar Rodriguez et al. [34] investigated the role of the gut mycobiome in obesity, and showed that, although there was no significant difference in mycobiome richness between obese and non-obese individuals, the specific composition of the mycobiome could distinguish between them. In this respect, the obese mycobiome was predominated by Candida, Nakaseomyces, Penicillium and Pichia, whilst in the non-obese mycobiome Mucor, Candida and Penicillium were the most prevalent. Interestingly, the authors showed that the genus Mucor correlated negatively with metabolic markers of obesity (fasting triglycerides, low-density lipoprotein (LDL), cholesterol, Body Mass Index (BMI) and fat mass), whilst the genus Penicillium and the family Aspergillaceae correlated positively with high-density lipoprotein (HDL) [34]. Although this study did not elucidate the role of the diet in obesity and mycobiome composition, a previous study by Hoffmann et al. [27] uncovered interactions between diet and the gut mycobiome. In that study, Hoffmann et al. concurrently profiled the mycobiome and microbiome in stools (Figure 1), and showed that a healthy gut mycobiome is predominated by the fungal genera Candida and Saccharomyces, and the gut microbiome by Bacteroidetes and Firmicutes taxa. Of particular interest here were the positive associations of Candida with a carbohydrate-rich diet, and its co-occurrence with particular bacterial (Prevotella and Ruminococcus) and archaeal genera (Methanobrevibacter).
Taken together, these data provide strong support for the role of bacterial-bacterial and fungal-bacterial interactions in host metabolism and systemic inflammatory conditions.

A number of studies have begun to untangle the relationship between the intestinal mycobiota and IBD [8,9,33]. In these studies, the gut mycobiome of patients with IBD is characterized by a reduction in fungal biodiversity and a change in community composition compared to healthy controls. More specifically, at the phylum level, the ratio of fungi belonging to the Basidiomycota and Ascomycota taxa has been shown to be altered, such that there is a statistically significant increase in Basidiomycota taxa at the expense of Ascomycota in IBD [8]. The hallmarks of this dysbiosis appear to develop from a reduction in the relative abundance of Saccharomyces, Penicillium and Kluyveromyces, coupled with an increase in Candida, Malasseziales and Filobasidiaceae, in IBD compared to healthy controls [8]. Interestingly, although both studies reported a decrease in Saccharomyces cerevisiae and an increase in Candida in stools from IBD cases, the exact species of Candida differed between them: Sokol reported an increase in C. albicans [8], whilst Hoarau reported an increase in Candida tropicalis [9]. These differences might be explained by the different extraction methodologies used, but the variation is most likely explained by the different ITS sequencing targets used in the two studies: the Sokol study targeted ITS2, whereas the Hoarau study targeted ITS1 [8,9].
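The two dysbiosis read-outs discussed above, community diversity and the Basidiomycota/Ascomycota ratio, are simple to compute from a genus-level count table; the sketch below does so for a small hypothetical sample (the counts and genus-to-phylum assignments are illustrative).

```python
# Sketch of two dysbiosis metrics from a hypothetical genus-level count
# table: Shannon diversity and the Basidiomycota/Ascomycota ratio.
import numpy as np

counts = {"Saccharomyces": 400, "Candida": 250, "Penicillium": 90,
          "Malassezia": 160, "Filobasidium": 100}
phylum = {"Saccharomyces": "Ascomycota", "Candida": "Ascomycota",
          "Penicillium": "Ascomycota", "Malassezia": "Basidiomycota",
          "Filobasidium": "Basidiomycota"}

p = np.array(list(counts.values()), dtype=float)
p /= p.sum()
shannon = -(p * np.log(p)).sum()          # Shannon diversity index

basidio = sum(c for g, c in counts.items() if phylum[g] == "Basidiomycota")
asco = sum(c for g, c in counts.items() if phylum[g] == "Ascomycota")
print(f"Shannon H = {shannon:.2f}; Basidiomycota/Ascomycota = {basidio/asco:.2f}")
```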
Fungal Interactions
When thinking about the role that a specific species, or the fungal community as a whole, may play in host health and disease, it is important to remember that these species and communities do not exist in isolation. Recent developments suggest that, although the microbiome and mycobiome impact on the host, they also affect each other, and the host will in turn impact on the homeostasis of these species through the production of metabolites and other, more specific factors and interactions.
Polymicrobial Interactions and the Microbiome
Although there are a significant number of studies exploring bacterial communities, the importance of communities of fungal species in regulating the composition of the microbiome as a whole is only now beginning to be explored. In particular, the role of metabolites in these interactions is finally beginning to be examined [37]. The microbiota forms a complex ecosystem of cooperating microbes. Within this ecosystem, each species produces metabolic intermediates, signalling molecules and toxins that accumulate and impact on the physiology of other members of the community. Metabolomic approaches confirm that growth within a polymicrobial community results in alterations to the global metabolome that are dependent on the species present in the community. These polymicrobial communities also result in the production of new secondary metabolites through the action of multiple species in a chain of events, which may be of clinical significance. For example, mixed communities of Cladosporium and B. subtilis resulted in the production of a novel secondary metabolite that displayed antimicrobial properties, as well as an increase in surfactins [38]. Co-culturing Rhizopus microsporus with Burkholderia gladioli also induces the production of bongkrekic acid, a notable respiratory toxin [39]. Therefore, although in their infancy, these types of studies provide evidence that polymicrobial communities (like those found in the microbiome) affect secondary metabolite production, which may affect the community structure as well as its interactions with the host habitat. For example, the presence of C. albicans in oral biofilms promotes the growth of S. mutans through the induction of genes involved in metabolic pathways [40]. The close-knit community of the microbiome, in conjunction with alterations in metabolic flux, will also set up micro-domains of differing environmental parameters (e.g., pH and H2O2) within the biofilm, driving shifts in the local communities [41]. These changes in environmental stimuli will change the community composition of the microbiome, with less fit organisms being outcompeted by microbes better suited to these conditions. Fungal-bacterial interactions can also affect the community structure through spatial rearrangements. It is now widely accepted that many bacteria adhere to fungal hyphae [42]. This attachment permits redistribution of bacteria within the discrete layers of medically important biofilms. There is also increasing evidence that the presence of fungi in multi-species communities promotes antimicrobial resistance; this reduced susceptibility to antibiotics appears to be mediated via fungal contributions to the extracellular matrix [43]. Therefore, although our current knowledge of the role of fungal-bacterial interactions in microbiota structure and composition is limited, there is precedent that these interactions play an important role and require further investigation.
Fungal-Bacterial Interactions
Fungi and bacteria can interact on multiple levels, making polymicrobial interaction studies complex. These interactions can be either agonistic or antagonistic. The most studied fungal-bacterial interactions are those between C. albicans and Pseudomonas aeruginosa, due to their co-habitation of, and medical significance in, the cystic fibrosis lung and burn wounds. In this system, these two opportunistic pathogens display an antagonistic relationship. P. aeruginosa secretes 3-oxo-C12 homoserine lactone to control C. albicans morphogenesis, resulting in restricted hyphal growth [44]. However, quorum sensing only inhibits the initiation of hyphal formation, and does not affect the extension of pre-existing hyphae [45]. To overcome this, P. aeruginosa has also evolved the ability to bind specifically to C. albicans hyphae through attachment to carbohydrate components of the fungal cell wall, and to induce hyphal death [42]. P. aeruginosa also secretes phenazines (e.g., pyocyanin) that are toxic to fungi [46]. While high concentrations of phenazines kill C. albicans, physiological concentrations permit growth on fermentable carbon sources but restrict hyphal development [47]. The production of fermented by-products further enhances P. aeruginosa phenazine production, which promotes the colonisation of P. aeruginosa in the lung [48]. Therefore, P. aeruginosa appears to have evolved several mechanisms to manipulate C. albicans and restrict its growth to the yeast form. On one hand, this may seem counterproductive, as in other ecosystems bacteria use fungal hyphae to increase their dispersion, suggesting that P. aeruginosa could have evolved the ability to attach to C. albicans hyphae to aid its dissemination. Conversely, it is possible that yeast cells are better producers of fermented products than hyphae, which is why P. aeruginosa invests significant energy into maintaining C. albicans in its yeast form to promote its own colonisation in the host. However, C. albicans is not a silent partner in this relationship: it secretes its own quorum-sensing molecule, farnesol, which downregulates the expression of P. aeruginosa virulence factors through modulation of the Pseudomonas Quinolone Signal (PQS) system [49]. The roles these interactions play during infection are still unclear, and it is possible that the host environment determines which interactions will prevail.
The best-documented agonistic fungal-bacterial interactions occur in the oral cavity during the formation of dental plaque. Binding of Streptococcus gordonii or Streptococcus mutans to C. albicans hyphae results in stable biofilm formation around the surface of the tooth. Binding is mediated via the bacterial surface proteins CshA, SspA and SspB [50], and the fungal adhesin Als3 [51]. Despite colonising the hyphae, S. gordonii does not kill them. Instead, S. gordonii promotes C. albicans hyphal development through the secretion of the autoinducer AI-2, and through the inhibition of farnesol repression [52]. However, other quorum-sensing molecules secreted by S. gordonii and S. mutans can exert opposing effects on C. albicans morphogenesis, with diffusible signal factor (DSF) and competence-stimulating peptide (CSP) both inhibiting hyphal formation [53,54]. Therefore, like the interactions between P. aeruginosa and C. albicans, the outcome of the interaction between S. gordonii and C. albicans is likely to be controlled by the local environment.
Fungal-Fungal Interactions
In addition to interacting with bacteria, fungi also interact with one another. For example, C. glabrata is able to bind to C. albicans hyphae and "hitchhike" [55] through the host. This attachment promotes invasion of C. glabrata into the oral mucosa and may enhance disseminated C. glabrata infections. This discovery is cause for concern, as C. glabrata is inherently resistant to the azole class of antifungals [56], the first drugs of choice, making disseminated infection hard to treat.
The quorum-sensing molecule farnesol, secreted by C. albicans, is also able to control the morphology of other fungi, inhibiting hyphal growth, conidiation and germination [57,58]. Although the complete mode of action of farnesol is not known, intracellular cAMP levels are reduced upon treatment with exogenous farnesol in many fungi [57], suggesting that inhibition of cAMP signalling pathways is a general trait of farnesol. In addition to inhibiting fungal morphogenesis, farnesol also displays antifungal properties. For example, farnesol induces cell death in multiple fungal species via the generation of reactive oxygen species (ROS) from the mitochondria [59-61]. At high concentrations, farnesol also induces apoptosis in C. albicans, which is dependent on ROS generation [62]. Therefore, C. albicans uses farnesol in antagonistic relationships with other fungi to reduce competition in the host, and to control its own growth and morphology by eliciting different responses at specific threshold concentrations of farnesol.
Host-Fungal Interactions
Alongside the potential for extensive exchanges between different members of the microbial communities and their concomitant impact, these communities also interact with the cells and systems of the host habitat. Whilst a key component of these interactions is that between the fungi and the host innate and adaptive immune system, the specifics of those interactions are covered elsewhere in this special issue. Thus, here, we will focus on the non-immune interactions between host and mycobiome.
One of the more intriguing phenomena that may result from colonization by the mycobiome is "training" of the innate immune system. Recent studies have indicated that pre-exposure of macrophages to fungal cell wall products (β-glucan) results in epigenetic changes that ultimately lead to a stronger response upon later infection with live fungi [63]. Thus, the presence of a mycobiome may result in stronger innate protective responses to all microbes. Whether there is any specificity of this response to key species remains to be determined.
As well as the mycobiome impacting on the host, the host can also have a significant impact on the mycobiome. The presence of microbes and their metabolites (such as short-chain fatty acids) leads to the production of a cocktail of antimicrobial peptides (AMPs) that in turn can regulate the species present in the gut. The impact that this circuit can have on the microbiota and health is clearly demonstrated by work investigating the role of NLRP6 in colonic epithelial cells [64,65]. This work demonstrates that loss of a detection mechanism can lead to shifts in the microbiota and the associated metabolome, which can then further reshape microbiome composition. Importantly, these shifts are maintained even in an otherwise normal genetic background host due to the shift in the metabolome. From this, we can see that an ability to model the interactions between host, mycobiome and metabolome will be an extremely powerful tool in determining the role of host-mycobiome interactions in both health and disease.
Modeling of the Mycobiome, Microbiome and Host Interactions
Metagenomic analysis can provide information on the genes and species of the bacteria, and potentially fungi, present, and by using different functional databases such as KEGG, the metabolic functions of these communities can be determined. However, due to the extreme complexity of human microbial ecosystems, multi-omics analyses alone are incapable of dissecting the overall metabolism of these ecosystems from the community level down to the individual level, and thus of elucidating the interactions between microbial species/strains, between microbe and host, and with other environmental factors. In the study of these complex biological ecosystems, mathematical modeling can provide critical insights that assist in understanding the underlying mechanisms through the evaluation and testing of different hypotheses. Among these mathematical models, genome-scale metabolic models (GEMs) are perhaps the most important, and have been used to understand the molecular mechanisms of individual organisms in a biological system through the analysis of genotype-phenotype relationships [66]. Tissue/cell-specific GEMs have been successfully applied to both human health and disease, to identify novel biomarkers for early diagnosis and efficient treatment of a variety of conditions, such as non-alcoholic fatty liver disease and certain cancer cell types [67,68]. GEMs have shown their worth and utility in the study of fungi, through prediction of their phenotype when taking up different substrates, prediction of the effects of gene knockouts and as a platform for network-independent analyses to identify key metabolites and sub-networks [69,70]. Recently, these powerful tools have been applied to the study of microbial communities, such as the human gut microbiome [69]. Using GEMs in community metabolic modeling can successfully predict the contribution of individual species, and the interactions between them, to the overall simplified community metabolism, and can elucidate the interactions between the bacteria [71,72]. Through the generation of comprehensive toolboxes for community modeling, such as CASINO (Community And Systems-level INteractive Optimization), and the use of GEMs for the predominant bacteria in the human gut, the alteration in the amino acid profile of both feces and serum in response to dietary interventions can be simulated and validated [73]. These successful examples of metabolic modeling of human tissues/cell lines, fungi and microbial communities pave the way for the application of these methods in mycobiome research, enabling us to better understand the interactions between fungi and bacteria, between fungi and other fungi, and with their host habitat; this allows us to elucidate their role in different diseases, alongside their overall contribution to host-microbial metabolism.
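At the core of most GEM analyses is flux balance analysis (FBA): a linear program that maximizes an objective flux (typically biomass production) subject to steady-state mass balance, S·v = 0, and bounds on each reaction. The sketch below illustrates that computation on an invented three-reaction toy network; it is a minimal illustration and is not taken from any of the models cited above.

```python
# Minimal flux balance analysis (FBA) on an invented three-reaction network:
#   uptake: -> A,  conversion R1: A -> B,  biomass: B ->
import numpy as np
from scipy.optimize import linprog

# Stoichiometric matrix S (rows: metabolites A, B; columns: reactions)
S = np.array([
    [1, -1,  0],   # A: produced by uptake, consumed by R1
    [0,  1, -1],   # B: produced by R1, consumed by the biomass reaction
])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # uptake capped at 10 flux units
c = np.array([0, 0, -1])  # linprog minimizes, so negate the biomass flux

# Steady state (S @ v = 0) plus the bounds defines the feasible flux space
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("max biomass flux:", -res.fun)   # -> 10.0, limited by the uptake bound
print("flux distribution:", res.x)     # -> [10., 10., 10.]
```

Community-level extensions of this idea stack the stoichiometric matrices of several species and couple them through shared exchange metabolites, which is conceptually how toolboxes such as CASINO partition community metabolism among member organisms.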
Conclusions
As we develop an improved understanding of the pivotal role played by microbial communities in health and disease, we also increase our appreciation of the key role played by fungal communities in these situations. These fungal communities unsurprisingly show significant variation between different body habitats and with changes in disease status. We are beginning to grasp the significant role that these variations play in host homeostatic responses and pathologies, although our understanding here is still very much in its infancy. As we develop an increasing understanding of how factors such as host and microbial responses impact on the mycobiome and, likewise, how the mycobiome affects other microbial communities and the host, so we will improve our ability to predict the significance of changes in the mycobiome for host status. As we move forward, advanced in silico modeling techniques (such as GEMs) associated with systems biology will be of ever-increasing importance, enabling us to create ever more complex predictions of the role of different species, cell types and metabolites, with the ultimate goal of being able to determine specific, personalized interventions that improve the health of an individual.
An Iterative, Participatory Approach to Developing a Neighborhood-Level Indicator System of Health and Wellbeing
Despite increased awareness of the essential role of neighborhood characteristics for residents’ health and wellbeing, the development of neighborhood-level indicator systems has received relatively little attention to date. To address this gap, we describe the participatory development process of a small-area indicator system that includes information on local health needs in a pilot neighborhood in the German city of Mannheim. To identify relevant indicators, we partnered with representatives of the city’s public health department and used an iterative approach that included multiple Plan-Do-Check-Act cycles with ongoing feedback from local key stakeholders. The described process resulted in a web-based indicator system with a total of 86 indicators. Additionally, 123 indicators were perceived as relevant by stakeholders but could not be included due to data unavailability. Overall, stakeholders evaluated the participatory approach as useful. Even though the onset of the COVID-19 pandemic and the lack of some data elements hindered instrument development, close collaboration with public health partners facilitated the process. To identify and target sub-national health inequalities, we encourage local public health stakeholders to develop meaningful and useful neighborhood-level indicator systems, building on our experiences from the applied development process and considering identified barriers and facilitators.
Introduction
From a socio-ecological perspective, health and wellbeing are not only determined by individual factors (e.g., age or gender) but also by the physical, social and economic environment in which people live [1,2]. As about 55% of the global population lives in urban areas [3,4], living environments play an essential role in shaping individual health status and resources for maintaining health [5]. Neighborhood characteristics in particular have been linked to health and wellbeing through associations with the availability and accessibility of health-care services, opportunities for physical activity and recreation (e.g., parks and green spaces) and healthy, affordable food options [6]. The importance of other neighborhood-level factors such as traffic density, walkability and the existence of social networks to health has also been well documented [7,8].
Differences in these environmental determinants of health, especially when several health risk factors cluster, are likely to lead to neighborhood deprivation in underserved areas, thereby contributing to associated health inequalities at a sub-national level [9,10].
Recognizing the essential role of characteristics of urban areas and neighborhoods for health and wellbeing, various global initiatives such as the United Nations' Sustainable Development Goals (SDGs) and the World Health Organization's (WHO) Healthy City Networks have pointed to the importance of establishing urban health indicator systems [3,7]. According to Pineo et al., these systems have been defined as "collections of summary measures about the urban environment's contribution to human health and wellbeing", with a broad interpretation of health.

Following the funding period, the city's public health department and the public health research institute provided internal funds to continue the project.
Setting
Mannheim is a city in southwestern Germany that covers an area of 145 km² and has more than 320,000 residents [25,26]. For analysis, planning and administrative purposes, the city council subdivides the city into a total of 38 different-sized (0.51-20.49 km²), geographically defined statistical units (referred to as "neighborhoods") that represent coherent social spaces in the city [27]. Health-related interventions in these neighborhoods are defined and implemented by different stakeholders at the city and neighborhood level, namely city departments (e.g., the city's public health department), individual local institutions (e.g., schools, kindergartens, youth centers and local self-help institutions) and interprofessional working groups (e.g., working groups for a family-friendly neighborhood). Several separate statistical reports by different city departments are available on selected topics, such as education, child health or social circumstances, to identify areas of social disadvantage and guide local decision-making. In neighborhoods identified as particularly disadvantaged, the city typically employs a neighborhood manager to run a neighborhood management office, a central contact point for residents and stakeholders. Typically, a neighborhood manager has a professional background in educational science, social science, or in the areas of urban planning, local politics or public management and governance. Depending on funding structures, neighborhood managers are employed by a registered neighborhood management association or a non-profit organization. Their responsibilities include the coordination and implementation of activities and strategies to reduce disadvantages and to strengthen residents' identification with the neighborhood. To date, five neighborhood managers have been assigned to neighborhoods throughout the city [28]. Through a mutual coordinator, the neighborhood managers are in regular contact with each other, as well as with stakeholders in their neighborhoods and at the city level.
To select a neighborhood suitable as a pilot region for developing a neighborhood-level indicator system, we held extensive discussions with partners at the city's public health department centered on three main considerations: the pilot region should be a neighborhood that had not been the focus of health promotion projects in the past, where the local neighborhood manager expressed a high level of interest in proactively integrating health topics into their work, and where a specific interest existed in having a neighborhood assessment tool available. The neighborhood perceived to best meet these considerations covers an area of approximately 1.4 km², with a population of almost 7500 inhabitants (i.e., a population density of about 5500 inhabitants per km²). At the time of the study, the neighborhood population's average age was 41.5 years, with a high proportion of inhabitants aged under 18 years (nearly 18.5%) compared to most other neighborhoods in Mannheim. In statistical reports, the neighborhood has been characterized as having above-average social problems (e.g., a high unemployment rate of above 8.5% and a high proportion of households at high risk of poverty, such as more than 4% of households with three or more children [26]).
Development Process
In discussions with the city's public health department, we chose to use an iterative approach consisting of several Plan-Do-Check-Act (PDCA) cycles [29] (Figure 1) that involved stakeholders at multiple points, described more fully below. We felt that this was an intuitive process with which community members would be familiar and which was respectful of time demands, in contrast to other methods of consensus building (e.g., the Delphi process [30]).
Searching the Literature
Research team members (J.H.-K. and K.Ho.) conducted a non-systematic but extensive literature search to identify a broad range of health indicators used elsewhere in existing, mostly large-area systems, to serve as a starting point for indicator development. The scientific databases PubMed/Medline and Web of Science were searched using keywords such as "health indicators", "health reporting", "health monitoring", "wellbeing" and "quality of life" combined with the terms "city", "community" and "neighborhood". Considering that indicator systems are not always published in scientific journals, we also searched Google, Google Scholar and the websites of various national and international organizations known for their expertise and practice in health monitoring, including the Robert Koch-Institute, State Institutes for Public Health, the WHO and the Organisation for Economic Co-operation and Development. This search strategy was purpose-driven, with the goal of initiating discussions on thematic areas and individual indicators and providing a broad basis on which to develop the indicator system. Notably, in light of the project's limited timeframe, the search was not intended to be as rigorous as a full systematic review.
The resulting literature was screened to identify potential neighborhood-level indicators that are directly or indirectly related to health and wellbeing. The potential indicators were collected and organized into clusters, which thematically group related indicators into various "dimensions" (e.g., "child health" or "education"). Initial thematically related clusters and suggestions for labels for each dimension were proposed by the research team (H.R., J.H.-K. and K.Ho.) and discussed in a group meeting with the city's public health department representatives (K.He. and H.K.) until consensus was reached.
Revising the Indicator Set
The initial set of indicators formed the basis for the first PDCA cycle (Figure 1). To explore whether the indicators and dimensions previously identified in the literature search were likely to represent the health and wellbeing of residents in our pilot neighborhood, we invited a diverse group of city-level stakeholders who represented institutions already working directly with either the neighborhood manager in the pilot neighborhood or the city's public health department. These were predominantly people with lower or middle management responsibilities (e.g., institution head or department head) influencing the situation in the neighborhood. They represented institutions including the social welfare office, the local statistics office, the local police department and institutions from the pilot neighborhood itself (e.g., school principals and the neighborhood manager) (Table 1). Elected representatives who may have certain competing interests were intentionally excluded from participation. We sought service providers from not-for-profit institutions who were best positioned to be aware of the needs and preferences of members of the pilot neighborhood. Inclusion criteria for participation in this process were having access to data on the neighborhood and/or active involvement in the pilot neighborhood, having connections within local networks and being a potential end-user of the neighborhood barometer. To broaden perspectives, stakeholders nominated others likely to work outside of the existing networks of the neighborhood manager or the city's public health department for invitation to participate. Fourteen face-to-face meetings with 26 stakeholders (Table 1), lasting between half an hour and two hours, took place between June 2018 and February 2019. The number of participants varied from one to six stakeholders per session. Group meetings were scheduled by mail; individual meetings were offered for those who could not participate in a group meeting.
In these meetings, stakeholders were asked which dimensions and specific indicators they perceived most useful, using the indicator list we had created from the literature search. Furthermore, they were asked whether specific indicators and dimensions should be deleted or added to enhance comprehensiveness and relevance (both aspects self-defined), whether the initial assignment of clusters and labels was suitable, whether other potential data sources existed and who the data owners might be. The feedback obtained was discussed with our city's public health department partners and resulted in agreement on an initial indicator set. Most participants knew each other from prior work, which created a collaborative work environment in which competing interests were not expected to impact the research process.
Collecting the Data
In the next step, we contacted the identified data owners (i.e., the local statistics office, the social welfare office, the city's public health department, the youth welfare office, the police department and the local elementary school) to inquire whether the desired data elements, aggregated or non-aggregated, were available for the pilot neighborhood. If those data were available, we requested the most recent data as well as data from 2010 to 2017 in order to facilitate trend recognition over time. We required that data for indicators were based on representative statistical samples and were available for at least five residents to avoid identifiability. Although this was not an inclusion criterion, all requested data were available free of charge. To assess validity, we checked whether the survey instruments used had been validated and whether the data owners provided clearly defined data collection procedures and descriptions of the algorithms applied to the data. To provide guidance for the future development of a neighborhood-level indicator system, data elements that did not meet these criteria or were not currently available but were considered relevant by one or more stakeholders were recorded as promising for incorporation into a future version of the neighborhood barometer. Reasons for excluding indicators were documented. We took various steps to establish the validity of the data. The data were checked for outlier values using SPSS Statistics 24 (IBM Corporation, Armonk, NY, USA). If unexpected values were identified, the data owner was consulted for clarification. Additional geospatial data in the form of street addresses and GPS coordinates for neighborhood-based resources or amenities were extracted from Google Maps and checked for validity by visiting each site. We requested clear definitions for each indicator from the data owners or determined them ourselves after review of the data sources.
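The outlier screen described above was run in SPSS; purely as an illustration of that step, the same check can be sketched in Python. The file name, indicator columns and the |z| > 3 cutoff below are hypothetical, not taken from the study.

```python
# Hypothetical sketch of the outlier screen described above (the authors
# used SPSS Statistics 24); column names and the cutoff are assumptions.
import pandas as pd

df = pd.read_excel("indicators.xlsx")  # one row per indicator-year (assumed)
value_cols = ["unemployment_rate", "households_with_children_pct"]

for col in value_cols:
    z = (df[col] - df[col].mean()) / df[col].std(ddof=1)  # standard z-scores
    flagged = df.loc[z.abs() > 3, ["year", col]]          # unexpected values
    if not flagged.empty:
        print(f"Consult data owner -- unexpected values in '{col}':")
        print(flagged)
```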
Although the process focused primarily on identifying actionable indicators directly related to health and wellbeing, we also wanted to ensure that relatively constant indicators indirectly related to health and wellbeing (e.g., percentage of households with children or gender structure) were included, as they might assist end-users in seeking highly specific information that could be cross-referenced with contextual characteristics of the neighborhood.
Building the Visual Interface
We were interested in creating a tool that would be easy to update, that would be accessible to end-users without specialized software knowledge and that would allow visual representations of the data in complementary ways. It was planned that the city's public health department would maintain the neighborhood barometer in the future. Thus, the research team, jointly with representatives of the city's public health department, decided to select the software Tableau (Tableau Software LLC, Seattle, WA, USA) for several reasons: the ease of importing data from Excel files, straightforward tools for updating data, an intuitive user interface that did not require knowledge of programming languages and interactive functionality that allowed end-users to apply a variety of filters to create visual data reports on demand. For the data visualization in Tableau, we created one comprehensive Excel file filled with data on indicators and one Excel file including geospatial data. To facilitate data processing and to create linkages across Excel worksheets, we used a Tableau Add-In for Excel to pivot data reported in rows into columns. Based on the resulting Excel files, we created Tableau Worksheets and Dashboards that were finally merged into one Tableau Story. Unweighted data were displayed as absolute figures, percentages, proportions, average values or rates. Aggregated data for the entire city were provided per year as reference values.
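The pivoting step mentioned above reshapes worksheet records between row-wise and columnar layouts so that a visualization tool can chart them directly. As a rough analogue of what the Tableau Add-In for Excel does (not the add-in itself), the following pandas sketch uses invented indicator names and values.

```python
# Illustrative reshape analogous to the Excel pivot step described above;
# the indicators, years and values below are invented for the example.
import pandas as pd

tall = pd.DataFrame({
    "indicator": ["unemployment_rate"] * 3 + ["avg_age"] * 3,
    "year": [2015, 2016, 2017] * 2,
    "value": [8.7, 8.6, 8.5, 41.1, 41.3, 41.5],
})

# Pivot row-wise records into one column per indicator (wide layout)
wide = tall.pivot(index="year", columns="indicator", values="value")
print(wide)

# melt() is the inverse operation, returning to the tall layout
back = wide.reset_index().melt(id_vars="year", value_name="value")
```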
Developing the Beta Version
Refining the initial version of the barometer formed the basis for efforts in the second PDCA cycle (Figure 1). The visual interface was shared with all previously participating stakeholders in February and March 2019. Feedback sessions took place in three face-to-face meetings with six individuals and by email with an additional three (Table 1). Each stakeholder was asked to comment on the following areas: comprehensiveness, relevance and potential for misinterpretation of the chosen indicators, the overall user-friendliness of the visual interface and its potential as a planning and monitoring tool. Additionally, we asked those stakeholders who had provided data to comment on the accuracy of the data presentation and the definitions provided. Feedback was recorded by team members (H.R., J.H.-K. and K.Ho.) using meeting notes, emails and protocols. Feedback responses were organized and summarized for each area investigated. Action steps resulting from this process and leading to the development of a beta version (April-September 2019) included removing indicators with potential for misinterpretation and adding an introduction that specified the aims and limitations of the barometer.
Beta Testing
Beta testing and further refinement were at the core of the third PDCA cycle (Figure 1). We held one virtual and three in-person feedback sessions between October 2019 and October 2020, with additional sessions limited by restrictions following the onset of the COVID-19 pandemic. Beta test participants included individuals or working groups who were either engaged at the neighborhood level, had a neighborhood- or city-level leadership function, or had been involved in previous PDCA cycles (Table 1). In one case, we added discussion of the beta test as an agenda item to a previously scheduled meeting, whereas three other meetings were specifically devoted to the topic. In each meeting, we presented the visual interface of the beta version and solicited general impressions and interpretations of the data from participants' diverse professional perspectives. Specifically, we asked for perceptions about the utility of the individual indicators presented and whether the neighborhood barometer as presented could be considered potentially useful. Feedback was recorded by team members (H.R., J.H.-K. and K.Ho.) using meeting notes, which were summarized and discussed with representatives from the city's public health department (K.He. and H.K.). As a result of these discussions, we added a description and geographical definition of the neighborhood. As no other feedback was provided, the research team and the city's public health department agreed that the beta version was ready for implementation in the field.
Implementing the Tool
Implementation plans started with a discussion with representatives from the city's public health department about which stakeholder groups might represent potential end-users. The next steps, which included an introductory session on the beta version for these potential end-users, were interrupted by measures to contain the COVID-19 pandemic in Germany and by the involvement of representatives from the city's public health department in local pandemic management. Going forward, plans include the use of future PDCA cycles to address verbal and written comments and feedback on specific aspects, the general utility of the neighborhood barometer and confirmation of the extent to which indicators reflect the intended characteristics. Actual use of the web-based interface will be determined from monthly website usage statistics.
Updating the Tool
As measures to contain the COVID-19 pandemic in Germany were relaxed and the involvement of our partners in local pandemic management decreased towards the end of the writing of this manuscript and during the revision phase, representatives of the city's public health department were able to update the data for the years 2018, 2019, 2020 and 2021. They contacted the data owners again and updated the existing Excel file with the available data. Regular updates by the city's public health department are planned.
Uncovering Facilitators and Barriers to Creating a Neighborhood Barometer
To identify lessons learned, potential barriers and facilitators in creating a neighborhood barometer, we regularly reviewed and extensively discussed the process, impressions and experiences documented in protocols of project meetings within the research team (H.R., J.H.-K. and K.Ho.). We defined facilitators as factors that simplified the process and barriers as factors that hindered either the development process or the implementation phase.
The Field-Ready Version of the Neighborhood Barometer
At this stage, the iterative, participatory approach resulted in agreement on a field-ready version consisting of 86 indicators grouped into eight dimensions, which we labelled population structure, population development, household structure, material wellbeing (economics), education, family and upbringing, child health and personal security. The data displayed in the barometer monitor relevant indicators over time and in comparison with the entire city of Mannheim. Through this process, we also identified and subsequently mapped geospatial data for 43 amenities thought to be related to health or wellbeing in our pilot neighborhood.
The selected indicators were largely based on aggregated registration data from local offices or agencies (i.e., the local statistics office, the youth welfare office and the police department). Other, non-aggregated data were obtained from the early childhood intervention service "Welcome to Life" run by the youth welfare office and from school entry screenings conducted by the city's public health department, in which previously validated instruments were applied (e.g., body weight measurement using standardized scales). Data derived from non-validated instruments were excluded from the field-ready version of the neighborhood barometer. Detailed information on the indicators included in the field-ready version of the neighborhood barometer, with information on the data owner, the received data and the data preparation for visualization, can be found in Supplementary Table S1. Due to the inclusion of predominantly aggregated data, cross-tabulation between indicators is not possible. In addition, not all data were available for an update due to the COVID-19 pandemic (e.g., no school entry screenings took place during the pandemic).
As a result of the feedback loops with stakeholders and ongoing discussions and decisions made with representatives from the city's public health department, the website for the neighborhood barometer was organized using tabs serving different purposes (Figure 2), namely introduction (1), neighborhood (2), neighborhood barometer (3), parking lot (4), map (5), glossary (6), references (7) and contact persons (8).
As a result of the feedback loops with stakeholders and ongoing discussions and decisions made with representatives from the city's public health department, the website for the neighborhood barometer was organized using tabs serving different purposes (Figure 2), namely introduction (1), neighborhood (2), neighborhood barometer (3), parking lot (4), map (5), glossary (6), references (7) and contact persons (8). An introduction (1) [Einleitung] clarified the intended purpose, the aims, the opportunities, the limits and the structure of the neighborhood barometer in response to stakeholder feedback. A description and geographical definition of the neighborhood (2) [Stadtteil] specified the underlying population characteristics to maximize transparency, as some end-users might be unaware of the borders of the statistical unit comprising the pilot neighborhood. An overview of the data contained (3) [Quartierbarometer] is illustrated by a series of bubbles identifying each dimension (i.e., population structure, population development, household structure, material wellbeing, education, family and upbringing, child health and personal security) to facilitate exploration of the data ( Figure 2). An example of the application of the neighborhood barometer for a potential user interested in family and upbringing indicators is presented in Figure 3. An introduction (1) [Einleitung] clarified the intended purpose, the aims, the opportunities, the limits and the structure of the neighborhood barometer in response to stakeholder feedback. A description and geographical definition of the neighborhood (2) [Stadtteil] specified the underlying population characteristics to maximize transparency, as some end-users might be unaware of the borders of the statistical unit comprising the pilot neighborhood. An overview of the data contained (3) [Quartierbarometer] is illustrated by a series of bubbles identifying each dimension (i.e., population structure, population development, household structure, material wellbeing, education, family and upbringing, child health and personal security) to facilitate exploration of the data (Figure 2). An example of the application of the neighborhood barometer for a potential user interested in family and upbringing indicators is presented in Figure 3.
As stakeholder meetings resulted in the identification of an additional 123 indicators perceived as relevant but lacking readily available data, a "parking lot" of ideas was provided (4) [Wunschbarometer] to enable retention of this information and to guide the possible direction of future data collection and analytical opportunities in the piloted neighborhood (Supplementary Table S2). For example, various environmental indicators (perceived heat stress, perceived air quality, perceived noise pollution and total share of green areas) were identified as important and relevant for the neighborhood context but lacking data for inclusion in the barometer. They were, therefore, added to the parking lot.
As feedback from several participants stressed the importance of being able to visualize the physical infrastructure of the neighborhood and the need to identify potential shortfalls in particular amenities, we provided a visual display of selected features in map format (5) [Karte], which could be filtered on demand. Additionally, a glossary (6) [Glossar] containing definitions and data resources, a list of references (7) [Quellenverzeichnis] and contact persons for questions and suggestions (8) [Ansprechpartner] were added to the website to maximize transparency (Figure 2).
Facilitators and Barriers
Stakeholder involvement and engagement were verified as a key facilitator in creating a neighborhood barometer through review of meeting minutes and project diaries. This aspect of the development process added value, as it uncovered relevancies that were not obvious to the research team purely on the basis of previously published literature identified through the literature search. For example, nearly 40 indicators not identified in the literature were included due to stakeholders' perceptions (e.g., "proportion of children who can/cannot swim in primary school", as stakeholders had concerns about this topic). Moreover, 25 indicators from the literature search were excluded as stakeholders raised concerns about misinterpretation or a lack of informative value to an extent the research team had not anticipated (e.g., "number of private cars per household" was felt to be an accurate indicator of neither material wellbeing nor mobility in this neighborhood).
Investing time and holding meetings in person appeared to further facilitate the development process. Upon reflection, the perceived quality of interaction seemed greater in meetings dedicated entirely to discussion of the neighborhood barometer compared with those in which it was discussed as one of several agenda items. Additionally, meetings held in person, compared with those held online, resulted in more extensive discussions, more active interpretation of the data presented in the barometer and a greater quantity of feedback, as reflected by the number of comments and reactions and the amount of time devoted to the discussions. Review of study notes also suggested that the continuous and substantial nature of the collaboration with our partners from the city's public health department facilitated the process. For example, they regularly participated in project meetings, identified and enabled access to members of an extensive existing network of stakeholders and dedicated personnel resources to maintaining and updating the neighborhood barometer in the future.
The primary barrier we encountered was the onset of the COVID-19 pandemic and related containment measures, which halted plans for implementing the beta version of the neighborhood barometer in the proposed pilot neighborhood. For example, face-to-face group meetings were prohibited in Germany at that time. Priorities among stakeholders also changed dramatically during this time, leaving participants with relatively few opportunities to familiarize themselves with a new indicator system in the face of more pressing tasks. In addition, our partners at the city's public health department were highly involved in local pandemic management, which was prioritized over the implementation and updates of the barometer for a long time.
Another important barrier complicating the development of a pilot neighborhood barometer was the unavailability of data endorsed by stakeholders as potentially informative. For example, data for more than 100 indicators in different areas were not routinely collected by any city agency or non-governmental organization. Alternatively, data on many indicators of interest were available, but not at the neighborhood level or only for other kinds of administrative units such as constituencies or school catchment areas. Finally, some useful data elements were available at a neighborhood level but with case numbers too small to guarantee data privacy.
Discussion
In close collaboration with partners at our local public health department and by soliciting and incorporating stakeholder ideas and feedback, we designed and developed a field-ready neighborhood-level indicator system of health and wellbeing tailored to the local and end-users' needs in a neighborhood in Mannheim, Germany. Following several PDCA cycles with multiple feedback loops each, we created a user-friendly, web-based tool that presents data in various, complementary ways in order to meet the differing informational needs of a diverse group of stakeholders in this neighborhood. Stakeholder feedback supported the inclusion of indicators deemed relevant, a suitable grouping scheme of indicators in various dimensions, and comprehensibility of definitions applied. In addition, it allowed us to critically appraise indicators and investigate their potential for misinterpretation.
Using an iterative participatory approach, we were able to incorporate stakeholders' preferences, professional knowledge and feedback, and we gained a clearer sense of the context for the data presented in the barometer at multiple points during the development process. In line with benefits reported in previous research [11], we found that extensive meetings with a heterogeneous group of stakeholders from various disciplines fostered intersectoral exchange throughout the development phase. The collaboration with representatives of the city's public health department, who are local key actors in monitoring residents' health, enabled us to better understand local needs and stakeholders' feedback. In addition, we benefited from the city's public health department acting as a door opener to stakeholders at the neighborhood and city levels. This collaborative relationship, which began with jointly writing the funding application, resulted in a high level of transparency during the development process, mutual decision-making and a field-ready version of the barometer with broad support from the involved stakeholders. Additionally, the collaboration served as a foundation for joint efforts on other currently ongoing projects, demonstrating the sustainability of this approach.
Overall, the process we followed provides strong evidence for the potential value and synergy that may arise from collaborations between public health science and local health departments. To leverage the full potential of these collaborations, discussion on ways to support and build connections between disciplines will be important. In the German state of Baden-Wuerttemberg, current efforts have taken shape in the form of the Center for Public Health and Health Services Research, established in 2019 [31], to support linkages between university medicine, health-care research and health services. Opportunities for similar formalizations of local research collaborations are anticipated in other settings in which this approach might be used, with high potential to respond to local public health needs.
Previous research suggests that successful collaboration benefits significantly from time devoted to relationship building despite the increased burdens this may pose on those involved [32,33]. Finding a common language, developing shared goals, and acknowledging mutual and individual concerns, however, represent elements of a strategy that respects differing work cultures and that may counterbalance the short-term costs of relationship building. From our experience in this study, we suggest that time for these aspects of relationship building should be scheduled as formal activities to be supported by funding agencies.
We were able to facilitate a process in which stakeholders agreed on a set of indicators perceived as relevant for identifying and prioritizing local needs and action points in the target neighborhood. The neighborhood barometer was largely based on registration data from local offices or agencies that guaranteed the accuracy of data at the neighborhood level. However, due to the lack of data availability for some indicators, it was not possible to include many health-related data elements of interest, which were collated in a repository (the "Parking Lot", Supplementary Table S2). The Parking Lot shows that in particular, stakeholders missed information on health in all age groups (i.e., children, adolescents, adults and seniors), subjective population wellbeing (e.g., self-rated health status and life satisfaction), living conditions (e.g., perceived heat stress and perceived air quality) and participation and involvement (e.g., neighborhood cohesion and electoral turnout in different elections). Considering the multidimensionality of health [34,35] and the associations between diverse characteristics of neighborhood environments and different dimensions of health [36], our study suggests that at least at the local level, the current availability of data appears insufficient in reflecting the interconnectedness of place and health.
We hope that the awareness of this complexity and desire for more specific and granular data among stakeholders, as indicated by their feedback obtained in the development process, will lead to more comprehensive efforts in routinely recording and collecting neighborhood-level data for analysis beyond the current standard. Increasing digitalization holds potential to open access to usable, new data sources such as social media platforms and geolocation data available through mobile devices [8]. In addition, primary data collection using neighborhood surveys can generate a deeper understanding of residents' perceptions of their needs and thereby support the definition and identification of local priorities [37]. Moreover, primary data collection would enable the previously missing function of cross-tabulation between indicators and dimensions that various stakeholders desired. Accordingly, a further concentration of resources and skill-building for the optimal use of existing data sources and opportunities for the development of new data sources are needed. At present, there are no established routines in place at the municipal level to collect data identified by the participants of this pilot study. Discussions are currently underway and will be more thoroughly explored following completion of a field test. The Parking Lot provides additional insight into local stakeholder preferences and might provide guidance for developers and funders of a neighborhood-level indicator system for monitoring and surveillance.
Even though the processes described here appear useful for researchers, community planners and policy makers who are planning to fund, develop or revise neighborhood-level indicator systems elsewhere, a few limitations should be noted. Although the indicators selected by stakeholders in the pilot neighborhood in Mannheim are shaped by local needs, preferences and data availability, the contribution of this work is best reflected by the processes used in identifying health-related indicators endorsed by potential end-users and through the recognition of facilitators and barriers in the processes used to identify them. A non-systematic literature search to identify an initial set of indicators (first PDCA cycle) was chosen for pragmatic reasons (the limited funding period); this may not have identified all indicators covered in the international literature, which might have lowered the odds of their consideration during the participatory development process. In addition, the majority of stakeholders in this study work with children and young people, which may have influenced the selection of indicators toward a focus on child health. The limited availability of data for integration in the barometer enhanced this focus on child health, which shifts somewhat when all potential indicators from the Parking Lot (Supplementary Table S2) are considered. For example, environmental indicators of health and wellbeing were regarded as important but could only be added to the Parking Lot. Furthermore, our study benefitted from pre-existing networks of the city's public health department and the neighborhood management. In the absence of such networks, researchers and indicator developers should consider the potential for difficulties that may arise in establishing new contacts and in the consensus process, which may require additional time and effort.
Given our inability to conduct a field trial during the pandemic, we were not able to fully establish the validity of the indicators that were endorsed as relevant by the neighborhood stakeholders. Accordingly, although outside the funded timeline, the barometer will be field-tested, and future PDCA cycles will be conducted to further refine the instrument and tailor it to potentially unanticipated local needs and logistical challenges. At the time of writing, our partners in the city's public health department were in the process of updating the data for the included indicators. In addition, the development of the barometer, and especially the Parking Lot, resulted in a survey in the pilot neighborhood to collect information that was unavailable when the barometer was developed. The upcoming analysis and integration of the survey results into the neighborhood barometer will be used to further enhance the instrument and will be the subject of another publication. Based on these survey results, we anticipate further discussions and the implementation of the neighborhood barometer, which remains the goal of all involved. Upon implementation and evaluation of the tool, ambitions include the extension of the instrument to other neighborhoods in Mannheim and its integration into local decision-making processes. Future work should consider the development of small-area indicator sets using participatory stakeholder approaches in other geographic areas, where adaptations of our barometer and Parking Lot may serve as a starting point for discussing local needs.
Conclusions
We present an iterative participatory approach, characterized by stakeholder involvement and a strong collaboration with the local public health department, that resulted in the development of and agreement on a field-ready small-area indicator system of health and wellbeing for a pilot neighborhood in Germany, tailored to local and users' needs. Namely, the agreed system includes 86 indicators across eight domains (population structure, population development, household structure, material wellbeing, education, family and upbringing, child health and personal security), with a further 123 indicators excluded from the instrument due to data unavailability. The process described here contributes to and should encourage further work in developing meaningful, useful and sustainable neighborhood-level tools that can be used to monitor and promote health and wellbeing. Our work also identified the lack of non-aggregated health data useful for neighborhood monitoring and surveillance in the pilot neighborhood in Mannheim. Collecting these data would enable the study of population health at the neighborhood level (e.g., through cross-tabulation of indicators) and may contribute to identifying local health needs for targeted allocation decisions and policy development. By collaboratively developing and implementing similar tools in other regions, local stakeholders can actively support the identification and reduction of sub-national health inequalities.
MiR-508-3p promotes proliferation and inhibits apoptosis of middle ear cholesteatoma cells by targeting PTEN/PI3K/AKT pathway
Cholesteatoma of the middle ear is a common disease in otolaryngology that can lead to serious intracranial and extracranial complications. Recent studies have shown that the dysregulation of microRNAs may be involved in the formation of middle ear cholesteatoma. This study aimed to explore the regulatory effect of micro ribonucleic acid 508-3p (miR-508-3p) on the proliferation and apoptosis of middle ear cholesteatoma cells and to uncover its underlying regulatory mechanism. We found that miR-508-3p expression was upregulated in cholesteatoma tissues and cells and was inversely related to the expression of hsa_circ_0000007. Overexpression of miR-508-3p could notably facilitate cholesteatoma cell proliferation. A luciferase reporter assay showed that miR-508-3p bound the 3'-untranslated region of its downstream mRNA, PTEN. Gain- and loss-of-function experiments with miR-508-3p were performed to identify its roles in the biological behaviors of cholesteatoma cells, including proliferation and apoptosis. Rescue assays confirmed that PTEN could reverse the effect of miR-508-3p overexpression on cell proliferation. In summary, this study indicates that the development of cholesteatoma may be regulated by the hsa_circ_0000007/miR-508-3p/PTEN/PI3K/AKT axis.
Introduction
Cholesteatoma is a benign collection of keratinized squamous epithelium within the middle ear. There are congenital and acquired middle ear cholesteatomas [1]. Congenital cholesteatoma is formed from remnants of epithelium that become trapped in the temporal bone during development [2]. Acquired cholesteatoma does not result from an embryologic phenomenon but is the result of pathologic changes that cause the uncontrolled growth of squamous keratinized epithelium in the middle ear [3]. This study focuses on acquired cholesteatoma. As a cholesteatoma grows, it can damage the temporal bone and nearby structures such as the ossicles, facial nerve, vestibule, semicircular canals and brain, causing many problems such as hearing loss, facial paralysis, dizziness and encephalopyosis.
Cholesteatoma can be a difficult disease to treat because the underlying cause of the disease, eustachian tube dysfunction, is generally not addressed. This can lead to recurrent disease. Surgical resection of cholesteatoma can also be quite challenging, and residual cholesteatoma is often present after surgery [4]. The pathogenesis of acquired cholesteatoma is not clear. The most popular theory is that keratinocytes of the middle ear become hyperproliferative.
Circular RNAs (circRNAs) are considered a class of endogenous noncoding RNA (ncRNA) [5]. CircRNA is mainly located in the cytoplasm and is highly stable compared to other ncRNAs [6]. CircRNA is abundantly expressed and evolutionarily conserved across eukaryotic organisms [7], and it plays crucial roles in many diseases, including digestive system neoplasms, cardiovascular disease and osteosarcoma [8][9][10]. It is commonly known that circRNAs regulate cell functions and cancer development by sponging microRNAs (miRNAs) [11][12][13].
MicroRNAs (miRNAs) are small endogenous RNAs that regulate gene expression post-transcriptionally. MiRNAs are short non-coding RNAs of 19~25 nucleotides that mediate gene silencing by guiding Argonaute (AGO) proteins to target sites in the 3' untranslated region (UTR) of mRNAs. AGOs constitute a large family of proteins that use single-stranded small nucleic acids as guides to complementary sequences in RNA or DNA targeted for silencing [14]. The miRNA-loaded AGO forms the targeting module of the miRNA-induced silencing complex (miRISC), which promotes translation repression and degradation of targeted mRNAs [15]. A single miRNA can target hundreds of mRNAs and influence the expression of many genes often involved in a functional interacting pathway [16].
The PTEN/PI3K/AKT pathway regulates multiple cellular functions, including cell growth, differentiation, proliferation, survival, motility, invasion and intracellular trafficking, in various diseases such as lung cancer, gastric cancer and breast cancer [17][18][19]. PTEN, a dual protein and lipid phosphatase, primarily dephosphorylates phosphatidylinositol-3,4,5-trisphosphate (PIP3), which is the product of PI3K and is able to recruit AKT to the membrane, where it is phosphorylated and activated [20]. Activated AKT may regulate multiple biological processes, including cell survival, metabolism, cell proliferation and growth, by affecting its downstream substrates [21,22].
Taken together, the current study was designed to explore the role of hsa_circ_0000007 and miR-508-3p in the development of cholesteatoma with the involvement of the PTEN/PI3K/AKT signaling pathway.
Patients and samples
The present study was performed using samples obtained from 20 randomly selected patients. All patients were surgically treated at Shengjing Hospital of China Medical University from September 1, 2020 to December 31, 2020, and all had received a pathological diagnosis of middle ear cholesteatoma. We collected and froze all samples. The patients ranged in age from 18 to 70 years, with a median age of 53.57 ± 18.67 years, and included 7 women and 13 men. At the same time, 15 samples of normal postauricular skin, or skin fragments that could not be used during otoplasty, were collected as the control group. This study was approved by the Institutional Human Ethics Committee of Shengjing Hospital of China Medical University, and prior informed consent was obtained from all patients.
Data source
The microarray data analyzed in this study were obtained from the Gene Expression Omnibus (GEO) (https://www.ncbi.nlm.nih.gov/geo/), accession number GSE102715, published on April 27, 2020. GEO is a public functional genomics data repository supporting MIAME-compliant data submissions; array- and sequence-based data are accepted, and tools are provided to help users query and download experiments and curated gene expression profiles [23,24]. The dataset GSE102715 profiled the differences in circRNA expression between 4 cholesteatoma samples (GSM2743683, GSM2743685, GSM2743687, GSM2743689) and 4 matched normal skin samples (GSM2743684, GSM2743686, GSM2743688, GSM2743690). All specimens were obtained from 2 female and 2 male patients aged 18 to 32 years who underwent unilateral middle ear cholesteatoma surgery. The post-auricular skins were taken as control samples from the same patients. GSE102715 was based on the Agilent GPL21825 platform (Arraystar Human CircRNA microarray V2). All of the data were freely available online.
Data processing and differential expression analysis
After obtaining the raw expression data, a volcano plot was created using GraphPad Prism 7.0 software. Differentially expressed gene (DEG) analysis between cholesteatoma and normal samples was performed using the online analysis tool GEO2R (www.ncbi.nlm.nih.gov/geo/geo2r/?acc=GSE102715) and NetworkAnalyst 3.0 (www.networkanalyst.ca/NetworkAnalyst/home.xhtml). The intersection between the two analyses was identified using the Venn diagram webtool (http://bioinformatics.psb.ugent.be/webtools/Venn/). The adjusted P value and |logFC| were calculated. Genes that met the cutoff criteria, adjusted P value < 0.05 and |logFC| ≥ 2.0, were considered DEGs. The heatmap for the DEGs was created using GraphPad Prism 7.0 software.
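For orientation, the cutoff filtering and intersection step described above can be expressed in a few lines of Python; the file names and column labels below follow common GEO2R export conventions but are assumptions, not the exact outputs used in this study.

```python
# A minimal sketch of the DEG selection described above; file and column
# names are hypothetical stand-ins for the GEO2R/NetworkAnalyst exports.
import pandas as pd

geo2r = pd.read_table("GSE102715_geo2r.tsv")          # hypothetical export
na = pd.read_table("GSE102715_networkanalyst.tsv")    # hypothetical export

def degs(df, p_col="adj.P.Val", fc_col="logFC"):
    """Apply the paper's cutoffs: adjusted P < 0.05 and |logFC| >= 2.0."""
    hits = df[(df[p_col] < 0.05) & (df[fc_col].abs() >= 2.0)]
    return set(hits["ID"])

# The set intersection corresponds to the overlap shown in the Venn diagram
shared = degs(geo2r) & degs(na)
print(len(shared), "circRNAs pass both analyses")
```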
Functional enrichment analysis
GO analysis is a commonly used method for large-scale functional enrichment research; it covers three categories, namely biological process (BP), cellular component (CC) and molecular function (MF), which were used to predict protein functions [25]. Pathway functional analysis was performed against the Kyoto Encyclopedia of Genes and Genomes (KEGG) database [26]. GO annotation analysis and KEGG pathway enrichment analysis of the DEGs were performed using the clusterProfiler R package, together with limma (http://bioconductor.org/packages/release/bioc/html/limma.html), both available on Bioconductor. P < 0.05 and gene count ≥ 2 were considered statistically significant.
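Over-representation tests of this kind reduce to a per-term hypergeometric test; the sketch below (Python with scipy) illustrates the underlying calculation. It is not the clusterProfiler implementation, and the example numbers are hypothetical.

```python
from scipy.stats import hypergeom

def enrichment_p(n_background, n_term, n_degs, n_overlap):
    """P(X >= n_overlap) when n_degs genes are drawn from n_background
    genes, of which n_term are annotated to the term of interest."""
    # sf(k - 1) gives the upper-tail probability P(X >= k).
    return hypergeom.sf(n_overlap - 1, n_background, n_term, n_degs)

# Hypothetical example: 20,000 background genes, 150 genes in a GO term,
# 300 DEGs, 12 of which are annotated to the term.
print(f"p = {enrichment_p(20000, 150, 300, 12):.3e}")
```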
Cell culture and transfection
After extraction of cholesteatoma tissues from the middle ear, the tissues were washed 3 times with pre-cooled phosphate-buffered saline (PBS) and cut into approximately 1 mm³ blocks using heat-sterilized surgical scissors. The blocks were digested with 0.25% pancreatin at 37°C for 3 h; after centrifugation, digestion was terminated by adding culture medium. The suspension was filtered through a 200-mesh cell sieve, the filtrate was centrifuged at 1500 rpm for 10 min, the supernatant was discarded, and complete medium was added to re-suspend the cell pellet for subsequent experiments. Passage cells were re-seeded in a 6-well plate. After cell confluence reached about 80%, miR-508-3p NC, miR-508-3p mimic and miR-508-3p inhibitor (GenePharma, Shanghai, China) were transfected into middle ear cholesteatoma cells according to the instructions of the Lipofectamine™ 2000 (Invitrogen, Carlsbad, CA, USA) transfection reagent, and the cells were cultured in a CO₂ incubator at 37°C. After 48 hours, ELISA, EdU staining and TUNEL staining were performed, and the transfection efficiency was evaluated using RT-qPCR. The human immortalized keratinocyte line (HaCaT) was obtained from Shanghai Zhong Qiao Xin Zhou Biotechnology Co., Ltd. Cells were cultured in high-glucose Dulbecco's Modified Eagle Medium (DMEM) (HyClone, Thermo Fisher, Shanghai, China) with 10% fetal bovine serum (FBS) (Corning, Thermo Fisher, Waltham, MA). HaCaT cells were grown under sterile, humidified conditions at 37°C and 5% CO₂.
RT-qPCR
Total RNA was isolated using TRIzol reagent (Takara, Otsu, Japan). The PrimeScript RT Reagent Kit (HaoranBio, Xuhui, Shanghai, China) and the TaqMan™ Advanced miRNA cDNA Synthesis Kit (Waltham, MA, USA) were then used to synthesize complementary DNA. Subsequently, SYBR Green Master Mix (Takara, Dalian, China) was used to conduct RT-qPCR on an ABI 7500 System (Applied Biosystems, Carlsbad, California). GAPDH and U6 served as internal controls. Relative expression of RNAs was calculated using the 2^-ΔΔCt method.
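For clarity, the relative-expression calculation can be written out as follows (Python); the Ct values are hypothetical, and the internal control corresponds to GAPDH or U6, as in the study.

```python
def fold_change_ddct(ct_target_sample, ct_ref_sample,
                     ct_target_control, ct_ref_control):
    """Relative expression by the 2^-ddCt method."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize sample
    d_ct_control = ct_target_control - ct_ref_control   # normalize control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for miR-508-3p with U6 as the internal control;
# a fold change > 1 indicates upregulation relative to the control group.
print(fold_change_ddct(24.1, 18.0, 26.5, 18.2))
```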
CCK-8
The Cell Counting Kit-8 (CCK-8) assay was performed using CCK-8 reagent (Beyotime Institute of Biotechnology, Shanghai, China) in accordance with the manufacturer's instructions. Transfected cells (1 × 10³) were seeded into 96-well plates and cultured for 0, 24, 48, 72 and 96 h. CCK-8 reagent was then added to each well, and after 4 h of incubation, the optical density at 450 nm was measured using a microplate reader to assess cell proliferation at each time point. The assay was repeated 3 times.
ELISA
The ELISA kit was removed from a low-temperature refrigerator and allowed to reach room temperature. Standards and diluents were added to blank wells according to the kit instructions, and standards at different concentrations were added to the remaining wells to construct standard curves. Diluted enzyme conjugate was then added and incubated at 37°C for 30 min, the wells were washed 5 times, and 100 μL of chromogenic substrate was added and incubated in the dark for 15 min. Finally, stop solution was added dropwise, and absorbance was measured with a microplate reader.
5-ethynyl-2'-deoxyuridine (EdU) staining
Each well was treated with 100 μL of permeabilization solution for 15 min and washed with PBS, then 100 μL of EdU staining solution diluted in culture medium was added and incubated for 30 min. The culture medium was discarded and the cells were washed with PBS 3 times, 5 min each time. Staining was observed under a fluorescence microscope.
TUNEL staining
Cells in each group were washed with PBS, fixed with 4% paraformaldehyde for 30 min, permeabilized with 0.3% Triton X-100 solution for 15 min, incubated with 50 μL of TUNEL reaction solution per well at 37°C in the dark for 60 min, washed with PBS, and mounted with anti-fluorescence quencher; TUNEL staining was then observed under a microscope.
Western blot
Cells from each group were collected and lysed with RIPA lysis buffer, and protein concentrations were determined by the Bradford method. Proteins were separated by 10% SDS-PAGE at constant voltage, transferred to a PVDF membrane, and blocked with 5% skimmed milk powder for 1 h at room temperature. The membrane was then incubated with primary antibodies against PTEN (1:1000), PI3K (1:1000) and p-Akt (1:1000) overnight at 4°C, followed by an HRP-labeled secondary antibody (1:2000) for 2 h. The bands were developed using the DAB color development method, and the absorbance of each band was analyzed with ImageJ software.
Luciferase reporter assay
Reporter plasmids were obtained by inserting the PTEN 3'-UTR sequence into the pmirGLO vector (Promega, Madison, WI, USA). For the luciferase assay, miR-508-3p mimics and reporter plasmids were co-transfected into 293T cells using Lipofectamine 2000. After culturing for 48 h, firefly and Renilla luciferase activities were measured using the Dual Luciferase Reporter Assay System (Promega, Sunnyvale, CA, USA) according to the manufacturer's instructions.
Statistical analysis
Statistical analysis was performed using SPSS 20.0, and the data were visualized using GraphPad Prism 7. Data are presented as mean ± standard deviation (SD). One-way ANOVA or Student's t-test was used for comparisons among groups, and Pearson analysis was used to assess the correlation between hsa_circ_0000007 and miR-508-3p in tissue samples. Each experiment in this study was performed in triplicate. P < 0.05 was considered statistically significant.
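The group comparison and correlation steps correspond to standard tests; the following sketch (Python with scipy, using simulated values in place of the real measurements) illustrates the equivalent of the SPSS workflow.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Simulated stand-ins for relative expression in the two tissue groups.
circ_chol = rng.normal(0.4, 0.1, 20)   # hsa_circ_0000007, cholesteatoma
circ_ctrl = rng.normal(1.0, 0.1, 15)   # hsa_circ_0000007, normal skin
mir_chol = rng.normal(2.1, 0.3, 20)    # miR-508-3p, cholesteatoma

# Student's t-test between the two groups.
t_stat, p_val = stats.ttest_ind(circ_chol, circ_ctrl)
print(f"t = {t_stat:.2f}, p = {p_val:.4g}")

# Pearson correlation between hsa_circ_0000007 and miR-508-3p levels.
r, p_r = stats.pearsonr(circ_chol, mir_chol)
print(f"r = {r:.2f}, p = {p_r:.4g}")
```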
Identification of different expression circRNA
We downloaded the microarray expression dataset GSE102715 from the Gene Expression Omnibus (GEO) and analyzed the differentially expressed circRNAs between cholesteatoma and normal skin using the online analysis tools GEO2R and NetworkAnalyst 3.0. In total, there were 13,247 raw circRNAs in dataset GSE102715. Based on the criterion of |logFC| ≥ 2, GEO2R identified 16 upregulated and 283 downregulated circRNAs, shown in the volcano plot (Fig. 1A). Based on the criteria of adjusted P value < 0.05 and |logFC| ≥ 2, we obtained the top 49 differentially expressed circRNAs, shown in the heatmap (Fig. 1B). NetworkAnalyst 3.0 identified 21 dysregulated circRNAs based on the criteria of P < 0.05 and |logFC| ≥ 2. Subsequently, Venn analysis was performed to obtain the intersection of the dysregulated circRNAs between the GEO2R and NetworkAnalyst 3.0 results (Fig. 1C). As shown in the Venn diagram, there were 18 common candidate dysregulated circRNAs.
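The Venn step is a plain set intersection; a minimal sketch (Python) with hypothetical identifier lists:

```python
# Hypothetical dysregulated-circRNA lists from the two tools.
geo2r_hits = {"hsa_circ_0000007", "hsa_circ_0000271", "hsa_circ_0000979"}
networkanalyst_hits = {"hsa_circ_0000007", "hsa_circ_0000271",
                       "hsa_circ_0000920"}

# Common candidates = the overlap region of the Venn diagram.
common = sorted(geo2r_hits & networkanalyst_hits)
print(common)
```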
Identification of genes of interest
We selected five candidate circRNAs (hsa_circ_0000007, hsa_circ_0000271, hsa_circ_0000979, hsa_circ_0001485, hsa_circ_0000920) that were markedly downregulated in cholesteatoma for further study. RT-qPCR showed that hsa_circ_0000007 expression was the most significantly downregulated of the five circRNAs in both tissue and cells (Fig. 3A and B).
Prediction and verification of potential target microRNA and mRNA
We next explored the molecular mechanism of hsa_circ_0000007 in cholesteatoma. The online prediction tool Circular RNA Interactome was used to predict miRNAs that could potentially bind hsa_circ_0000007. The prediction returned four miRNAs: miR-492, miR-508-3p, miR-665 and miR-876-3p. We chose miR-508-3p because its context+ score was the lowest of the four (Fig. 4A). RT-qPCR was applied to examine the expression of miR-508-3p in both cholesteatoma tissue and cells (Fig. 4B and C). The results demonstrated that miR-508-3p expression was notably higher in cholesteatoma tissue and cells than in normal skin and HaCaT cells (p<0.001). Moreover, miR-508-3p expression was negatively correlated with hsa_circ_0000007 expression (Fig. 4D) (p<0.001). Next, the bioinformatics analysis tool TargetScan (http://www.targetscan.org/) indicated that PTEN was a potential miR-508-3p target mRNA, and PTEN was also found to have a binding site for miR-508-3p through a search of starBase (Fig. 4E). To confirm that PTEN is a miR-508-3p target, we cloned wild-type and mutant PTEN sequences to construct reporter plasmids and mutant vectors, respectively. Co-transfection of the wild-type reporter plasmid with miR-508-3p mimic visibly suppressed luciferase activity, whereas co-transfection of the mutated PTEN vector with miR-508-3p mimic had no significant effect on luciferase activity. These results proved that miR-508-3p directly targets PTEN (Fig. 4F).

MiR-508-3p promotes proliferation and inhibits apoptosis in cholesteatoma cells

The RT-qPCR results are shown in Fig. 5A. Compared with the miR-508-3p NC group, the miR-508-3p level in middle ear cholesteatoma cells was markedly increased in the miR-508-3p mimic group (p<0.05) and notably decreased in the miR-508-3p inhibitor group (p<0.01). The ELISA results are shown in Fig. 5B and 5C. Compared with the miR-508-3p NC group, the Bax level in middle ear cholesteatoma cells was decreased in the miR-508-3p mimic group (p<0.05) and increased in the miR-508-3p inhibitor group (p<0.01), while the Bcl-2 level was elevated in the miR-508-3p mimic group (p<0.01) and reduced in the miR-508-3p inhibitor group (p<0.01). Fig. 5D presents the results of EdU staining: compared with the miR-508-3p NC group, the proliferation rate of middle ear cholesteatoma cells was increased in the miR-508-3p mimic group (p<0.05) and decreased in the miR-508-3p inhibitor group (p<0.05), as quantified in Fig. 5E. Fig. 5F shows the TUNEL staining results: compared with the miR-508-3p NC group, the apoptosis rate of middle ear cholesteatoma cells was lowered in the miR-508-3p mimic group (p<0.05) and elevated in the miR-508-3p inhibitor group (p<0.01), as quantified in Fig. 5G. The CCK-8 assay results suggested that overexpression of miR-508-3p significantly promoted cholesteatoma cell proliferation 96 h after transfection, while knockdown of miR-508-3p reduced proliferation (all p<0.05) (Fig. 5H). In conclusion, miR-508-3p promotes proliferation and inhibits apoptosis in cholesteatoma cells.
MiR-508-3p facilitates cell proliferation and inhibits apoptosis in cholesteatoma cells through the PTEN/PI3K/Akt signaling pathway
The RT-qPCR results are shown in Fig. 6A. Compared with the miR-508-3p NC group, the PTEN level in middle ear cholesteatoma cells was notably decreased in the miR-508-3p mimic group (p<0.01) and obviously increased in the miR-508-3p inhibitor group (p<0.05). The Western blotting results are shown in Fig. 6B and 6C. Compared with the miR-508-3p NC group, the level of PTEN protein in middle ear cholesteatoma cells was decreased in the miR-508-3p mimic group and increased in the miR-508-3p inhibitor group (p<0.05), while the levels of PI3K and p-Akt proteins were raised in the miR-508-3p mimic group (p<0.01, p<0.001) and lowered in the miR-508-3p inhibitor group (p<0.001, p<0.001). Fig. 6D and 6E show that overexpression of miR-508-3p visibly facilitated cholesteatoma cell proliferation after transfection, whereas co-transfection with miR-508-3p mimic and oe-PTEN led to significantly reduced cell proliferation compared with miR-508-3p mimic transfection alone (p<0.05). These rescue assays demonstrated that PTEN overexpression can reverse the pro-proliferative effect of miR-508-3p upregulation.
Discussion
In recent years, with the gradual maturation of biochip and sequencing technology, a variety of biological databases have been able to provide more reliable data for researchers [23,27]. Accordingly, non-coding RNAs (ncRNAs) have been increasingly studied in various diseases [28]. Studies have shown that ncRNAs play important roles in various biological processes [29,30]. The term ncRNA is commonly used for RNA that does not encode a protein but can regulate transcription and translation; ncRNAs include miRNAs, circRNAs and others [31].
MicroRNAs (miRNAs) are widely distributed, small regulatory RNA genes that mediate both messenger RNA (mRNA) degradation and suppression of protein translation based on sequence complementarity between the miRNA and its targeted mRNA [32]. MiRNAs are involved in human health and disease as endogenous suppressors of the translation of coding genes, and specific cognate mRNA targets are the key to miRNA-mediated regulation [33]. MiRNAs have been reported and studied in various diseases: for instance, microRNAs are potential blood-based epigenetic biomarkers for Alzheimer's disease [34], and epigenetic abnormalities in meningiomas include abnormal microRNA expression [35]. Many studies have found that miRNAs are closely related to proliferation and apoptosis. For example, Xia MM et al. [36] summarized many microRNAs that can regulate Sertoli cell proliferation and adhesion, and Zhu ZJ et al. [37] found that overexpression of microRNA-181a (miR-181a) promoted the proliferation and inhibited the apoptosis of osteosarcoma cells. Circular RNAs (circRNAs), a novel class of long non-coding RNAs, are characterized by a covalently closed continuous loop without 5′ or 3′ polarities and have been widely found in thousands of species, including plants, animals and humans. Utilizing high-throughput RNA sequencing (RNA-seq) technology, recent findings have indicated that a great number of circRNAs exhibit cell-type-specific, tissue-specific or developmental-stage-specific expression. Evidence is emerging that some circRNAs may regulate microRNA (miRNA) function by acting as miRNA sponges and play a significant role in transcriptional control. CircRNAs associate with related miRNAs, and circRNA-miRNA axes are involved in a series of disease processes such as apoptosis, vascularization, invasion and metastasis [38]. The aberrant expression of circRNAs has been reported in many human diseases, including gastric cancer [39], colorectal cancer [40], papillary thyroid cancer [41] and lung adenocarcinoma [42]. The function of circRNA-miRNA-mRNA axes is increasingly studied in human diseases but has not been reported in middle ear cholesteatoma. In this study, we therefore investigated the hsa_circ_0000007-miR-508-3p-PTEN axis.
Cholesteatoma is a noncancerous cystic lesion derived from an abnormal growth of keratinizing squamous epithelium in the temporal bone [43]. Although not malignant, cholesteatoma can destroy the temporal bone and nearby structures such as the ossicles, facial nerve, vestibule, semicircular canals and brain, causing problems including hearing loss, facial paralysis, dizziness and encephalopyosis; it is therefore a serious disease in otolaryngology. Most of the mechanisms that have been proposed to explain the pathogenesis of acquired cholesteatoma fall into four categories: (1) invagination theory (retraction pocket theory), (2) the theory of epithelial invasion or migration (immigration theory), (3) the theory of squamous metaplasia, and (4) basal cell hyperplasia theory (papillary ingrowth theory) [44]. Ultimately, the accumulation of epithelial keratinocytes with over-proliferation and inhibited apoptosis in a deepening retraction pocket leads to the formation of cholesteatoma.
With increasing research on ncRNAs in recent years, many RNA microarray databases [45] have become freely available. In this study, we mined GSE102715 in the GEO database. From the raw data, we identified hsa_circ_0000007 using two analysis tools (GEO2R and NetworkAnalyst 3.0). We then examined hsa_circ_0000007 by RT-qPCR and found that its expression was significantly lower in cholesteatoma than in normal skin. As mentioned above, circRNAs may regulate miRNAs by acting as miRNA sponges. Therefore, using biological prediction software, we identified miR-508-3p as the targeted miRNA downstream of hsa_circ_0000007. Subsequently, we observed by RT-qPCR that the expression of miR-508-3p in cholesteatoma was significantly higher than that in normal skin, with a statistically significant negative correlation with hsa_circ_0000007. Our experiments also demonstrated that changes in miR-508-3p expression could affect the proliferation and apoptosis phenotypes of cholesteatoma cells. In this study, cells with miR-508-3p NC, high miR-508-3p expression and low miR-508-3p expression were successfully obtained, and the levels of Bax and Bcl-2 in each group were determined by ELISA. Bax and Bcl-2 are pro-apoptotic and anti-apoptotic proteins, respectively, which play important roles in apoptosis [46]. The experimental results revealed that, compared with the miR-508-3p NC group, the miR-508-3p inhibitor markedly elevated the level of the pro-apoptotic factor Bax and lowered the level of the anti-apoptotic factor Bcl-2; conversely, the miR-508-3p mimic raised the level of Bcl-2 and lowered the level of Bax. EdU and TUNEL staining were then used to detect the effect of miR-508-3p on the proliferation and apoptosis of cholesteatoma cells, respectively. The results suggest that the miR-508-3p mimic can significantly promote the proliferation of middle ear cholesteatoma cells and suppress their apoptosis.
MiR-508-3p has also been reported in other diseases, where its biological functions are related to proliferation, apoptosis and invasion. For instance, Lin C et al. [47] found that overexpressing miR-508 promotes, while silencing miR-508 impairs, the aggressive phenotype of oesophageal squamous cell carcinoma (ESCC) both in vitro and in vivo. Another study demonstrated the functional role of miR-508-3p in promoting the proliferation, invasion and migration of ESCC cells and identified a PCAT-1/miR-508-3p/ANXA10 axis mediating this promoting role, suggesting miR-508-3p as a potential therapeutic target in ESCC [48]. However, its mechanism of action in cholesteatoma had not been clarified.
We found a targeting relationship between miR-508-3p and PTEN through TargetScan database analysis, and CLIP-seq experiments have also verified the targeted regulatory relationship between PTEN and miR-508-3p [49,50]. In this study, we verified the targeting relationship between miR-508-3p and PTEN by luciferase reporter assay. In addition, the expression of PTEN decreased significantly in the miR-508-3p mimic group and increased significantly in the miR-508-3p inhibitor group, further confirming the targeted regulation of PTEN by miR-508-3p at the expression level. Moreover, the rescue experiment proved that PTEN could reverse the proliferation trend of cells in the miR-508-3p mimic group. Therefore, we infer that miR-508-3p affects the phenotype of cholesteatoma through PTEN.
PTEN (phosphatase and tensin homolog deleted on chromosome 10, also named MMAC1/TEP1) was discovered in 1997 independently by three laboratories as a tumor suppressor whose expression is often lost in tumors [51]. Later studies established that PTEN is a negative regulator of a major cell growth and survival signaling pathway, namely the phosphatidylinositol-3-kinase (PI3K)/AKT signaling pathway [52]. In the PI3K/Akt signal transduction pathway, phosphatidylinositol-4,5-bisphosphate 3-kinase (PI3K) is activated and leads to phosphorylation of protein kinase B (Akt) with the help of phosphoinositide-dependent kinase [53]. Activated Akt may regulate multiple biological processes, including cell survival, metabolism, cell proliferation and growth, by affecting its downstream substrates [21]. To investigate the regulatory mechanism of miR-508-3p in cholesteatoma, the protein expression levels of PTEN, PI3K and p-Akt in cholesteatoma cells were detected by Western blotting. As presented in Fig. 6, compared with the miR-508-3p NC group, the miR-508-3p inhibitor enhanced the PTEN protein level and impeded PI3K and p-Akt protein expression; conversely, the miR-508-3p mimic decreased the expression of PTEN and increased the expression of PI3K and p-Akt.
In summary, we conclude that miR-508-3p plays a key role in the formation of cholesteatoma by regulating the PTEN/PI3K/Akt signaling pathway, and that the overexpression of miR-508-3p in cholesteatoma is probably mediated by the downregulation of its upstream regulator hsa_circ_0000007.
Acknowledgements
We thank the researchers who generated and shared the GEO datasets.
Funding
This work was supported by the Education Funding Project of Liaoning Province (grant no. ZF2019015).
Author Contributions
Xiulan Ma designed the study and critically reviewed and improved the manuscript; Dongliang Liu collected the samples, performed the molecular analyses and wrote the manuscript; all authors read and approved the final version.
Ethics approval and consent to participate
The Ethics Committee of the Shengjing Hospital of China Medical University approved this study (2018PS268K). All patients involved in the study provided written informed consent.
Gender Differences in E-Learning Success in a Developing Country Context: A Multi-Group Analysis
The purpose of this study was to evaluate gender differences in the variables that influence the adoption of e-learning in an academic environment. Literature on gender differences in e-learning adoption and usage is very limited and hazy, hence the need for this research study. The study was based on the hypothesis that factors such as system quality, information quality, and service quality influence the behavioural intention to use an e-learning platform (Moodle LMS), which in turn influences actual Moodle usage. The study made use of the SmartPLS application, and the Structural Equation Modeling (SEM) technique was used to analyse the interactions between the components of the proposed model based on 540 responses. Both males' and females' LMS usage intentions were shown to be significantly influenced by system quality and service quality. In addition, information quality showed a statistically significant influence on males' LMS use intentions while having no effect on the LMS use intentions of females. This study addresses the dearth of research on gender differences in the adoption of e-learning in developing nations that have placed a strong emphasis on the use of e-learning technologies. E-learning adoption theory is bolstered by this study, which empirically confirms that the DeLone and McLean model is applicable in a new setting.
I. INTRODUCTION
E-learning has emerged as a dominant force in modern society (Jia et al., 2011). With the emergence of Covid-19, e-learning and telecommuting have become part of the normal activities of the modern employee in developed countries and several developing ones. In a typical e-learning classroom or setting, lecturers and students in an online environment encounter challenges distinct from those of a conventional classroom (Tirziu & Vrabie, 2015). To maintain learning continuity and a strong teacher-student bond, it is necessary to use all available means of communication. For instance, on an online portal or a Learning Management System (LMS) such as Moodle, which is used by the University of Education, Winneba (UEW), lecturers can upload face-to-face resources to the online learning portal and user interface for students. Many of these LMSs are equipped with features that allow asynchronous and synchronous student-teacher interactions in a variety of formats, enabling the use of several pedagogical approaches. There are successful strategies for delivering face-to-face content in an online setting that increase students' learning, satisfaction, and cognitive comprehension of course material. In some developing countries, the emergence of the Covid-19 pandemic prompted several educational institutions (especially tertiary institutions), corporate bodies, and governmental institutions to adopt e-learning systems to ensure the continued education of students, the continuation of the academic calendar, and increased employee productivity. A study by Baki et al. (2018) found that e-learning systems will not reach their full potential if they are not properly employed by end-users; the authors further note that e-learning systems are a modern technological innovation whose acceptance and usage by businesses and individuals are increasing. They therefore advise that considerable research be conducted to provide significant insight into the many elements impacting user adoption of e-learning systems, which justifies the need for the present study.
II. GENDER DIFFERENCES IN E-LEARNING ACCEPTANCE
The issue of gender difference has attracted growing attention in the study and practice of education-related fields. Although gender inequalities have been the subject of a number of studies, very little attention has been paid to the influence these variations have on the implementation of e-learning in higher education. According to Yoo et al. (2015), the distinguishing feature of e-learning is the combination of human-human contact (between students, other students, and instructors) and human-machine interaction (between students and e-learning software). The use of e-learning tools, for example, makes the learning process more comfortable. When using various types of technology, men tend to approach these technologies differently from their female counterparts, an assumption supported by several research findings (e.g., Adamus et al., 2009; Kayany & Yelsma, 2000). Women have shown, over time, limited degrees of proficiency while utilizing computers; they tend to use computers for activities associated with communication and subscribe to several social media platforms rather than other computer programs. On the other hand, men are more likely than women to use virtual media, and the concepts of computers, wireless technology, and the internet are closely associated with men. The results of the many studies conducted on gender differences in the adoption of e-learning are varied. According to research conducted by Cuadrado-García et al. (2010), there are distinguishing characteristics in how male and female students utilize e-learning systems, and the degrees of motivation and pleasure that male and female students experience throughout their academic careers are also distinct. According to Azhar and Abd (2020), gender has a direct effect on the behavioural intention of users of e-learning systems. Kaushik and Agrawal (2020) conducted a study to determine the elements that either encourage or discourage students from making use of online learning platforms. They concluded that the proliferation of e-learning platforms fills students with a sense of hope and inventiveness, although students felt uneasy or awkward when utilizing newly implemented e-learning platforms with which they were unfamiliar; no discernible differences among the various demographic groups were found. Their study was carried out on Indian students, so comparative research is necessary to determine the influence of cultural biases on students' readiness for digital learning. Studies have shown that a confidence gap affects whether women will adopt and use new technology or choose a career in information and communications technology (Michie & Nelson, 2006; Wood & Li, 2005). On the other hand, several studies have found that an increasing number of men and women are exposed to and use computers and computer applications in their work and personal lives (Rainer et al., 2003), which has reduced gender gaps in the adoption and usage of Information and Communications Technology (ICT) applications and systems. Given these conflicting findings on the role of gender in technology adoption and usage, Li et al. (2008) suggest that future studies be carried out to clarify the role that gender plays in the adoption and use of new technology. Within the context of developing nations, gender differences in the adoption of e-learning have received very little research attention, and the academic literature has called for more studies from the viewpoint of developing countries. This research investigates the factors responsible for students' acceptance of e-learning at a Ghanaian university to close the knowledge gap that currently exists on gender differences in adoption.
III. THE D&M ISS MODEL
The DeLone and McLean Information System Success model (D&M ISS model) is a comprehensive framework for measuring an information system's success (DeLone & McLean, 2002). The model has six major antecedents: system quality, information quality, use, user satisfaction, individual impact, and organizational impact (DeLone & McLean, 2002). System and information quality both have a major impact on information system use and user satisfaction; use and user satisfaction influence one another, and both affect individual impact, which in turn affects organizational impact. DeLone and McLean (2003) modified the original model in response to the benefits and drawbacks highlighted by other researchers: the individual and organizational impact variables were combined into Net Benefits, and the variable Service Quality was added. In the updated model, use and user satisfaction are preceded by system quality, information quality, and service quality (Yakubu & Dasuki, 2018). The quality antecedents influence use and user satisfaction; furthermore, the amount of use may influence the degree of user satisfaction, and vice versa. Net Benefits are mostly influenced by use and user satisfaction (DeLone & McLean, 2002). Several researchers have applied the model in various contexts. For instance, Yakubu and Dasuki (2018) used the approach to evaluate e-learning adoption among Nigerian university students. Hsu et al. (2014) introduced a trust variable to investigate e-commerce adoption. Jagannathan et al. (2018) investigated online banking acceptability using the model, and Wang and Liao (2008) verified the model by analyzing e-government systems. Opoku et al. (2020) and Mohammadi (2015) explored e-learning by combining TAM and D&M ISS model components. Freeze et al. (2010) studied the IS Success Model in an e-learning context by including system success in their analysis of students' perceptions, and Lee-Post (2009) evaluated the model from an information systems approach. Student satisfaction with the University of Dar es Salaam's e-learning system was evaluated by Mtebe and Raphael (2018) by incorporating teacher quality and perceived usefulness as essential components. A conceptual model derived from the revised D&M ISS model was used by Ajoye and Nwagwu (2014) to study the impact of the quality antecedents on user satisfaction with the University of Ibadan postgraduate school site. The D&M ISS model was adopted in light of these previous evaluations, as numerous e-learning studies have demonstrated its effectiveness (e.g., Opoku et al., 2020; Subaeki et al., 2019; Cui et al., 2019; Yakubu & Dasuki, 2018; Seta et al., 2018; Lin, 2017; Mohammadi, 2015).
IV. RESEARCH FRAMEWORK
To explain the gender difference in e-learning success among Ghanaian university students, this research made use of an updated version of the Information System Success Model, the D&M ISS model. The model was adjusted to fit the students' e-learning environment at the University of Education, Winneba, in Ghana. Using the modified D&M ISS model, we examined how students' actual usage of the Moodle LMS was influenced by the quality antecedents: information quality, system quality, and service quality. As shown in Figure 1, the model implies that improving system, information, and service quality will have a favourable impact on behavioural intention, which in turn will improve actual usage.
A. System Quality
It is generally agreed that quality is one of the essential features of any information system. System quality covers features such as responsiveness, an easy-to-use or usable interface, reliable operation, and flexibility of the system (Yakubu & Dasuki, 2018). Because there are certain differences between how male and female students use e-learning, these features may have some bearing on gender; moreover, how women and men perceive a system may not be the same. DeLone and McLean (2003) related system quality to the level of correctness of the information generated by a system, and a system's inaccuracies may decrease the satisfaction it delivers to its users (Seddon & Kiew, 1996; Mtebe & Raphael, 2018). E-learning research has shown that the quality of a system has a significant and positive influence on a person's motivation to use e-learning (Cheng, 2012; Li et al., 2012). Teachers are more inclined to use a system if it is easy to use, which might positively affect their utilization; this may not be the case if the system is difficult to use. Ease of use can also influence how users perceive a system's efficacy and whether it helps them perform better at their jobs (Cheng et al., 2012).
Therefore, we hypothesize that:
H1m: System quality will have a positive influence on males' behavioural intention to use Moodle LMS.
H1f: System quality will have a positive influence on females' behavioural intention to use Moodle LMS.
B. Information Quality
The term "information quality" refers to the essential characteristics of e-learning output (Petter et al., 2008). This relates to how effective the e-learning material is for the user. However, the system must be timely, simple to understand, accessible, relevant, comprehensive, and secure. These features impact the acceptance of a system. If a system contains irrelevant information and is difficult to comprehend, it will not be adopted. Prior research has shown that information quality positively affects e-learning adoption intent Cheng, 2012). Women are seen to devote more time to a system if they comprehend its operation. They often accept a system and use it as a productivity tool, while males employ it for amusement (Narasimhamurthy, 2014). If a system generates excellent information, its adoption rate will grow. Following this, the study hypothesizes that; H2m: Information quality will have a positive influence on males' behavioural intention to use Moodle LMS H2f: Information quality will have a positive influence on females' behavioural intention to use Moodle LMS C. Service Quality According to Delone and McLean (2003), service quality is "the quality of support services that users get from the IT department". To put it another way, service quality in the context of online learning relates to the critical support offered to online learning users. There are a variety of services available, including network support, help with system upgrades, and hardware support (Yakubu & Dasuki, 2018). Service quality, as defined by Petter et al. (2008), is the degree to which information systems (IS) assist and support e-learning system users. It's also worth checking to see whether the system has any faults. Studies have shown that service quality has a significant impact on e-learning behavioural intention (Ramayah et al., 2010;Hassanzadeh et al., 2012;Li et al., 2012;Mtebe & Raphael, 2018). According to the results of this research, it can be concluded that women place a larger emphasis on quality than men, as they spend more time analyzing every aspect of the products and services they buy. Therefore, we hypothesize that; H3m: Service quality will have a positive influence on males' behavioural intention to use Moodle LMS H3f: Service quality will have a positive influence on females' behavioural intention to use Moodle LMS
D. Behavioural Intention to Use Moodle LMS and Actual Usage
Individual behavioural intention is an important indicator of whether a person will utilize a certain technological system (Schierz et al., 2010). The updated D&M IS success model addresses the components (factors) impacting the willingness to utilize an e-learning system. Several studies within the context of e-learning have shown a correlation between intention to use and actual utilization (Alkhalaf et al., 2012; Chow et al., 2012; Hassanzadeh et al., 2012). The intention dimension covers intention and anticipated use of e-learning, while actual usage or system utilization monitors a user's behaviour while employing a particular technology, such as an e-learning system. According to Venkatesh et al. (2003), a person's behavioural intention determines his or her actual behaviour or system use, and the modified ISS model of DeLone and McLean (1992) likewise posits that behavioural intention results in real use. This was validated by Mohammadi's (2015) study of e-learning systems, and other investigations, such as those by Chong et al. (2012) and Suki (2011), have confirmed it. Therefore, a user's intention to use a system is likely to impact his or her actual usage. Accordingly, we hypothesize that:
H4m: Males' behavioural intention to use Moodle LMS will have a positive influence on actual usage.
H4f: Females' behavioural intention to use Moodle LMS will have a positive influence on actual usage.

V. METHODOLOGY AND DATA ANALYSIS
The data were analyzed using SmartPLS, a widely used statistical program for Structural Equation Modeling (SEM). SEM allows observational data to be used to uncover constructs and their relationships. The model's applicability was first examined by assessing the data's normality, internal consistency, and convergent and discriminant validity; the structural model's predictive power, accuracy, and links between the components were then explored. A total of 552 university students took part in the study. Moodle is the e-learning platform used by University of Education teachers and students; the university has used the Moodle learning management system (LMS) for some years to support teaching and learning, making it important to explore gender differences in its use. Of the 552 questionnaires distributed to undergraduate students, 540 valid responses were obtained. Males made up more than half of the sample (59.1%), with females accounting for the remaining 40.9%.
A. Normality and Collinearity Test
The data were subjected to a normality test to assess whether they merited further investigation. The skewness-kurtosis method was used to check whether the data were normally distributed (Byrne, 2013), and the results fell within the expected ranges: as indicated in Table I, all skewness values were between -2 and +2 and all kurtosis values between -7 and +7, consistent with normality of the data (Byrne, 2013). However, when data are collected from several sources, there may be substantial correlation between items or indicators, leading to multicollinearity. To rule this out, Variance Inflation Factor (VIF) values must be less than 5 (VIF < 5) (Kim, 2019). In this study, all VIF values were less than 5, suggesting that multicollinearity was not a problem.
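These screening checks follow standard formulas; the sketch below (Python with pandas and statsmodels) shows a generic version, not the authors' pipeline, with a simulated item table standing in for the questionnaire data.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

# Simulated stand-in: 540 respondents, five Likert-type items.
rng = np.random.default_rng(1)
items = pd.DataFrame(rng.normal(4.0, 1.0, size=(540, 5)),
                     columns=[f"SYQ{i}" for i in range(1, 6)])

# Normality screening: skewness within [-2, +2], kurtosis within [-7, +7].
print(items.skew())
print(items.kurtosis())

# Collinearity screening: each item's VIF should stay below 5.
X = items.to_numpy()
for i, name in enumerate(items.columns):
    print(name, round(variance_inflation_factor(X, i), 2))
```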
B. Evaluation of the Measurement Model
Confirmatory Factor Analysis was employed in this work to confirm the adequacy of the measurement model. Factor loadings, Cronbach's alpha, composite reliability (CR), average variance extracted (AVE), and discriminant validity were estimated (Henseler et al., 2015). Three items (SYQ1, SEQ1, and ACTU4) were eliminated from the LMS usage constructs owing to factor loadings below the recommended value of 0.70 (Gefen & Straub, 2005), and the model was then re-evaluated (Fig. 2). Internal consistency was measured using Cronbach's alpha, with values greater than 0.5 considered acceptable, as indicated by the studies of Hair et al. (2010), Hu and Bentler (1998), and Hasan and Boa (2020). Furthermore, the composite reliability of the constructs was above the necessary threshold of 0.70, showing a high level of construct reliability (Fornell & Larcker, 1981). Convergent validity, which assumes that each item measures what it was intended to measure, was assessed using the AVE; the AVE criterion requires values above 0.50, indicating that measurement error is smaller than the variance explained by the construct. Overall, convergent validity was good, since all AVEs were higher than 0.5 (see Table II).
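The reliability and convergent-validity statistics reported here follow standard formulas; the sketch below (Python) computes Cronbach's alpha, CR, and AVE. The loadings and item matrix are hypothetical placeholders, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array for one construct."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

def composite_reliability(loadings):
    """CR from standardized outer loadings of one construct."""
    l = np.asarray(loadings)
    return l.sum() ** 2 / (l.sum() ** 2 + (1 - l ** 2).sum())

def ave(loadings):
    """Average variance extracted: mean of squared loadings."""
    l = np.asarray(loadings)
    return (l ** 2).mean()

# Simulated correlated items (shared latent factor) for the alpha check.
rng = np.random.default_rng(2)
base = rng.normal(4.0, 1.0, (540, 1))
items = base + rng.normal(0.0, 0.5, (540, 3))
print(cronbach_alpha(items))                 # high internal consistency

loadings = [0.78, 0.82, 0.75]                # hypothetical loadings
print(composite_reliability(loadings))       # should exceed 0.70
print(ave(loadings))                         # should exceed 0.50
```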
Discriminant validity, which indicates how one construct differs from the others, was also evaluated using the Fornell-Larcker criterion (Henseler et al., 2015). Discriminant validity is achieved as long as the square root of the AVE exceeds the correlations between the constructs. Table III shows discriminant validity across the components: the square root of the AVE (in bold) is larger than the inter-construct correlation values. The Heterotrait-Monotrait Ratio (HTMT) is an additional criterion supporting discriminant validity; for HTMT to be attained, all values must fall below the 0.900 threshold. Table IV indicates that discriminant validity was fully established.
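The Fornell-Larcker check itself is a simple comparison of the square root of the AVE against the inter-construct correlations; a schematic check (Python) with hypothetical values:

```python
import numpy as np

# Hypothetical AVEs and correlation matrix for three constructs.
aves = np.array([0.62, 0.58, 0.66])
corr = np.array([[1.00, 0.45, 0.52],
                 [0.45, 1.00, 0.40],
                 [0.52, 0.40, 1.00]])

sqrt_ave = np.sqrt(aves)
# For each construct, sqrt(AVE) must exceed every correlation with the
# other constructs (the off-diagonal entries of its row).
off_diag = corr - np.diag(np.diag(corr))
print(all(sqrt_ave[i] > off_diag[i].max() for i in range(len(aves))))
# (The HTMT criterion is checked analogously: all HTMT values < 0.900.)
```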
C. Evaluation of Structural Model
Having established that the measurement model was accurate and valid, the next stage was to evaluate the structural model. This involved examining the relationships between the variables, calculating the coefficient of determination, and evaluating the model's predictive ability. The structural model's predictive capability is typically measured using the coefficient of determination (R²), which explains the variability in the dependent variable accounted for by the independent variables. The result R²BI = 0.545 implies that system quality, information quality, and service quality account for 54.5 percent of the variation in behavioural intention, while behavioural intention accounts for 92.6 percent of the variation in actual Moodle usage (R²ACTU = 0.926). The Stone-Geisser indicator (Q²) was also used to evaluate the model's predictive accuracy (Henseler et al., 2015); an indicator value greater than zero indicates good predictive relevance. The Q² values in Table V demonstrate that the model is accurate and that the constructs are necessary for overall model tuning.
D. Gender Differences in the Study Constructs
To evaluate gender differences in the adoption factors, the data were split into two groups, male and female, using a multi-group analysis in partial least squares. Construct relationships were established for the two groups, as seen in Table VI and Table VII.
Concerning the construct relationships for males, all four hypotheses were tested (Table VI). The study shows that males' intention to use Moodle LMS was significantly affected by system quality (P < 0.001); hence, H1m is supported. Males' intention to use Moodle LMS was also significantly affected by the quality of the information provided by the system (P < 0.001); hence, H2m is supported. Males' intention to use Moodle LMS was likewise significantly affected by service quality (P < 0.001); hence, H3m is supported. Lastly, males' intention to use Moodle LMS had a positive and significant effect on their actual usage (P < 0.001), so H4m is supported. With respect to the females' hypotheses (Table VII), system quality showed a substantial influence on females' behavioural intention to use Moodle LMS (P < 0.001); as a result, H1f is supported. Information quality had no significant impact on females' behavioural intention to use Moodle LMS; H2f is thus not supported. Nonetheless, service quality showed a substantial influence on females' behavioural intention to use Moodle LMS (P < 0.001); hence, H3f is supported. Finally, females' behavioural intention to use Moodle LMS had a substantial impact on their actual usage (P < 0.001), so H4f is supported.
VI. DISCUSSION OF FINDINGS
This is one of the first studies of its kind to use multi-group analysis to examine gender variations in e-learning success in a developing country. Five variables were selected to explain students' acceptance of the Moodle Learning Management System (LMS). The study revealed that system quality had a significant impact on both men's and women's intention to utilize Moodle LMS. This implies that if an e-learning platform is dependable, user-friendly, secure, fast, and responsive, both men and women are likely to form a favourable image of it and consider it valuable. This conclusion supports the findings of Cheng et al. (2012), Opoku et al. (2020), Yakubu and Dasuki (2018), and Kim and Lee (2014), who demonstrated that the quality of an e-learning system reliably predicts an individual's desire to utilize it. Information quality had a statistically significant impact on men's behavioural intention to utilize Moodle LMS, indicating that male students at the university are satisfied with the feedback generated by the e-learning platform; this outcome is consistent with prior research findings (Ramayah et al., 2010; Cheng, 2012). The non-significance of the relationship between information quality and females' intention to use Moodle LMS may be because the system developers did not provide information that is simple to comprehend and presented clearly and concisely. Since women depend on a system as a productivity tool, they dedicate considerable effort to understanding how it operates; they will probably not find the system advantageous if they have difficulty understanding its feedback. This contradicts the conclusions of prior research (Hassanzadeh et al., 2012; Yakubu & Dasuki, 2018). Both males and females exhibited a statistically significant relationship between service quality and behavioural intention, which may suggest that improving the range of services users need from an e-learning system would boost their willingness to utilize it; this is consistent with prior study results (Opoku et al., 2020; Cheng, 2012; Li et al., 2012). Finally, behavioural intention to use Moodle LMS had a statistically significant effect on actual utilization across both genders, showing that system quality and service quality positively influence the behavioural intentions of both males and females, which in turn influence actual usage. This finding reinforces the studies conducted by Chow et al. (2012), Opoku et al. (2020), and Hassanzadeh et al. (2012).
VII. CONCLUSION
The DeLone and McLean quality antecedents were heavily weighted in this study, as they were used to evaluate gender differences in the adoption of e-learning by students at a university in Ghana. The research found that system quality and service quality had a positive and significant influence on the behavioural intention of both men and women to use Moodle LMS. A positive effect of information quality on men's behavioural intention to use Moodle LMS was also found, whereas no significant effect was found on the behavioural intention of females. In addition, the behavioural intention of both males and females to use Moodle LMS had a substantial influence on the extent to which they used the platform. This research is one of the first of its kind to employ multi-group analysis to investigate gender differences in e-learning uptake.
VIII. RESEARCH LIMITATIONS AND SUGGESTIONS FOR FUTURE RESEARCH
The sample, restricted to students of a single Ghanaian university, is the primary constraint of this study; future studies may investigate gender differences among teachers. Second, it is uncertain whether the model and its analysis apply to tertiary education in other emerging nations, so studies of Moodle LMS adoption in other developing nations will be useful. Students in a developing nation like Ghana may not be computer literate enough to utilize educational software, so future research should examine students' computer literacy levels as a precondition for utilizing educational software to enhance learning. Understanding what factors impact the acceptance of e-learning programs is critical because of cultural differences between developed and developing countries, and it is acknowledged that the rate of technological adoption in developed nations may not be identical to that in developing nations. As noted by Opoku et al. (2020) and Yakubu and Dasuki (2018), there is a need for further academic study to investigate these differing adoption rates.
IX. STUDY IMPLICATION
The findings of this research have significant implications for the adoption of e-learning in developing countries, particularly Ghana. When implementing e-learning in a university, administrators, academics, and system developers must consider gender. Users' willingness to accept new technologies is strongly influenced by the quality of the systems, services, and information provided, and e-learning system designers should pay close attention to these aspects if they want to boost user adoption. System interfaces should be simple and require only basic IT skills to use, and students and instructors must receive adequate training and guidance to boost their rate of adoption. There appear to be no existing studies in developing countries that focus on gender differences in the use of e-learning technology, and this study fills that gap. It contributes to the current body of knowledge on e-learning adoption by testing the DeLone and McLean ISS model quality antecedent variables in sub-Saharan Africa.
FUNDING
There is no funding for this study.
Modeling Duchenne Muscular Dystrophy Cardiomyopathy with Patients’ Induced Pluripotent Stem-Cell-Derived Cardiomyocytes
Duchenne muscular dystrophy (DMD) is an X-linked progressive muscle degenerative disease caused by mutations in the dystrophin gene, resulting in death by the end of the third decade of life at the latest. A key aspect of the DMD clinical phenotype is dilated cardiomyopathy, affecting virtually all patients by the end of the second decade of life. Furthermore, despite respiratory complications still being the leading cause of death, with advancements in medical care in recent years, cardiac involvement has become an increasing cause of mortality. Over the years, extensive research has been conducted using different DMD animal models, including the mdx mouse. While these models present certain important similarities to human DMD patients, they also have some differences which pose a challenge to researchers. The development of somatic cell reprogramming technology has enabled the generation of human induced pluripotent stem cells (hiPSCs), which can be differentiated into different cell types. This technology provides a potentially endless pool of human cells for research. Furthermore, hiPSCs can be generated from patients, thus providing patient-specific cells and enabling research tailored to different mutations. DMD cardiac involvement has been shown in animal models to include changes in gene expression of different proteins, abnormal cellular Ca2+ handling, and other aberrations. To gain a better understanding of the disease mechanisms, it is imperative to validate these findings in human cells. Furthermore, with the recent advancements in gene-editing technology, hiPSCs provide a valuable platform for research and development of new therapies, including the possibility of regenerative medicine. In this article, we review the DMD cardiac-related research performed so far using human hiPSC-derived cardiomyocytes (hiPSC-CMs) carrying DMD mutations.
DMD symptoms start at an early age, usually around 2-3 years, when proximal muscles of the lower extremities begin to weaken. Gradually, the weakness progresses to the distal muscles and the upper limbs. With age, symptoms become more prominent, and by the early teens, patients are usually wheelchair-dependent. The cardiac involvement of DMD includes dilated cardiomyopathy (DCM) which is present in virtually all patients by their late teen years, along with conduction abnormalities, various arrhythmias and extensive fibrosis. Eventually, patients die by their late 20s or early 30s due to respiratory and cardiac failure [16][17][18].
The current gold-standard treatment of DMD includes glucocorticoids (GCs), usually from the age of 4 years, aimed at improving motor and pulmonary function while also potentially delaying the onset of DCM [16,19,20]. Beyond known side effects including weight gain, hirsutism and other Cushing's syndrome symptoms, GCs do not change the disease outcome, but rather can only slow its course [19,21]. Angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs) are administered from around the age of 10 years to reduce mechanical stress on the heart [16,17]. Novel therapies include Eteplirsen, an exon 51 skipping drug, and Ataluren (PTC124), which promotes ribosomal readthrough of nonsense mutations [22,23]. However, these treatments are not intended for all DMD mutations, and there is a need for additional research and therapies to be developed.

The primary animal model used for DMD research is the mdx mouse, which carries a nonsense point mutation in exon 23 of the dystrophin gene [24,25]. Although mdx mice exhibit chronic degeneration of myofibers, they do not manifest some prominent symptoms of DMD. mdx mice display a slower disease progression compared to human DMD patients, and their relative lifespan is significantly longer. The slow progression of muscle pathology does not lead to extensive fibrosis as in humans, and the mice retain their mobility. Cardiac involvement follows a different course than in humans, as mdx mice initially develop hypertrophic cardiomyopathy (HCM), while human DMD patients suffer from contractile dysfunction and DCM [26][27][28]. It has been previously found that mdx mouse heart mitochondria display an increase in Ca2+ uptake rate via activation of Ca2+ transport, possibly compensating for a defective sarcoplasmic reticulum (SR) [29][30][31]. This adaptiveness of mdx mice may be a key feature differentiating this model from human patients [32]. Indeed, these challenges led to the development of the D2.mdx model, which exhibits a significantly more prominent disease phenotype [33]. An additional limitation of the mouse model lies in gender differences between female and male mdx mice, including more prominent cardiac involvement and skeletal muscle degeneration in female compared to male mice [34,35], contrary to the slower disease progression seen in human female carriers [36]. Another important animal model was developed in rats by means of TALENs targeting DMD exon 23 [37]. mdx rats manifest progressive muscle degeneration accompanied by a reduction in muscle force, as well as dilated cardiomyopathy. Importantly, mdx rats display significant fibrosis in skeletal and cardiac muscle, similar to human patients but contrary to mdx mice [37,38]. However, some differences remain between mdx rats and human patients, as well as cells derived from human patients, including the lack of muscle calcifications [37], a normal L-type Ca2+ current (ICa,L) [39][40][41], and an unimpaired β-adrenergic cascade [39][42][43][44] in mdx rats.
Human Induced Pluripotent Stem Cells
In 2006, Takahashi and Yamanaka published their successful reprogramming of differentiated mouse somatic cells into induced pluripotent stem cells (iPSCs) by means of the induction of four factors: Oct3/4, SOX2, c-Myc, and KLF4 [45]; in 2007, this breakthrough was repeated in human somatic cells [46]. Human iPSCs (hiPSCs) present classic embryonic stem-cell (ESC) characteristics, including trilineage differentiation capability [47,48]. Thus, provided proper culture and media conditions, hiPSCs can be differentiated into various cell types. Like ESCs, hiPSCs and hiPSC-derived cells can be used for disease modeling and drug testing [49][50][51][52][53]. Furthermore, hiPSCs can generate a potentially endless pool of differentiated cells from a minute biopsy of a single living human donor, whereas ESC generation requires the sacrifice of embryos [54]. This enables previously unmatched research capabilities for various human diseases without the limitations of different animal models. Indeed, in the past years, numerous papers have utilized the reprogramming technique for disease modeling and regenerative medicine. Patient-specific hiPSCs have served as a means for many discoveries and advancements in research of different diseases [51,52,55-57].
Due to the multitude of different mutations causing DMD [58], patient-specific hiPSCs provide a valuable approach to investigate the precise disease mechanisms resulting from these mutations. Furthermore, hiPSC research enables the potential development of new drugs and therapeutic approaches targeting specific mutations with higher efficacy than previous generic treatments. Importantly, investigating human-derived cells is preferable to using animal models, which display a disease course and characteristics that differ from those of human patients.
Gene Expression Changes in DMD hiPSC-CMs
DMD has been previously linked to changes in cellular gene expression in different models, including mdx mice, as well as in human patients [59][60][61][62]. To support hiPSCs as a valid model for DMD and to gain a better understanding of the disease mechanisms, it is also imperative to investigate gene expression changes in DMD hiPSC-CMs. Lin et al. [41] discovered that DMD hiPSC-CMs exhibit higher rates of cell death compared to healthy cells, demonstrated by increased staining for CASP3 and elevated levels of DNA fragmentation. Additionally, they found extensive changes in gene expression, including important apoptosis regulators such as CASP3, CASP8, CASP9 and the antiapoptotic XIAP [63], genes involved in contractility such as MYL2, MYL3, ACTN1 and TPM1 [64], and genes associated with heart diseases such as MAPK11, COL3A1 and CALM1 [65][66][67]. Moreover, bio-functional enrichment analysis of these genes revealed that categories related to heart disease conditions were positively enriched, while categories involved in muscle development and contractility were negatively enriched in DMD hiPSC-CMs. Importantly, these enrichment patterns are consistent with clinical observations in DMD patients [68][69][70][71]. By means of whole-transcriptome sequencing analysis, Lin and coworkers discovered that, alongside elevated expression of CASP3 and DIABLO, XIAP was decreased in DMD hiPSC-CMs, pointing to possible mitochondrial involvement in the increased apoptosis of DMD hiPSC-CMs [63]. Accordingly, transmission electron microscopy (TEM) demonstrated swollen mitochondria in DMD hiPSC-CMs [41]. Furthermore, FACS analysis using JC-1 mitochondrial membrane potential staining revealed an increase in damaged mitochondria in DMD hiPSC-CMs [41]. Overall, Lin et al.'s findings suggest that a common mitochondria-mediated signaling network is involved in the elevated apoptosis of DMD hiPSC-CMs.
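The over-representation logic behind such bio-functional enrichment analyses can be illustrated with a short, self-contained sketch under the standard hypergeometric model; all gene counts below are invented for illustration and are not taken from the cited study.

```python
# Minimal sketch of a pathway over-representation (enrichment) test.
# All counts are hypothetical placeholders, not data from Lin et al.
from scipy.stats import hypergeom

background = 20000   # assumed number of genes in the assayed universe
pathway_size = 150   # genes annotated to a pathway (e.g., "apoptosis")
deg_count = 800      # differentially expressed genes (DEGs) in DMD vs healthy
overlap = 25         # DEGs that fall inside the pathway

# Probability of observing >= overlap pathway genes among the DEGs by chance
p_value = hypergeom.sf(overlap - 1, background, pathway_size, deg_count)
print(f"enrichment p-value: {p_value:.3e}")
```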
Chang and coauthors [72] investigated the involvement of telomere shortening in various cardiomyopathies, including DMD. Quantitative fluorescence in situ hybridization (Q-FISH) and quantitative polymerase chain reaction (qPCR) demonstrated increased telomere shortening in DMD compared to healthy hiPSC-CMs. Telomeres have been shown to be involved in gene expression of endothelial cells, and overexpression of telomerase protein (TERT) in mouse cardiomyocytes was proven to protect from myocardial ischemia [73][74][75]. Interestingly, mdx mice, which do not express cardiac symptoms as extensive as those of human DMD patients, have substantially longer telomeres than human patients [76]. Indeed, genetically engineered mdx mice with shortened telomeres developed severe heart failure similar to humans. The same group later discovered [77], by means of qRT-PCR, decreased transcript levels of telomere repeat-binding proteins (TRF1 and TRF2) and shelterin complex proteins (RAP1, TIN2, and POT1) in DMD compared to healthy cardiomyocytes, indicating possible involvement of the absence of these proteins in telomere shortening. Additionally, this group also found upregulation of p53 and p53-binding protein 1 (53BP1), indicating activation of the DNA damage response, as also demonstrated by the increased apoptotic markers caspase-3 and cleaved PARP, and by accumulation of the β-galactosidase signal in DMD hiPSC-CMs compared to healthy cells. Lastly, blocking the contraction of cardiomyocytes with blebbistatin, which locks the myosin heads in a low-affinity state preventing actin binding [78], abolished telomere shortening. These findings highlight aberrant contraction as an important factor involved in telomere shortening and shelterin downregulation, eliciting a p53-dependent DNA damage response.
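Relative telomere length by qPCR is conventionally reported as a telomere/single-copy-gene (T/S) ratio computed with the 2^-ΔΔCt method. The sketch below shows that calculation; the Ct values are hypothetical and merely illustrate a shorter-telomere readout.

```python
# Sketch of the T/S ratio (relative telomere length) via 2^-ddCt.
# Ct values are illustrative placeholders, not data from Chang et al.
def ts_ratio(ct_telo, ct_single, ct_telo_ref, ct_single_ref):
    """T/S ratio of a sample relative to a reference sample."""
    d_ct_sample = ct_telo - ct_single          # dCt of the sample
    d_ct_ref = ct_telo_ref - ct_single_ref     # dCt of the reference
    return 2.0 ** (-(d_ct_sample - d_ct_ref))  # 2^-ddCt

# Hypothetical DMD sample vs healthy reference: ratio < 1 => shorter telomeres
print(ts_ratio(ct_telo=18.9, ct_single=22.0, ct_telo_ref=17.5, ct_single_ref=22.1))
```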
Farini et al. [79] investigated the involvement of immunoproteasome (IP) dysregulation in the cellular pathophysiology of DMD. The group reported an increase in the expression of the IP subunits PSMB8 and PSMB9 in DMD compared to healthy cardiomyocytes. Accordingly, administration of ONX-0914, an IP inhibitor, decreased intracellular Ca 2+ levels, as well as the release of cTnI, implying increased cell survival. Additionally, IP inhibition resulted in downregulation of TGF-β and type III collagen-α, suggesting reduced fibrosis [80]. Overall, these results demonstrate the involvement of the IP in cardiac pathophysiology and its potential as a pharmacological target for DMD patients.
Our group [40] reported expression changes in genes encoding ion channels in DMD hiPSC-CMs, including HCN, which encodes the channel responsible for the pacemaker funny current (I f ) during phase 4 of the cardiac action potential, as well as reduced I f density compared to healthy cells. Additionally, we found increased expression of the CACNA1C gene, which encodes the channel responsible for the I Ca,L current, in DMD cardiomyocytes, and we discovered that I Ca,L density increased accordingly.
Kamdar and colleagues [44] reported increased expression of the known fibrosis genes COL1A1, ADAMTS2, COL6A1 and THY1 in DMD cardiomyocytes compared to healthy cells, consistent with other reports [79]. Additionally, Kyoto Encyclopedia of Genes and Genomes (KEGG) pathway analysis of dysregulated genes found that DMD hiPSC-CMs demonstrated upregulation of genes associated with extracellular matrix (ECM) organization, cellular proliferation and fibrosis, as well as downregulation of genes associated with intracellular Ca 2+ handling and contraction.
Jelinkova et al. [42] found an increase in K + channel Kir2.1 expression, despite no difference in action potential parameters between DMD and healthy hiPSC-CMs, contrary to other studies [40]. Furthermore, immunostaining demonstrated increased expression of the I Ca,L channel in DMD cardiomyocytes. However, contrary to our report [40], there was no difference at the mRNA level, indicating increased cellular localization of the channel protein rather than a difference in transcription.
Our group also reported [43] changes in gene expression patterns in DMD hiPSC-CMs. RNA-seq analysis revealed 119 genes included in KEGG pathways for cardiac intracellular Ca 2+ handling, contraction and adrenergic signaling. Western blot (WB) analysis of SERCA2 demonstrated overexpression in DMD compared to healthy cells, possibly a compensatory attempt for impaired intracellular Ca 2+ handling. Furthermore, downregulation of the β1 adrenergic receptor (ADRβ1) and adenylate cyclase (AC) demonstrated by RNA-seq, combined with depleted SR Ca 2+ stores, likely underlies the blunted β-adrenergic positive inotropic response of DMD cells.
Yasutake et al. [81] investigated the involvement of the yes-associated protein (YAP, an important transcription factor involved in cell proliferation, growth and regeneration) in DMD cardiomyopathy. Previous research found decreased YAP activity in dystrophic muscles, resulting in poor regeneration and damage [82]. Interestingly, in DMD cardiomyocytes, the YAP nuclear/cytoplasmic (N/C) ratio was decreased compared to healthy hiPSC-CMs, indicating a lower degree of nuclear localization and active transcription of YAP. Accordingly, DMD hiPSC-CMs exhibited a lower proliferation rate (measured by Ki67 expression) compared to healthy cells, suggesting a novel target for further research and therapeutic development.
Bremner and coauthors [83] used hiPSC-CMs in the form of engineered heart tissues (EHTs), which provide a 3D physiological cell culture platform that faithfully mimics mature cardiac tissue structure [84][85][86]. In contrast to 2D culture, comparison of DMD and healthy EHT cardiomyocytes demonstrated dysregulation of genes related to cardiac function; Gene Ontology (GO) analysis indicated dysregulation of genes related to cardiac muscle development, contraction, membrane potential, intracellular Ca 2+ handling, and extracellular matrix organization. In addition to supporting the notion of dysregulated genes in DMD cardiac pathophysiology, these results emphasize the important effect of culture structure on the maturity and gene expression patterns of hiPSC-CMs.
Marini and colleagues [87] utilized hiPSC-CMs to generate cardiac organoids (COs), 3D cellular structures possessing organotypic characteristics such as cytoarchitecture and tissue-specific physiological mechanisms [88][89][90]. Regarding changes in gene expression, Marini and coworkers found that DMD hiPSC-CMs expressed lower levels of α-actinin (ACTN2), the pacemaker current channel HCN4 and troponin-related genes (TNNI1, TNNC1, and TNNI3) compared to healthy cells, consistent with other reports [40,41]. DMD cardiac organoids were larger in size than healthy organoids, consistent with cellular hypertrophy, a known hallmark of cardiomyopathy [91]. Contrary to healthy COs, DMD COs lost α-, β-, γ-, and δ-sarcoglycan expression over time, likely due to a lack of connection to dystrophin. RT-qPCR demonstrated upregulation of genes related to cardiac contractility in DMD compared to healthy COs, including ACTN1, IRX4, MYBPC3, MYL2, MYOM1, TNNC2 and TPM1. Marini et al. also found that ARCN1 and GORASP2, known endoplasmic reticulum (ER) stress markers, were increased in DMD cardiomyocytes compared to healthy cells, indicating a higher level of ER stress. Furthermore, immunofluorescence staining detected higher levels of NOX4, suggesting increased oxidative stress [92]. Histological examination revealed the development of fibrotic-like structures in DMD COs, a finding also supported by upregulation of known fibrosis markers including COL1A2, COL3A1 and FN1. Additionally, this group discovered increased formation of adipose tissue in DMD COs by means of H&E staining, which was validated by BODIPY staining for lipid droplets and immunostaining for PDGFRα + , an adipocyte marker. These findings show that DMD COs exhibit fibrotic and adipogenic characteristics, resembling clinical features of DMD cardiomyopathy [93]. Importantly, the advantages of the 3D model over 2D hiPSC-CMs are further manifested by this group's previous report [94], which demonstrated immaturity of the dystrophin-associated protein complex in 2D structures lacking protein-level expression of α-, γ-, and δ-sarcoglycan. In summary, this 3D CO model enables mimicking some of the pathological findings of DMD cardiomyopathy.
Abnormal Excitation-Contraction Coupling Machinery in DMD hiPSC-CMs
Due to the important structural role of dystrophin, it is not surprising that its absence has been linked to impaired intracellular Ca 2+ homeostasis resulting from extracellular Ca 2+ influx [95,96]. Additionally, elevated cellular inflammatory mediators resulting from dystrophin absence promote an increase in inducible nitric oxide synthase (iNOS), which leads to destabilization of SR ryanodine receptors (RyRs), causing SR Ca 2+ leak into the cytosol [97,98]. Barthélémy et al. [99] further validated the key role of defective RyRs in DMD pathophysiology by demonstrating that RyR stabilizers, including dantrolene as well as the Rycals S107 and ARM210, improve exon skipping efficiency in hiPSC-derived myotubes. Another possible mechanism was reported by Uchimura and Sakurai [100], who found that inhibiting the store-operated Ca 2+ channel (SOC) components STIM1 and Orai1 prevented Ca 2+ overload and accordingly improved contractility in DMD hiPSC-derived myotubes, thus pointing to the role of these channels in excess Ca 2+ entry. Overall, excess cytosolic Ca 2+ can activate various proapoptotic regulators, leading to cell death [101][102][103][104]. Accordingly, several studies of mdx mice demonstrated altered Ca 2+ handling [105][106][107][108][109]. Guan et al. [110] were the first to demonstrate abnormal Ca 2+ handling in hiPSC-CMs generated from DMD urine-derived stem cells (USCs), which included a lower recovery rate of Ca 2+ transients. Additionally, early mitochondrial permeability pore opening was demonstrated in DMD hiPSC-CMs, which may be attributed to Ca 2+ overload [111].
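The "recovery rate" of a Ca 2+ transient is typically quantified by fitting an exponential to the decaying phase of the fluorescence trace. Below is a minimal, self-contained sketch of such a fit on a synthetic trace; the signal and its parameters are invented stand-ins for real Fluo-4 or indo-1 recordings.

```python
# Sketch: estimating the recovery (decay) rate of a Ca2+ transient by
# fitting a mono-exponential to the falling phase of a fluorescence trace.
import numpy as np
from scipy.optimize import curve_fit

t = np.linspace(0.0, 1.0, 200)                 # time after the peak, s
rng = np.random.default_rng(0)
trace = np.exp(-t / 0.25) + 0.05 * rng.normal(size=t.size)  # true tau = 250 ms

def mono_exp(t, amp, tau, base):
    return amp * np.exp(-t / tau) + base

(amp, tau, base), _ = curve_fit(mono_exp, t, trace, p0=(1.0, 0.2, 0.0))
print(f"decay tau ~ {tau * 1000:.0f} ms")      # larger tau => slower recovery
```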
Additional ion channel abnormalities in DMD cells involve sodium and potassium channels. mdx cardiomyocytes demonstrated a reduction in Na V 1.5 channel expression and, subsequently, a lower inward Na + current [112]. In addition, a reduction in the inward rectifier K + current via the Kir2.1 channel was also observed in mdx mice, despite unchanged channel expression levels [113]. Interestingly, Jelinkova et al. [42] found increased Kir2.1 expression but no subsequent electrophysiological changes in DMD hiPSC-CMs. mdx mouse cardiomyocytes also displayed a decrease in the K ATP current, I K,ATP , which is known to be associated with cardiomyopathy development [114,115]. Lastly, our group demonstrated a reduced I f current in DMD hiPSC-CMs despite normal HCN channel expression [40].
Contrary to the enhanced I Ca,L reported in mdx mice [106], Lin et al. [41] found decreased I Ca,L in DMD hiPSC-CMs. Using Ca 2+ imaging, this group reported a high resting cytosolic Ca 2+ concentration. Moreover, treating DMD hiPSC-CMs with Poloxamer P188, a membrane sealant, decreased cytosolic Ca 2+ and repressed CASP3 activation and apoptosis. These findings suggest that improving cell membrane integrity may be a beneficial strategy for the prevention of cardiomyocyte loss in DMD patients.
Caluori and coauthors [116] used a system consisting of a microelectrode array (MEA) and simultaneous probing by a cantilever from an atomic force microscope (AFM) to couple electrical and mechanical recordings. In response to progressively increasing Ca 2+ concentrations, DMD hiPSC-CMs demonstrated a lower proportional decrease in spontaneous beat rate compared to healthy CMs, which may be attributed to pre-existing Ca 2+ stress impairing additional inward flux [117]. Introduction of verapamil, an I Ca,L channel blocker, also yielded a lower proportional decrease in spontaneous beat rate compared to healthy cells.
Our group [40] reported the presence of arrhythmias in DMD hiPSC-CMs, represented by delayed afterdepolarizations (DADs) and irregular spontaneous firing patterns (Figure 1). DADs are usually attributed to Ca 2+ overload [118], supporting this notion. Additionally, we found increased I Ca,L density, consistent with previous reports in mdx mice [106]. Accordingly, we discovered elevated expression of the CACNA1C gene, which encodes the ion channel responsible for I Ca,L . Furthermore, we found a prolongation of action potential duration (APD) stemming mainly from phase 2 of the cardiac action potential, likely due to the increased I Ca,L density [119]. Additionally, the increased I Ca,L density may be involved in the decreased response to verapamil previously reported [116]. Tsurumi et al. [120] found that diastolic and systolic Ca 2+ concentrations measured by indo-1 fluorescence were elevated in DMD compared to healthy cardiomyocytes. Under mechanical stretching, diastolic and systolic Ca 2+ concentrations were further increased in DMD cardiomyocytes but not in healthy cells. The elevation in intracellular Ca 2+ concentrations, especially during diastole, correlates with diastolic dysfunction in DMD patients [121] and emphasizes the aberrant Ca 2+ handling in DMD cardiomyocytes.
Pioner and colleagues [122] investigated mechanical and structural abnormalities in DMD hiPSC-CMs and found that they exhibited hypertrophy, manifested by increased cellular length, diameter and cross-sectional area compared to healthy cardiomyocytes. Ultrastructurally, DMD hiPSC-CMs displayed sarcomeric changes including shortening of A- and Z-bands, an increased percentage of identifiable I-bands, and overall greater variability in structural organization, suggesting an underdeveloped sarcomere ultrastructure, consistent with previous reports of decreased transcription of sarcomeric genes [41] and reduced actin turnover [123]. These authors also demonstrated that, in response to extracellular Ca 2+ administration, DMD hiPSC-CMs displayed reduced mechanical tension and prolonged relaxation. Furthermore, pCa 50 was greater in DMD hiPSC-CMs, indicating increased Ca 2+ sensitivity of tension generation, likely a compensatory mechanism aimed at increasing contractility in defective cells. Paced DMD hiPSC-CMs demonstrated slower rates of contraction and relaxation compared to healthy cardiomyocytes. Post-rest stimulation of DMD hiPSC-CMs yielded decreased twitch amplitude compared to healthy cells, suggesting a defective contractile reserve, likely due to impaired Ca 2+ handling. Measurement of Ca 2+ transients demonstrated low rates of rise and decay in DMD hiPSC-CMs, further supporting the notion of abnormal cellular Ca 2+ handling. Lastly, similar to our published results [40], the spontaneous beat rate was lower in DMD compared to healthy hiPSC-CMs. Taken together, these results demonstrate the defective Ca 2+ handling and contraction machinery in DMD hiPSC-CMs.

Kamdar and coauthors [44] tested β-adrenergic cascade characteristics in DMD hiPSC-CMs and reported an increased rate of baseline arrhythmogenic Ca 2+ transients compared to healthy CMs. Under β-adrenergic stress introduced by isoproterenol, this group observed an increase in the frequency of arrhythmogenic Ca 2+ traces, which was reduced by the β-adrenergic receptor blocker propranolol. Additionally, Kamdar et al. found a downregulation of genes associated with cardiac contraction and Ca 2+ homeostasis. Overall, their results suggest an impaired β-adrenergic response in DMD cardiomyocytes, possibly correlated with the autonomic dysfunction known to affect DMD patients [124].
Jelinkova et al. [42] investigated molecular aspects of the excitation-contraction coupling (ECC) machinery and the autonomic response in DMD hiPSC-CMs. Firstly, they found that, compared to healthy cells, DMD hiPSCs presented lower differentiation efficiency into cardiomyocytes, manifested by a decreased fraction of spontaneously beating cells. Regarding Ca 2+ transients, lower rates of rise and decay, as well as increased transient duration, were observed in DMD cardiomyocytes. These data support the notion that dystrophin deficiency results in ECC disturbances via Ca 2+ handling abnormalities. They also found an increased prevalence of delayed small-amplitude Ca 2+ release events in DMD cells, which may correlate with the arrhythmogenic DADs we reported in DMD cardiomyocytes [40]. Other groups found a weaker contraction force in DMD cardiomyocytes, as measured by AFM [125][126][127]. Contrary to our report, Jelinkova et al. found no difference in the spontaneous beat rate. However, similar to our findings, they found increased beat rate variability (BRV) measures in DMD cardiomyocytes compared to healthy cells [40]. DMD cardiomyocytes also displayed an abnormal response to isoproterenol and metoprolol, strengthening the notion of an abnormal β-adrenergic cascade in DMD cells. Importantly, these investigators found increased expression of β1- and β2-adrenergic receptors in DMD cardiomyocytes, likely a compensatory attempt of the defective cells.
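BRV measures of this kind are straightforward to compute from the timestamps of detected beats. The sketch below shows standard readouts (SD, RMSSD, and coefficient of variation of inter-beat intervals) on hypothetical beat times; it is an illustration, not the exact pipeline of any cited study.

```python
# Sketch of beat rate variability (BRV) readouts from detected beat times.
# The timestamps are hypothetical placeholders.
import numpy as np

beat_times = np.array([0.00, 0.92, 1.80, 2.85, 3.70, 4.78, 5.61])  # s, assumed
ibi = np.diff(beat_times)                       # inter-beat intervals, s
sd_ibi = ibi.std(ddof=1)                        # overall variability
rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))     # short-term variability
cov = sd_ibi / ibi.mean()                       # rate-normalized variability
print(f"SD = {sd_ibi*1000:.0f} ms, RMSSD = {rmssd*1000:.0f} ms, CoV = {cov:.2f}")
```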
Our group [43] investigated the β-adrenergic responsiveness and intracellular Ca 2+ handling of DMD hiPSC-CMs. Under isoproterenol-induced β-adrenergic stimulation, DMD cardiomyocytes displayed a blunted positive inotropic response, including decreased Ca 2+ transient parameters such as amplitude and activation and relaxation rates, accompanied by corresponding changes in contraction parameters. Additionally, DMD cardiomyocytes did not exhibit a depressed chronotropic response to isoproterenol, suggesting that the mechanism underlying the blunted inotropic response is not an impaired β-adrenergic cascade. In response to increasing extracellular Ca 2+ concentration, known to induce augmented SR Ca 2+ release and positive inotropy, DMD cells displayed smaller Ca 2+ and contraction parameters compared to healthy cardiomyocytes. Lastly, we tested the effect of caffeine, which induces SR Ca 2+ release, the common pathway through which both β-adrenergic stimulation and Ca 2+ administration produce positive inotropy. DMD cardiomyocytes displayed a blunted response to caffeine, including a smaller Ca 2+ amplitude and a shorter recovery time compared to healthy cells. In support of the notion of depleted SR Ca 2+ stores in DMD cardiomyocytes, administration of ryanodine and of cyclopiazonic acid (CPA), a SERCA inhibitor, resulted in a smaller inhibitory effect than in healthy cells, represented by a smaller relative decrease in Ca 2+ amplitude. These results, combined with the reported changes in the expression of genes related to Ca 2+ handling, SERCA2, ADRβ1 and AC, point to abnormal Ca 2+ handling as a key factor underlying the abnormal β-adrenergic response in DMD.
YAP activity is known to increase through mechano-transduction, which converts physical stimuli into electrochemical signals and is regulated by actin dynamics [128,129]. Therefore, Yasutake et al. [81] assessed the actin state to determine its possible involvement in the altered YAP activity of DMD hiPSC-CMs. Indeed, DMD cardiomyocytes were found to be initially smaller and rounder than healthy CMs and had a lower number of nuclei, in addition to failing to change morphologically with maturation as healthy cardiomyocytes do, supporting the notion that DMD cardiomyocytes fail to progress morphologically in a proper manner. Immunofluorescence staining demonstrated a disrupted actin structure in DMD hiPSC-CMs as well, indicating a possible link between abnormal actin and YAP activity in DMD cardiomyocytes.
Chang and colleagues [77] devised a unique bioengineered traction force microscopy platform that enables mimicking the stiffness of fibrotic heart tissue, a key feature of DMD cardiomyopathy, as well as controlling cardiomyocyte orientation. DMD hiPSC-CMs displayed increased resting [Ca 2+ ], decreased Ca 2+ transient amplitude, increased transient duration and an increased transient decay rate, measured by Fura-2 fluorescence. These results demonstrate that impaired Ca 2+ handling is involved in DMD pathophysiology at the tissue level.
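Ratiometric dyes such as Fura-2 report [Ca 2+ ] through the Grynkiewicz equation. The sketch below shows the conversion from a 340/380 nm ratio to a nanomolar concentration; the calibration constants (Kd, Rmin, Rmax, Sf2/Sb2) are placeholders that would have to be determined for each dye batch and imaging setup.

```python
# Sketch of converting a Fura-2 340/380 nm ratio R to [Ca2+] (nM) with the
# Grynkiewicz equation. Calibration constants are assumed placeholders.
def fura2_ca(R, Kd=224.0, Rmin=0.3, Rmax=8.0, sf2_sb2=5.0):
    """[Ca2+] in nM from the background-corrected Fura-2 ratio R."""
    return Kd * sf2_sb2 * (R - Rmin) / (Rmax - R)

# Hypothetical diastolic vs systolic ratios
print(f"diastolic ~ {fura2_ca(0.8):.0f} nM, systolic ~ {fura2_ca(2.5):.0f} nM")
```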
Bremner and coauthors [83] measured contractile force in electrically stimulated 3D hiPSC-CM EHTs designed to mimic mature cardiac tissue structure [84][85][86]. DMD cardiomyocytes generated decreased twitch force and contraction kinetic parameters compared to healthy cardiomyocytes, implying lower contractile performance. Staining of the Z-disc protein α-actinin followed by confocal imaging revealed that DMD cardiomyocytes had shorter sarcomere length compared to healthy cardiomyocytes, likely contributing to the lower contractile force. Using Fura-2 fluorescence, it was found that DMD cardiomyocytes displayed a higher baseline of cytosolic [Ca 2+ ] and an increased transient peak, but overall decreased amplitude compared to healthy cells. DMD cells also exhibited slower rates of rise and decay of Ca 2+ transients. These results once more emphasize abnormal Ca 2+ handling as a key pathophysiological feature of DMD. Consistent with our previous report [40], DMD EHTs displayed increased BRV compared to healthy cells, strengthening the case for the 3D structure as a valid and important model for investigating DMD cardiomyopathy.
Pioner et al. [130] investigated the maturation of the Ca 2+ handling machinery in DMD hiPSC-CMs and its adaptation to changes in substrate stiffness. Analysis of Ca 2+ transients revealed that DMD cardiomyocytes exhibited an initially smaller Ca 2+ transient amplitude and a smaller increase in the transient amplitude with maturation compared to healthy cardiomyocytes. Additionally, post-rest contraction potentiation was lower in DMD compared to healthy cardiomyocytes, indicating lower SR Ca 2+ storage and release. Ca 2+ /calmodulin-dependent protein kinase II (CaMKII) levels were higher in DMD compared to healthy cardiomyocytes; consequently, the RyR2 CaMKII phosphorylation site S2814 was more phosphorylated. This increased phosphorylation may account for the SR Ca 2+ leak, resulting in the reduced Ca 2+ transient amplitude of DMD cardiomyocytes. Compared to healthy cells, DMD hiPSC-CMs exhibited reduced Ca 2+ transient amplitude on both soft and stiff substrates. Furthermore, while healthy cardiomyocytes displayed an accelerated decay of the Ca 2+ transient when cultured on a stiffer substrate, DMD cells did not differ in their kinetics between the substrates, indicating a failure to adapt normally to differences in tissue stiffness. These findings demonstrate the abnormal Ca 2+ handling of DMD cardiomyocytes and, specifically, their inability to adapt their Ca 2+ homeostasis in response to changes in ECM stiffness, which may be a key factor in DMD cardiac pathophysiology.
Alterations in Cellular Energy and Metabolism in DMD hiPSC-CMs
Emerging evidence from recent years suggests that metabolic and mitochondrial dysfunction play prominent roles in DMD pathophysiology [131][132][133][134][135][136][137]. Afzal et al. [138] tested the effects of nicorandil, a known NO donor, K ATP channel opener and antioxidant [139][140][141], on DMD hiPSC-CMs. The group found that, in DMD hiPSC-CMs, nicorandil reduced H 2 O 2 -induced stress-related LDH release and cell death to levels similar to those of healthy cells, through a mechanism involving the NO-cGMP pathway and K ATP channel opening. Nicorandil also decreased ROS levels and maintained mitochondrial membrane integrity following oxidative stress, an effect mediated via increased expression of SOD2, which converts superoxide to hydrogen peroxide. Lastly, nicorandil was also shown to decrease cytosolic hydrogen peroxide production in DMD hiPSC-CMs. Although further research is required, these findings provide evidence that nicorandil may have the potential to protect against stress-induced cardiac involvement in DMD.
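LDH release of the kind measured here is conventionally expressed as percent cytotoxicity relative to spontaneous and maximal release controls. The sketch below shows that standard calculation; the absorbance readings are hypothetical and illustrate an untreated versus a nicorandil-treated condition.

```python
# Sketch of the standard percent-cytotoxicity calculation for an LDH
# release assay; absorbance readings are invented, not data from Afzal et al.
def ldh_cytotoxicity(sample, spontaneous, maximum):
    """Percent cytotoxicity from LDH-assay absorbances."""
    return 100.0 * (sample - spontaneous) / (maximum - spontaneous)

# Hypothetical H2O2-stressed DMD cultures with and without nicorandil
print(ldh_cytotoxicity(sample=0.95, spontaneous=0.20, maximum=1.60))  # untreated
print(ldh_cytotoxicity(sample=0.45, spontaneous=0.20, maximum=1.60))  # + nicorandil
```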
Gartz and colleagues [142] investigated the cardioprotective effects of exosomes on DMD cardiomyocytes. They found that exosomes collected from conditioned media of both DMD and healthy hiPSC-CMs decreased injury-induced ROS levels in DMD cells. Additionally, exosomes led to inhibition of Bax expression and mitochondrial translocation, which are known to trigger opening of the mitochondrial permeability transition pore (mPTP), resulting in caspase activation and cell death [143]. Furthermore, exosomes were found to reduce the stress-induced loss of mitochondrial membrane integrity and early mPTP opening in DMD cardiomyocytes, as well as to decrease caspase 3/7 levels and subsequent apoptosis. By pretreating exosomes with trypsin, the researchers discovered that surface proteins are required to exert the protective effect. Trypsin-treated exosomes failed to initiate ERK1/2 phosphorylation, an important component of the mitogen-activated protein kinase (MAPK)-mediated response, which is responsible for dictating cell death or survival [144]. Additionally, inhibition of p38 MAPK reversed the exosome-protective effects, indicating its specific involvement in cardioprotection. These results demonstrate the protective potential of cardiomyocyte-secreted exosomes in DMD cardiomyocytes, specifically the involvement of surface proteins, ERK1/2, and p38 MAPK.

Gartz and coauthors [145] further investigated the involvement of secreted exosomes in the pathological response of DMD hiPSC-CMs to metabolic stress. Exosomes, known to play an important role in intercellular communication and signaling [142,[146][147][148], were studied in DMD [146][147][148][149][150] and found to be involved in the modulation of inflammatory processes, oxidative injury, mitochondrial respiration, myocyte differentiation and cell death. In this study, the researchers identified several exosomal microRNAs (exomiRs) dysregulated in DMD. Specifically, miR-339-5p was found to be upregulated in DMD exosomes and DMD hiPSC-CMs. The miR-339-5p level was also found to increase with cardiac injury and age in mdx mice, rendering it a potential cardiomyopathy-sensitive target in DMD. Interestingly, in DMD hiPSC-CMs, miR-339-5p-containing exosomes were found to be involved in stress-related mitochondrial-dependent cell death. Specifically, miR-339-5p, known to bind directly to MDM2, a p53 regulator, leading to its downregulation [151], was also found to decrease MDM2 levels in DMD hiPSC-CMs; accordingly, MDM2 was downregulated in stressed DMD hiPSC-CMs. Lastly, miR-339-5p knockdown in DMD cardiomyocytes led to the preservation of mitochondrial membrane potential and a reduction in stress-induced cell death, indicating its important role in modulating the cellular response to stress in DMD hiPSC-CMs. Collectively, these findings emphasize the importance of exosomal miR-339-5p in the cellular stress response, which may also serve as a basis for future research as a therapeutic target.
Duelen and coauthors [152] investigated the involvement of NOX4 in oxidative stress in DMD hiPSC-CMs. NOX enzymes, known to generate reactive oxygen species (ROS) in various cellular processes [153], are also involved in cardiovascular diseases [154,155]. On the basis of this knowledge, these researchers targeted NOX4 (mainly expressed in cardiomyocytes) in DMD hiPSC-CMs. Contrary to our report and to findings in mdx mice [40,106], DMD cardiomyocytes displayed a reduction in I Ca,L density, although, similarly to our findings, they observed arrhythmogenic firing patterns including DADs and oscillatory prepotentials (OPPs). Another finding in line with ours was APD prolongation in DMD cells. Cell death, measured by flow cytometric analyses using annexin V and 7-amino-actinomycin D (7AAD), revealed that DMD cardiomyocytes underwent accelerated cell death compared to healthy cells. Furthermore, DMD cells had a higher ROS content compared to healthy cardiomyocytes. Treatment with N-acetyl-L-cysteine (NAC), ataluren (PTC124) and idebenone increased DMD cell survival and reduced ROS levels. By means of JC-1, a membrane-permeant, lipophilic, cationic, voltage-sensitive fluorescent dye, Duelen and coworkers measured mitochondrial membrane potential. Similar to Lin et al. [41], they found that DMD hiPSC-CMs displayed increased levels of mitochondrial depolarization, which were improved by treatment with NAC, ataluren, and idebenone. Consistent with previous reports of increased NOX4 expression in heart failure [154][155][156][157], NOX4 was also upregulated in DMD hiPSC-CMs. Treatment with PTC124, NAC and idebenone reduced NOX4 expression. To assess the role of NOX4 in the increased ROS levels, antisense locked nucleic acid (LNA) GapmeRs were used to degrade NOX4 mRNA. Indeed, NOX4 degradation led to reduced NADPH-dependent ROS production. DMD cardiomyocytes displayed increased NOX4 levels, and ROS production was reduced following the addition of idebenone; the addition of ATP, a NOX4 regulator, together with idebenone treatment also reduced ROS production. Lastly, contractile function assessed in 3D EHT constructs showed that DMD cardiomyocytes displayed lower contraction amplitude compared to healthy cells; this decrease was rescued by PTC124, NAC, and idebenone.
Our group investigated various metabolic and bioenergetic aberrations in DMD cardiomyocytes [158]. We first focused on polar metabolites (such as phospho-sugars, organic acids, nucleotides, and fatty acids) by means of liquid chromatography/mass spectrometry (LC-MS)-based metabolomics to study the central carbon metabolism of DMD cardiomyocytes. Indeed, nine metabolites differed in DMD cardiomyocytes compared to healthy cells, pointing primarily toward an impairment in fatty acid oxidation. Next, we used labeled glucose tracing to study cellular glucose-derived metabolites. Indeed, DMD cardiomyocytes exhibited lower incorporation of glucose-derived carbon into the TCA cycle, indicating decreased mitochondrial glucose oxidation. A Seahorse XFe96 flux analyzer was used to assess mitochondrial respiration and oxidative phosphorylation by measuring the oxygen consumption rate (OCR). DMD cardiomyocytes displayed decreased basal respiration and decreased ATP production coupled to oxygen consumption compared to healthy CMs. Interestingly, uncoupling ATP synthesis from the mitochondrial respiratory chain restored the respiration rate, suggesting a defective mitochondrial ATP synthase (Complex V). To examine mitochondrial number and morphology, electron microscopy (EM) analysis was performed. DMD cells contained a higher number of mitochondria compared to healthy CMs. Additionally, a higher frequency of aberrations, including increased size, disrupted cristae and multiple focal swelling areas, was seen in DMD cardiomyocytes. To further investigate the association between morphological abnormalities and impaired mitochondrial function, we measured mitochondrial activity by means of live confocal 3D imaging. Fluorescent staining revealed that mitochondrial activity was attenuated in DMD cells compared to healthy cells. Taken together, these findings suggest that bioenergetic and metabolic impairments are involved in DMD cardiac pathophysiology and should be further investigated in search of novel therapeutic targets.
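Mito stress-test runs of this kind derive their headline parameters from OCR values bracketing the sequential injection of oligomycin, FCCP, and rotenone/antimycin A. The sketch below shows that standard arithmetic on invented OCR numbers; it follows the conventional definitions rather than the exact analysis of the cited study.

```python
# Sketch of the standard mito stress-test arithmetic on a Seahorse-style
# OCR trace (oligomycin -> FCCP -> rotenone/antimycin A). OCR numbers are
# invented for illustration; units are pmol O2/min.
import numpy as np

ocr_baseline = np.mean([180.0, 175.0, 182.0])   # before any injection
ocr_post_oligo = np.mean([80.0, 78.0])          # after blocking ATP synthase
ocr_post_fccp = np.mean([260.0, 255.0])         # uncoupled (maximal) respiration
ocr_post_rot_aa = np.mean([30.0, 32.0])         # non-mitochondrial oxygen use

basal = ocr_baseline - ocr_post_rot_aa          # basal respiration
atp_linked = ocr_baseline - ocr_post_oligo      # ATP-linked respiration
spare = ocr_post_fccp - ocr_baseline            # spare respiratory capacity
print(f"basal = {basal:.0f}, ATP-linked = {atp_linked:.0f}, spare = {spare:.0f}")
```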
Gene Therapy and Gene Editing in DMD hiPSC-CMs
Current treatments for DMD can, at best, slow the disease course and alleviate some symptoms [19,159,160]. However, these treatments do not fundamentally change the outcome, and patients rarely live beyond their late 20s [18,161]. Furthermore, since around one-third of DMD cases originate from de novo mutations with no familial background [162,163], preconception genetic testing is expected to miss a significant proportion of cases. Hence, to truly cure DMD, genetic correction is needed.
Different methods can be used to attempt genetic repair (Figure 2). Early efforts to genetically correct DMD hiPSCs were conducted using gene therapy. Kazuki et al. [164] demonstrated the successful delivery of a human artificial chromosome (HAC) containing the complete dystrophin gene. Zatti and coauthors [165] successfully delivered an HAC containing WT dystrophin into DMD hiPSC-CMs; these authors found that differentiated cardiomyocytes expressed all dystrophin isoforms and displayed corrected protein localization, in addition to normal Ca 2+ transients. Choi and colleagues [166] also reported correction of abnormal dystrophin gene expression profiles upon HAC delivery into DMD cardiomyocytes. Dick and coauthors [167] chose minigene delivery and exon-skipping to restore dystrophin expression in DMD cardiomyocytes. Additionally, Howard et al. [168] found that a micro-dystrophin transgene containing the main functional domains was sufficient to preserve cardiac function and prevent fibrosis and inflammation in a DMD mouse model. While these techniques are useful tools for research, clinical application is not easily feasible as delivered HACs and minigenes are only transiently expressed in host cells [169]. In addition, delivery of such large complexes is a difficult challenge and carries a risk of nonspecific integration into the host DNA [170,171].

Another approach to correct DMD mutations is to edit the mutated gene directly. Gene editing requires a break in the DNA sequence at a specific position, followed by repair, which includes replacement of the excised DNA with a new sequence. Cellular DNA repair mechanisms include two options [172][173][174]. The first is homology-directed repair (HDR), which is based on a new sequence introduced to repair double-strand breaks (DSBs). This enables precise correction of mutations in the DNA sequence. The DNA template used for repair can be coupled and delivered together with an endonuclease which performs the DSBs. Delivering a template to be utilized by HDR is the preferred mechanism for correction of small genetic mutations which require an insertion of a relatively small new sequence [175]. The second option is nonhomologous end-joining (NHEJ), which joins two DNA break ends, but at the cost of frequent small insertions/deletions (indels), potentially disrupting the open reading frame (ORF) and subsequently leading to RNA degradation through the nonsense-mediated decay (NMD) mechanism or production of a nonfunctional protein. NHEJ is useful to correct large frame-shifting deletions or duplications, by means of deleting additional base pairs to achieve ORF restoration or excising the duplicated segment to accomplish complete correction, respectively [175,176].
To achieve safety and effectiveness, the process of introducing an exogenous endonuclease alongside a DNA template must be highly precise, with minimal off-target effects. To date, three main systems are in use for performing DNA DSBs and editing:

• Clustered regularly interspaced short palindromic repeats (CRISPR)/CRISPR-associated protein 9 (Cas9) endonuclease comprises an anti-bacteriophage bacterial system [177]. CRISPR are bacterial DNA segments used to store antiviral genetic information. Upon infection with a virus containing sequences homologous to RNA transcribed from the CRISPR locus, effector proteins can be guided by the latter to cleave the viral genome. Cleavage is performed by endonucleases such as Cas9, Cpf1, and Cas12. The guide RNA sequence (gRNA) can be edited and, together with the endonuclease, delivered into cells using different types of vectors [178,179].

• Zinc finger nucleases (ZFNs) are a group of enzymes capable of cleaving specific sites in DNA. ZFNs consist of DNA-binding zinc finger arrays and a FokI-based endonuclease [180]. Despite the high specificity of ZFN binding, the process of producing precise zinc fingers for specific DNA segments is considered relatively difficult and expensive [181].

• Transcription activator-like effector nucleases (TALENs) are based on proteins produced by Xanthomonas bacteria, which are capable of altering plant gene expression [182,183]. Like ZFNs, TALEN production is difficult and expensive compared to CRISPR and its associated endonucleases [184,185].
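At its simplest, SpCas9 guide design starts with a scan for 20-nt protospacers followed by an NGG PAM. The sketch below enumerates such candidates on one strand of a made-up DNA segment; a real design pipeline would also scan the reverse strand and score off-targets.

```python
# Sketch: enumerating candidate SpCas9 protospacers (20 nt + NGG PAM) on
# one strand. The sequence is an invented stand-in, not a real DMD exon.
import re

seq = "ATGCTGTGGTGGAAAGGCCGGTTACCTGAACGGTACCGGATCCTGG"

# A lookahead allows overlapping candidates to be reported.
for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", seq):
    protospacer, pam = m.group(1), m.group(2)
    print(f"pos {m.start():2d}  gRNA 5'-{protospacer}-3'  PAM {pam}")
```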
Off-target effects are a major concern when considering the clinical application of gene editing. These include the possible disruption of oncogenes, tumor suppressor genes and DNA repair genes [176,186,187]. An additional issue is the possibility of an immune response against elements of the delivered DNA-binding and endonuclease complex, or even against corrected gene products [176,188]. Despite these concerns, it is widely agreed among researchers and clinicians that genome editing carries the potential to cure genetic diseases previously considered incurable.
To develop an approach that potentially benefits a larger target population, Young et al. [189] utilized the CRISPR/Cas9 system to delete exons 45-55 and most of their adjacent introns. The rationale is that, since most DMD patients suffer from out-of-frame mutations located within the "hotspot" of exons 45-55, deletion of these exons and restoration of the ORF will potentially benefit most patients [190,191]. Moreover, in-frame mutations in the same region are associated with the milder dystrophinopathy, Becker muscular dystrophy (BMD) [192]. The successful restoration of the reading frame was confirmed via sequencing, and the corresponding protein expression was validated by immunohistochemistry and Western blot (WB). As a functional test, hiPSC-CMs and skeletal myocytes were exposed to osmotic stress. Indeed, in hypotonic solution, creatine kinase (CK) release was reduced in corrected cells to levels similar to those of healthy cardiomyocytes, indicating that the restored dystrophin provided sufficient membrane stability. Furthermore, miR-31 levels, which are reportedly high in DMD cells [193,194], were also reduced to levels comparable to BMD cells. Lastly, immunostaining and WB demonstrated successful restoration of β-dystroglycan in corrected skeletal myocytes. Additionally, hiPSC-derived skeletal myocytes were injected into NOD-scid IL2Rgamma (NSG) mdx mice, and dystrophin and β-dystroglycan were found only in the corrected and healthy cells. In conclusion, ORF restoration reinstated key functional cellular aspects of dystrophin, and this approach has proven to be an important therapeutic strategy.
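The frame logic behind such multi-exon deletions reduces to checking whether the total deleted coding length is a multiple of 3. The sketch below illustrates this with exon lengths that approximate the published sizes of human DMD exons 45-55; they should be verified against a current annotation before any real use.

```python
# Sketch of the reading-frame check behind multi-exon deletion strategies:
# a deletion is in-frame when the deleted coding length is a multiple of 3.
# Exon lengths approximate published human DMD exon sizes (verify before use).
exon_len = {45: 176, 46: 148, 47: 150, 48: 186, 49: 102, 50: 109,
            51: 233, 52: 118, 53: 212, 54: 155, 55: 190}

def deletion_in_frame(exons):
    return sum(exon_len[e] for e in exons) % 3 == 0

print(deletion_in_frame(range(45, 56)))  # exons 45-55: expected in-frame (True)
```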
Zhang and colleagues [195] applied a similar principle, as skipping exon 51 can potentially restore the ORF in ~13% of DMD exon deletion mutations [196]. This group used two Cpf1-mediated CRISPR approaches to restore the ORF in hiPSCs carrying an out-of-frame deletion of exons 48-50. The first approach involved a single gRNA aimed at introducing indels in exon 51 by nonhomologous end joining (NHEJ). Indeed, corrected hiPSCs were reframed successfully and displayed an in-frame connection of exons 47 and 51. The other approach involved two gRNAs to achieve ORF restoration by skipping exon 51 and connecting exons 47 and 52. Restoration of the truncated protein was confirmed by WB analysis and immunocytochemistry. As a functional test, the mitochondrial DNA (mtDNA) content and cellular respiration rate of hiPSC-CMs were measured. Corrected cells displayed increased mitochondrial numbers and oxygen consumption rate (OCR) compared to DMD cells and comparable to WT. Overall, these results demonstrate another promising approach in which individual DMD patients with different mutations may benefit from a "generic" ORF restoration.
Kyrychenko and coauthors [197] compared three approaches to restore the ORF in hiPSCs carrying a DMD mutation (deletion of exons 8-9) involving the actin-binding domain-1 (ABD-1) [12]. The first approach was deleting exons 3-7, leaving out actin-binding sites 1 and 2 (ABS-1 and ABS-2). Another strategy involved deleting exons 6-7 and excising ABS-3. The third method was deleting exons 7-11, leaving all three ABS regions intact. The respective dystrophin expression was validated by WB analysis and immunocytochemistry. As a functional assessment, spontaneous Ca 2+ transients were analyzed in hiPSC-CMs. Indeed, release and reuptake parameters, including time to peak, Ca 2+ decay rate, and transient duration, were higher in the uncorrected DMD hiPSC-CMs compared to WT and isogenic cells. Interestingly, while corrected cells with exons 3-9 deleted displayed normalized Ca 2+ transient properties, corrected hiPSC-CMs carrying a deletion of exons 6-9 or 7-11 displayed only partial improvement of the Ca 2+ transient parameters. These results indicate that, among the three approaches to ORF restoration, the one leading to a deletion of exons 3-9 yields the most functional protein. Additional functional assessment was performed by means of engineered heart muscle (EHM). Consistent with the Ca 2+ transient results, enhanced contractile performance was observed in all corrected cell lines, with the most prominent effect in hiPSC-CMs carrying the deletion of exons 3-9. Overall, these results suggest that, of the three approaches, the correction leading to a deletion of exons 3-9 was the most effective in restoring functionality, implying the importance of ABS-3 over ABS-1 and ABS-2 of the ABD-1 domain. This also demonstrates the importance of functional testing, which serves as a measure to evaluate the different approaches, specifically because there is no conclusive explanation as to why the correction leaving all three ABS regions intact (a deletion of exons 7-11) was less effective than the approach leaving out ABS-1 and ABS-2.
Long et al. [198] identified single gRNAs that, together with the CRISPR/Cas9 system [199], are capable of exon skipping and ORF restoration in ~60% of DMD mutations. As a demonstration, they treated three different types of mutations. DMD hiPSCs carrying a deletion of exons 48-50 were corrected by destruction of the splice acceptor of exon 51, thereby allowing splicing of exon 47 to exon 52 and restoration of the ORF. DMD cells carrying a nonsense point mutation in intron 47 were corrected directly, leading to permanent skipping of the pseudo-exon and restoring full-length dystrophin. Lastly, DMD hiPSCs carrying a duplication of exons 55-59 were corrected by targeting the junction of intron 54 and exon 55, thus leaving out the duplicated region. Correction was confirmed via RT-PCR, and dystrophin levels were measured by immunocytochemistry and WB. Three-dimensional engineered heart muscle (3D-EHM) was used to assess the functionality of WT, DMD, and corrected hiPSC-CMs; the force of contraction (FOC) was improved in corrected cells. Furthermore, this group found that 30-50% of cardiomyocytes need to be repaired for partial (30%) or maximal (50%) rescue of the contractile phenotype. In summary, these results support the notion of developing treatments benefiting groups of patients rather than individual mutations. Furthermore, the insistence on using single gRNAs simplifies the process and potentially increases efficacy.
Yuan and coauthors [200] utilized the previously published method of fusing nuclease-inactivated Cas9 with an activation-induced cytidine deaminase (AID) to convert targeted C to T or G to A and subsequently restore the ORF in DMD hiPSCs lacking exon 51 [201,202]. The group successfully skipped exon 50 of dystrophin, thus restoring the ORF in 99.9% of dystrophin transcripts, as demonstrated by high-throughput sequencing (HTS). Corrected hiPSC-CMs released normal levels of creatine kinase (CK). Additionally, expression of miR-31 was suppressed in the corrected hiPSC-CMs, and β-dystroglycan levels were increased to levels comparable with healthy hiPSC-CMs. Importantly, contrary to CRISPR/Cas9 approaches that involve DSBs, targeted AID-mediated mutagenesis (TAM) does not require NHEJ or HDR; therefore, it is significantly less prone to indels and has higher correction efficiency [203]. These benefits strongly suggest that TAM is superior to other CRISPR/Cas9 correction methods for DMD and should be favored when applicable.
Sato and colleagues [204] investigated the effects of exon skipping on Ca 2+ handling in DMD hiPSC-CMs. They used phosphorodiamidate morpholino oligomers (PMOs) to skip exon 45 and restore the ORF in cells carrying a deletion of dystrophin exons 46-55. By means of Fluo-4, they measured Ca 2+ transient parameters following treatment and detected an increased amplitude, as well as a decrease in the rates of rise and decay, all improvements resembling healthy cells. Additionally, the number of fluorescence-detected irregular-pattern events suggestive of arrhythmogenicity decreased following exon 45 skipping. Taken together, these results demonstrate the effectiveness of the exon-skipping strategy in improving functional abnormalities in DMD cardiomyocytes.
Chemello et al. [205] investigated correction of exon deletion mutations using base and prime editing. For base editing, they used an adenine base editor approach with ABEmax-SpCas9 to induce exon 50 skipping by means of swapping the GT to GC sequence and restoring the ORF in DMD cells carrying an exon 51 deletion. The prime-editing strategy was used to introduce a swap of the TT to GT sequence in exon 52, resulting in ORF restoration. RT-PCR, sequencing, WB analysis, and immunocytochemistry confirmed correction and the resulting truncated protein. Next, the researchers tested the effect of correction on cardiomyocytes treated with isoproterenol. Indeed, corrected DMD cardiomyocytes displayed a reduction in arrhythmogenic Ca 2+ transients compared to DMD cardiomyocytes, similar to healthy cardiomyocytes. Overall, these results demonstrate the feasibility of both genetic correction approaches in alleviating DMD cardiac pathophysiology.
Zhang and coauthors [206] devised a new approach for delivering Cas9 as part of CRISPR/Cas9 genome editing. Instead of using Streptococcus pyogenes Cas9 (SpCas9), which requires a dedicated adeno-associated virus (AAV) vector aside from the sgRNA vector, they used a Cas9 ortholog from Staphylococcus aureus (SaCas9), which is small enough to be packed together with sgRNA into a single AAV vector. This strategy was then utilized to deliver SaCas9-mediated single-cut gene editing to correct an exon 48-50 DMD deletion, avoiding possible unwanted genomic modifications due to the "double-cut" strategy with dual sgRNAs [207,208]. Their aim was restoration of the ORF of exon 51 via a two-nucleotide deletion. Indeed, the reading frame of hiPSCs carrying a DMD deletion of exons 48-50 was corrected accordingly. Furthermore, following correction, Ca 2+ transient parameters in hiPSC-CMs, such as time to peak and rate of decay, were comparable to healthy cells, contrary to their abnormal elevation prior to correction. Additionally, genotoxicity analysis demonstrated no significant off-target editing, suggesting that this method is safe and effective at restoring the ORF and achieving functional restoration in DMD hiPSC-CMs.
Atmanli and colleagues [209] investigated various effects of CRISPR/Cas9-mediated corrections in DMD hiPSC-CMs. They generated two different hiPSC lines to restore the ORF in cells carrying a deletion of DMD exon 44: one using a single-nucleotide insertion to achieve reframing and the other skipping exon 45 to restore the ORF. The correction of DMD hiPSC-CMs restored cellular area, volume and sarcomere length back to normal. DMD cardiomyocytes showed Ca 2+ handling abnormalities, including increased release and reuptake times, in addition to impaired localization of RyR2s, all of which were corrected in the truncated dystrophin lines. To test the effects of the corrections on contractility, the researchers used a polydimethylsiloxane matrix as a substrate for seeding cardiomyocytes, allowing auxotonic contraction. By means of video detection, they demonstrated decreased systolic force and prolonged relaxation time in DMD cells, both of which were corrected in the truncated DMD lines. Furthermore, they found that ORF restoration reversed changes in gene expression, including two hallmark DCM transcripts, NPPA and NPPB [210], which were downregulated in DMD compared to corrected hiPSC-CMs. The researchers also tested changes in membrane tension by means of the fluorescent membrane tension probe Flipper-TR [211]. They found that DMD cardiomyocytes displayed higher membrane tension compared to healthy and corrected cells, demonstrating the important structural role of dystrophin, which is also preserved with the truncated protein. Lastly, the investigators corrected DMD cardiomyocytes directly, after the appearance of cellular abnormalities. Interestingly, after loading the cardiomyocytes with the voltage-sensitive probe FluoVolt to measure arrhythmogenicity, they found a reduction in arrhythmogenicity in corrected and healthy cells compared to DMD cardiomyocytes. In summary, this robust work demonstrated important functional, transcriptional and structural abnormalities in DMD cardiomyocytes, all of which were rescued by CRISPR/Cas9-mediated ORF restoration.
Summary
This article reviewed the DMD cardiomyopathy-related research conducted in recent years in hiPSC-CMs, as summarized in Table 1. The importance of investigating DMD pathophysiology in an in vitro human model is emphasized by the limitations of the currently popular animal model, the mdx mouse [26][27][28][212]. Furthermore, the breakthrough reprogramming technique enables the generation of patient- and mutation-specific hiPSC lines, providing an endless pool of cells carrying specific mutations for research [49][50][51][55][56]. Indeed, research on DMD hiPSC-CMs has yielded novel findings thus far, as well as validation of results previously reported in other models. Furthermore, with the introduction of gene-editing techniques, promising therapeutic possibilities are becoming more likely, including a complete genetic cure. These need to be tested first in vitro, and hiPSCs are the ultimate model for such research. Furthermore, gene editing enables the generation of isogenic controls, which provide significantly stronger validation of results obtained using the hiPSC model. Lastly, although extensive research is needed before clinical application, hiPSCs and hiPSC-derived cells may be used in the future in regenerative medicine to repair damaged tissue which may otherwise not be treatable.
Bridging the Gaps: Bole and Terra Sigillata as Artefacts, as Simples and as Antibacterial Clays
Medicinal earths are an important and yet, so far, little scientifically explored archaeological resource. They are almost always identified by their source locality. Our work over the last few years has focused on their chemical and mineralogical characterization and their testing as anti-bacterials. This paper presents the results of the mineralogical analysis and antibacterial testing of six medicinal earths, bole or Terra Sigillata (stamped earth) of unknown date and provenance in the Pharmacy Museum of the University of Basel. Only one of them, a red (Armenian?) ‘bole’, was found to be antibacterial against both Gram-positive and Gram-negative bacteria. A yellow powder of Terra Tripolitania was mildly antibacterial and against one pathogen only. We argue that medicinal earths are in a pivotal place to bridge the gap between currently dispersed pieces of information. This information relates to: (a) their nature, attributes, and applications as described in the texts of different periods, (b) the source of their clays and how best to locate them in the field today, and (c) the methods employed for their beneficiation, if known. We propose that work should be focused primarily onto those medicinal earths whose clay sources can be re-discovered, sampled and assessed. From then on, a parallel investigation should be initiated involving both earths and their natural clays (mineralogy at bulk and nano-sized levels, bio-geochemistry, microbiological testing). We argue that the combined study can shed light into the parameters driving antibacterial action in clays and assist in the elucidation of the mechanisms involved.
trying to apply chemistry to myths' (quoted in [6] (p. 458)). Thomson [12] carried out analysis on samples of 16th century troches or stamped pellets of Lemnian Earth with the same results, concluding that it was a clay with no pharmacological value (for a detailed presentation of early Lemnian Earth chemical analyses see MacGregor [8]).
Our work over the past twenty years on the purported place of its extraction, at Kotsinas in the north-east of Lemnos, has resulted in the sampling of local deposits of sedimentary clays and of the waters of local springs [13,14]; the latter seem to have played a role in the 'preparation' of the earth, at least as hinted at by visitors to the island in the Ottoman period [14]. In the last three years we have also undertaken analysis and preliminary (qualitative) antibacterial testing of pellets of Lemnian Earth (sphragides) from the collection of the Pharmacy Museum of the University of Basel [15] (see also Figure 1h,j). More recently, quantitative assessment of the same samples showed that two of the three artefacts were antibacterial against both Gram-negative and Gram-positive pathogens, while the third was mildly so and against only one of the two bacteria tested [16].
For purposes of clarification, in this paper we refer to the museum artefacts as 'earths' or 'stamped earth' or 'bolus' to denote the final product or 'medicine' and to differentiate them from the raw material, the 'clay'. By 'clay' we refer to the mixture of clay and non-clay minerals that made up the 'earths', and which was almost always subjected to some kind of beneficiation. Earths are 'medicinal' and were perceived as such for a number of reasons, for example as adsorbents of toxins [7].
In this paper we present the results of the mineralogical and microbiological analysis of six earths in the collection of the Pharmacy Museum of the University of Basel, of unknown date and provenance (Figure 1a-g). The three samples of 16th-18th century Lemnian Earth are included here both for purposes of illustration and for comparison of analytical results (see Discussion) (Figure 1h,j). Regarding the six samples, the Museum's catalogue refers to these earths as Terra Sigillata (stamped earth) but, in fact, few were stamped and, when they were, the stamp was nearly illegible. Therefore, their provenance and date cannot be known for certain. Apart from Terra Sigillata, two additional descriptors are given, and for two of the six samples only: sample 01277 (Figure 1c) was referred to as Terra Tripolitania, presumably from a locality in N.W. Libya's region of the same name, and sample 01406 (Figure 1b) was labelled Terra Sigillata alba = bolus.
Among the Dutch, German, and Swiss texts of the 17th to the 19th centuries, references to red and white medicinal clays, usually termed bolus Armenicus or rubrus and bolus albus respectively, give a reasonably consistent picture of their medicinal properties. For example, Schröder's pharmacopoeia [17] mentions rubrica as an astringent, drying agent with strengthening properties, used for various wounds, bleeding, anything requiring absorption and, externally, for poultices. In a lexicon of 1721, Lemery [18] refers to the red bolus being used internally for diarrhea (and similar intestinal ailments) and bloody coughs; good for reducing acidity, and externally it is a suitable styptic (Ianto Jocks, pers. comm.). The white clay dries and cleans, and externally it is applied to wounds and ulcers. On the other hand, Albrecht von Haller's Pharmacopoeia Helvetica of 1771 [19] (p. 43, 2nd vol.) mentions its use for fevers, epidemics, or vomiting, but does not mention external use. The first notable description of Armenian bolus as a simple (one-ingredient drug) appears in Galen [20] (pp. 189.7–14, 192.1–3). It was described as yellowish (chroan ōchra) in color and was designated as a stone (lithos) by the person who gave it to Galen, although Galen himself believed that it would be better categorized as a type of earth (gē), since it formed a paste when mixed with water. It was soft and easily ground without any coarse components. Dioscorides [21] (pp. 63.16–19) described the so-called armenion as blue in colour (chroian kyaneon) and similar to chrysokolla in action. Armenian bole was used for various kinds of stomach ailments, for mouth ulcers, and was administered to those suffering from breathing difficulties [20] (p. 190).
It was also recommended for the treatment of infectious diseases causing epidemics (e.g., bubonic plague, typhoid fever, smallpox). Galen mentions that it was used in the Antonine Plague in Rome (AD 165-180), but the exact nature of that epidemic is not clear [20] (pp. 13–14). The reference to 'quartan fever' in the text of the 6th-century author Alexander of Tralles is significant for the reasons outlined in the Discussion.
In this paper we present the results of the mineralogical (X-ray diffraction) analysis and the microbiological testing of the six earths against one Gram-positive and one Gram-negative pathogen, to assess their relative antibacterial strengths and to compare them with similar results from previous work and, in particular, with two samples of Lemnian Earth. Furthermore, we undertook DNA sequencing work on two samples, 01405 and 01406, one bioactive and the other non-bioactive, respectively. The investigation aimed to determine the type of microorganisms present and, if present, whether they contributed to the bioactivity of one of the two samples.
Bioactivity would be imparted via the production and excretion of secondary metabolites. Secondary metabolites are chemical compounds, not essential for growth, but exuded by the microorganisms themselves when under stress and for the purpose of aiding the organism's survival. They can achieve the latter by reducing competition from other microorganisms or by detoxifying their immediate environment. They are often water-soluble or volatile and affect other organisms at a distance [25]. Secondary metabolites can be anti-oxidants, sequestrants (they form chelate compounds with metal ions and thus remove the latter from solution), antimicrobials, or redox regulators reacting to chemical stressors (for example, free radicals or reactive oxygen species). Chemical detection of secondary metabolites is quite complex analytically. Before embarking on such work, it is therefore essential to ascertain, via their residual DNA signatures, that microorganisms are indeed present and to identify them.
X-ray Diffraction (XRD)
The mineralogical composition of all samples was determined with X-ray diffraction (XRD), at the School of Mineral Resources Engineering, Technical University of Crete (Chania, Greece), on a Bruker D8 Advance Diffractometer equipped with a Lynx Eye silicon strip detector, 0.6° divergence and receiving slits, using Ni-filtered CuKα radiation (35 kV, 35 mA). Data were collected over the range 3-70° 2θ with a step size of 0.02° and a counting time of 1 s per strip step (total time 63.6 s per step). The XRD traces were analyzed and interpreted with the Diffrac Plus 13 software package from Bruker, Germany, and the Powder Diffraction Files (PDF). The quantitative analysis was performed on random powder samples (side-loading mounting) by the Rietveld method using the BGMN code (Autoquan© software package version 2.8, Seifert GmbH & Co, Ahrensburg, Germany). About 1 g of finely ground sample, <10 μm in size, was used for the analyses.
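Phase identification in XRD rests on Bragg's law, nλ = 2d·sinθ, which maps each diffraction peak position to a lattice d-spacing. The following minimal Python sketch is an illustration only; it is not part of the authors' workflow, and the example peak positions are standard literature values for two of the phases in Table 1, not measurements from these samples:

```python
import math

CU_K_ALPHA = 1.5406  # wavelength of Cu Kα1 radiation, in Å

def d_spacing(two_theta_deg, wavelength=CU_K_ALPHA):
    """Convert a diffraction peak position (in °2θ) to a lattice
    d-spacing (in Å) via Bragg's law, n·λ = 2·d·sin(θ), with n = 1."""
    theta_rad = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta_rad))

# Characteristic reflections of two phases reported in Table 1
for phase, peak in [("kaolinite (001)", 12.34), ("quartz (101)", 26.64)]:
    print(f"{phase}: 2θ = {peak}° -> d = {d_spacing(peak):.2f} Å")
    # kaolinite (001): d ≈ 7.16 Å; quartz (101): d ≈ 3.34 Å
```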
Bacterial Strains
The bacterial strains used in this study were representative Gram-positive and Gramnegative indicators, which are usually employed for the assessment of antimicrobial properties of various materials. Specifically, Staphylococcus aureus NCTC 12493 (Grampositive) and Pseudomonas aeruginosa NCTC 10662 (Gram-negative) were chosen for the evaluation of the samples' bioactivity. Both bacteria were cultured on LB agar (Lab M-Neogen Culture Media) and LB broth (Lab M-Neogen Culture Media) and the desired bacterial concentration in each experiment was adjusted photometrically based on the McFarland scale. Those bacterial species were selected because of their relation to public health issues, as carriers of diseases, and their use as valuable bacterial indicators.
Antimicrobial Tests
Aqueous leachates of the samples were prepared in order to assess any resulting bioactivity. Samples were mixed with deionized water and ultrasonicated for 30 min at 25 °C (Julabo ultrasonic bath), followed by centrifugation at 10,000×g for 15 min to remove solids. The leachate was sterilized in an autoclave (20 min, 120 °C) and stored for further antimicrobial testing.
Bioactivity of all samples was studied using the broth microdilution method and estimating the Minimum Inhibitory Concentration (MIC).
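As a minimal illustration of how an MIC60 is read from a broth-microdilution series, the Python sketch below (hypothetical values for one leachate, not data from this study) returns the lowest tested concentration that reduces the initial bacterial load by at least 60%:

```python
def mic60(reduction_by_conc):
    """Return the lowest tested concentration (mg/mL) achieving >= 60%
    reduction of the initial bacterial load, or None if never reached.
    `reduction_by_conc` maps concentration (mg/mL) -> percent reduction."""
    for conc in sorted(reduction_by_conc):
        if reduction_by_conc[conc] >= 60.0:
            return conc
    return None

# Hypothetical two-fold dilution series for one leachate (illustration only)
series = {12.5: 65.0, 25.0: 78.0, 50.0: 90.0, 100.0: 97.0, 200.0: 100.0}
print(mic60(series))  # -> 12.5, the kind of behavior reported for sample 01405
```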
XRD
With the exception of sample 01277, the earths have comparable mineralogical compositions, consisting mainly of clay minerals (mostly kaolinite) and quartz, with minor illite and trace K-feldspar, plagioclase, and anatase (Table 1). In addition, samples 01405 and 01629-1 contain minor goethite and hematite, which explain their red/brown color (Figure 1a,f). The presence of hematite and goethite explains the slightly elevated background of 01405 and 01629-1 compared with the remaining samples; the background was modeled adequately during analysis and did not affect quantitative analysis (see Figure S1). Sample 01405 also contains trace chlorite. Sample 01627 has a slightly different clay mineralogy, as it also contains abundant illite, in roughly equal proportion to kaolinite, as well as trace siderite and sulfate minerals, namely alunite and anhydrite. Sample 01277, the powder of Terra Tripolitania, has a distinct mineralogy, as it consists of dolomite, calcite, and quartz with minor kaolinite and illite and trace K-feldspar, plagioclase, and jarosite.
Antibacterial Activity
The bioactivity of the samples was tested using their leachates and estimating their MIC against two representative bacterial indicators, namely P. aeruginosa and S. aureus. Following the tests routinely performed for the activity of antibiotics, we assessed the inhibitory concentration of each sample that was sufficient to reduce the population of each strain by 60% (MIC60). The derived results are shown in Table 2, according to which the majority of the samples did not exhibit any substantial antimicrobial activity. Samples 01406, 01629-1 and 01629-2 achieved low levels of bacterial reduction, and only at their highest concentration (200 mg/mL) did they inactivate up to approximately 40% of the initial bacterial load. This result refers mainly to S. aureus, which proved to be slightly more sensitive under the current experimental conditions, despite being a Gram-positive bacterium and therefore generally more resistant to environmental stress. The marginally greater resistance of the Gram-negative P. aeruginosa could be attributed to a certain interaction, possibly a chemical one, between the samples' components and the Gram-positive bacterial strain (S. aureus). (For the bacterial reduction (in %) as a function of concentration for each sample, see Figure S2.)
Sample 01627 resulted in negligible bacterial reduction, while the only sample with considerable activity was 01405, since (a) it was active even at low concentrations (e.g., 12.5 mg/mL) and (b) it had similar behavior towards both bacterial species. The level of bacterial inactivation was increased when higher concentrations of the sample were used, and complete bacterial decay was achieved when the sample concentration was 200 mg/mL. Sample 01277 was mildly antibacterial against S. aureus only (100 mg/mL).
DNA Detection
The amount of DNA material extracted from the samples was below the detection limit (0.6 ng/g material), with absorbances at 260 nm and 280 nm < 0.001. Although the values fell below the range for quantification, PCR amplifications were still attempted. None of the assays for bacterial, fungal, or chloroplast DNA revealed detectable signals above the analytical background; as such, no DNA was detected in the samples. This may suggest that the samples had no exposure to any biological agents, but it remains possible that DNA and/or natural organic matter have degraded.
Discussion
Medicinal earths are a little-researched but important archaeological resource. They formed an integral part of ancient and more recent pharmacopoeias. MacGregor [8] outlined the history of use of these earths (bole or Terra Sigillata) as a trajectory from 'wonder drug to folk remedy', and as such they deserve to be investigated further with modern analytical techniques. So far, their study has been approached from a number of different perspectives: (a) historical, as natural materials used to cure various ailments; (b) archaeological, as catalogued artefacts in museum collections; (c) chemical and mineralogical, as the clay minerals (kaolinite/montmorillonite/illite) of which they are largely composed; and recently (d) for their selective bioactivity against specific pathogens (as antibacterials or antifungals). There is considerable research on the antibacterial properties of kaolins and smectites [31,32]. However, most of this work has been undertaken on samples from modern clay deposits and not on small quantities removed from historical samples with an acclaimed use over centuries. Can the study of medicinal earths contribute to antibacterial clay research?
In this paper we argue that medicinal earths can potentially bridge gaps in our information about the nature and function of antibacterial clays. This is because, as archaeological artefacts, they have a long history of use and associated efficacy. Our work over the last few years has been dedicated to the investigation of the antibacterial properties of some medicinal earths of the Aegean, like Samian Earth [33] and Lemnian Earth [15,16], as well as clay-iron oxide composites (miltos) [26] and alunogen and alunite with kaolinite minerals [34]. For example, we demonstrated that the antibacterial properties of Samian Earth may be attributed to its smectite being naturally enriched in boron [33].
In this contribution, we have analyzed six earths mineralogically and tested them for bioactivity against two pathogens, P. aeruginosa and S. aureus. Of the six, only one was found to be bioactive against both. This was the red earth 01405, which is presumed to be an Armenian bole. We have compared this bioactive sample with bioactive earths from Lemnos (Figure 1i,j). Comparing the bioactivity of sample 01405 with those obtained from samples 700.17 and 700.18 [16], sample 01405 exhibited higher antimicrobial activity (MIC60 = 12.5 mg/mL) for both bacteria compared with 700.17 and 700.18. Regarding S. aureus, the MIC60 values of 700.17 and 700.18 were 10 mg/mL and 20 mg/mL, respectively [16]. By contrast, against P. aeruginosa a higher concentration of the Lemnian Earths was required (the MIC60 of both 700.17 and 700.18 was 50 mg/mL). Based on our results, in all cases P. aeruginosa was more resistant in the presence of the tested samples, among which 01405 produced the highest reduction of bacterial load.
A third sample of Lemnian Earth 700.4 (white) (Figure 1h) contained 65.2% dolomite, 9.9% illite, 17.3% kaolinite, and 7.6% quartz [15]. Sample 700.4 is similar to 01277 despite the latter being more yellow than white; this is presumably on account of small quantities of jarosite and possibly of trace amounts of goethite, the abundance of which is below the detection limit of the method. Samples 01277 and 700.4 are both rich in calcite/dolomite and with only small amounts of kaolinite.
Both 01277 and 700.4 display some antibacterial activity against S. aureus (MIC60 of 01277 = 100 mg/mL and MIC60 of 700.4 = 45 mg/mL) and even weaker activity against P. aeruginosa (MIC60 of 01277 ≥ 200 mg/mL and MIC60 of 700.4 = 90 mg/mL). It is not clear why dolomite might have weak antibacterial properties. It has been reported that dolomite can develop antiviral (rather than antibacterial) properties only after it has been heated to between 800 °C and 1400 °C [35]. Dolomite in association with gypsum and calcite has been reported in the vicinity of Garyan, in the region of Tripolitania, N.W. Libya [36] (p. 35), suggesting a Libyan source for sample 01277.
In a forthcoming publication we have argued that, for the two samples of Lemnian Earth (700.17 and 700.18), the driving force behind their bioactivity may be attributed to their organic load, the secondary metabolites of the fungus Talaromyces spp., of the order Eurotiales, which includes Penicillium spp. [16]. We also concluded that bulk clay mineralogy (kaolinite/illite/montmorillonite), on its own, does not appear to drive bioactivity. This is expected inasmuch as the zeta-potentials of the clay minerals present (kaolinite, illite, chlorite along with smectite), as well as of the bacteria tested (S. aureus and P. aeruginosa), are negative in the pH range 4-10 [37][38][39]. Therefore, a mutual repulsion between the clay particles and the bacterial cells is expected in the aqueous leachates. However, nanoparticles other than those of clay minerals may also have a role to play.
In the same publication [16], and regarding the organic load of samples 700.17 and 700.18, we suggested that it may have been introduced 'intentionally', as deduced from an account of the extraction of the Lemnian Earth provided by Galen [20] (pp. 169-170), albeit quite a few centuries earlier. In the case of the red (Armenian?) bole (01405) examined here, no such organic load was identified, so an alternative 'driver' needs to be sought. It is possible that the chemistry of the leachate of 01405 might provide some insight. In this paper we have not been able to provide chemical analysis (major and trace elemental composition) of the leachates of the earths in the manner carried out in other studies [16]. Nevertheless, in the work carried out so far, the abundances of metalloids, non-metals and heavy metals in the leachates (e.g., B, Al, As, Hg, Sb, Se, Pb, Cu, Zn, etc.), which would be detrimental to bacterial cells and thus could in themselves be the key drivers of bioactivity, are at the level of a few ppb. Only when the clay is doped or is naturally enriched in one or more of these elements could they be the drivers of bioactivity [15,16,33].
Regarding the Armenian bole, Galen mentions that it originated in a mountain in the city of Bagouana, which is a Hellenised rendering of the ancient Armenian Bagawan, a place that was most probably near today's Diyadin in the Turkish province of Ağrı, very close to the modern Turkish-Armenian border ([40] (pp. 283.11–14); [41] (pp. 160.14–17); [42,43]). Today, one of the local tourist attractions in the area is the Meya caves, with an unspecified length of occupation. If there is a starting geographical/geological point in the investigation of Armenian bole, it could be that locality.
Many earths, in addition to Armenian bole, were thought to be 'against the plague', but it is not clear what precisely was meant by the 'plague'. Regarding the beneficial effects against quartan fever, most probably a type of malaria, as suggested by the 6th century author, Alexander of Tralles, research into the characterization of the secondary metabolites in one sample of Lemnian Earth (700.18) produced some intriguing results. It revealed bioxanthracene, a fungal secondary metabolite [15]. Bioxanthracenes are known to be used as antibiotics and against malaria [44].
As mentioned earlier, and based on the analyses presented here, we can provide no explanation for the antibacterial action of sample 01405, although chemical analysis of the leachate might have shed some light. It is not possible to know what effect long-term museum storage under environmentally uncontrolled conditions may have had on the antibacterial properties of the original sample. It may have altered existing properties or, indeed, may have imparted antibacterial properties to an earth that had none in the first place. Not all medicinal earths circulating in the post-medieval markets of Europe were genuine. Hence, the geochemical/biogeochemical characterization of the museum objects on its own cannot suffice.
As already mentioned in the introduction, medicinal earths are identified primarily by their geographical origin. We suggest that research into potentially antibacterial medicinal earths should begin not at the 'museum' but in the 'field', the original place of extraction of the clays used in the making of these medicinal earths. Not all of these places of origin can be easily identified. Even those localities that are 'confidently' known, like Kotsinas in NE Lemnos, the place of the extraction of Lemnian Earth, present the investigator with a whole host of geo-archaeological queries requiring clarification based on extensive surveying and sampling.
Further to the geology, there is a need to understand human agency. Beneficiation or 'washing' as is often referred to in the texts requires an understanding of the relevant practices entailed. Was it simply a case of levigating the extracted clay?
Once select clay deposit localities used in antiquity have been tentatively identified and practices tentatively characterized, then starts the 'long haul' of experimental work on both natural clays and archaeological samples, alike. As mentioned earlier, some earths, like Armenian bole, were thought effective in the treatment of infectious diseases causing epidemics. The parallel investigation of (a) earths in museum collections with 'secure' provenance and (b) clays from their geographical sources, with a battery of analytical techniques followed with microbiological testing, is highly likely to provide an insight into their antibacterial and potential other properties. These are ambitious and long-term projects and they require the combined efforts of many scholars and from many fields. However, given the ongoing fight against antimicrobial resistance and the need to find potentially 'new' agents that prevent bacteria and other microorganisms from developing this resistance, turning to the medicinal earths of antiquity and more recent times, for both insight and solutions, may prove to be a worthwhile task.
Supplementary Material
Refer to Web version on PubMed Central for supplementary material.
|
2022-07-15T15:27:39.824Z
|
2020-04-01T00:00:00.000
|
{
"year": 2020,
"sha1": "0c39b29072000e9016fc98fd8778917f162d02f9",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2075-163X/10/4/348/pdf?version=1586871635",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ce4cba26cf002544aa7299d2ab560f1def11b7eb",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geography",
"Medicine"
]
}
|
41878190
|
pes2o/s2orc
|
v3-fos-license
|
Air Pollution Estimation from Traffic Flows in Tehran Highways
Urban areas confront increasing air pollution because of growing urbanization, the expanding use of vehicles, and the development of economic activities. In this research, carbon monoxide concentration was analyzed and modeled as a pollutant along highways in Tehran. To this end, factors affecting the concentration of atmospheric pollutants were analyzed on the basis of geometric, atmospheric, and traffic data at five stations in Tehran, and models were finally run based on existing methods. The model predictions match the field data well.
Environmental impact studies are considered in traffic management plans and in road design; their purpose is to determine the transportation demand under the condition that its negative environmental effects do not exceed the standards. The analysis of carbon monoxide pollution levels in places where traffic is the main source of air pollution was selected as the main target of this research.
It must be mentioned that air quality is an essential issue in urban design and urban transportation planning. Although such analysis is not limited to predicting the concentrations of traffic-generated pollutants in urban streets, it is particularly important there, because the problem is most significant in urban streets.
In summary, in this study the factors affecting the concentration of pollutants in the atmosphere were surveyed in the vicinity of Tehran highways, near the road surface and the sidewalk, based on field data at 32 points across 5 stations. Calculations were carried out for short-term (one-hour) periods and usual traffic conditions in these areas.
The final purpose of the research is to provide a model for estimating carbon monoxide concentration in urban highways with good accuracy and, by investigating the causes of this pollution and predicting it, to set acceptable levels for traffic volume and other features of the system.
Literature Review
Understanding the characteristics of the air flows beside and above streets is necessary for understanding the transport and dispersion of pollutants in urban highways. There are three main approaches to this problem: full-scale measurement, reduced field measurement using physical (wind tunnel) models, and mathematical models [1].
Many studies of highways confirm that a vortex flow develops in the street when the roof-level wind blows perpendicular to the street axis. The result of this vortex flow is that pollutants are transported upwind at street level and then carried back with the wind, ultimately increasing pollutant levels on the leeward side [2]. Jacko et al. studied the transmission parameters of a point pollutant source at the center of a town using a wind tunnel model. They found that doubling the average building height, for buildings of equal density, also doubles the concentration of pollutants at ground level in urban areas. Accurate full-scale measurements of the wind profile in real urban highways were made by Oke & Nunez in 1977, Sheih in 1986, and Nakamura in 1988. Generally, these studies show a vortex forming in the street, but this is not true in all cases (especially at low wind speeds). Physically, wind tunnel model studies are simpler to guide and control than full-scale studies; however, inaccurate boundary conditions and incorrect scaling may cause errors. The studies of Hoydysh in 1988 and 1991 show that the pollutant concentration pattern depends on the symmetry of the path and the apparent aspect ratio of the block, that the concentration of pollutants decreases exponentially in the vertical direction, and that the concentration is higher on the leeward side of the street than on the windward side [3].
Numerical models used for urban highway problems can take several forms: some simulate the fluid flow and contaminant transport, while others are empirical models based on observational data. Advances in computer hardware technology have provided new opportunities for the simulation of environmental aspects.
Johnson et al. in 1990 and Shuzo et al. in 1992 investigated the wind flow in urban highways as a fluid and confirmed a number of wind tunnel results, such as the rotation of the wind vortex when the roof-level wind flow is perpendicular to the street [3].
In 1973, Johnson et al. built an empirical model based on full-scale observed data in the State of California. The model predicts the decrease of concentration away from a linear source on the upwind side and a linear decrease of concentration with height on the downwind side. They found that the roof-level wind direction controls the pattern of CO concentration: the CO concentration near the street surface on the leeward side is significantly higher than on the windward side [4].
Lin and Niemeier (1998) used observed traffic data to estimate hourly allocation factors and disaggregate traffic volume into hourly values. Such indirect methods inevitably lead to inaccuracies in emission modeling. In theory, numerical modeling of traffic flow on roads can provide every detail required for the calculation of traffic emissions; unfortunately, previous efforts failed to do this because of road network complexity [5].
L. Xia and Y. Shao (2005) used a Lagrangian traffic flow model. According to their study, the traffic flow model is simple but quite efficient. With the specification of travel behavior, their model was capable of simulating traffic flow on a road network and was applied successfully to Hong Kong Island. The simulated traffic flows in three cross-harbour tunnels and at three counting stations on the island for weekdays and weekends were compared with observations, and the temporal variations of traffic flow in the cross-harbour tunnels and at the counting stations were reproduced by the model at a satisfactory level [6].
METHODS
There are three appropriate methods for studying the factors affecting traffic pollutant levels [1]:

1. Developing two-stage (diffusion-distribution) models, in which the distribution mechanism is simulated with the Navier-Stokes equations under appropriate boundary conditions (numerical modeling).

2. Developing experimental models based on wind tunnel and field observations (wind tunnel simulation using a small-scale model).

3. Developing empirical models based on a true understanding of all factors influencing the concentration of pollutants, together with data collected at different sites over a wide range of traffic and atmospheric conditions.
Methods 1 and 2 require accurate models to predict pollutant emission rates based on traffic parameters. Because no accurate diffusion models have been prepared for Iran, two-stage modeling is not applicable. Furthermore, accurate measurement of the parameters in emission models is complicated and costly, comparison of laboratory data with field data is difficult, and wind tunnel simulation models require extensive laboratory facilities.
Given the purpose of this study and the available facilities, the first and second methods were not considered suitable. Therefore, the third method was used: the effective factors and field data are applied, and the relationships between geometric, atmospheric, and traffic conditions are analyzed. This method has the following advantages over the other methods [7]:

1. Because the pollutant levels and the effective parameters are analyzed together, it becomes possible to develop relationships that give equal weight to distribution and emission.

2. In terms of cost, these models, based on field observations, are the best option.

3. Empirical relationships are simple and do not require a powerful computer to estimate concentrations of pollutants for transportation planning purposes.

4. When empirical relationships for evaluating
Data collecting
Generally, the factors affecting the pollution concentration on urban highways are divided into four groups: traffic variables, geometric variables, meteorological variables, and the surrounding (background) concentration.

For the first three groups there are well-defined measurement methods and general agreement on how to quantify them, but the background concentration is more complicated. The background concentration is the amount of pollutant present in the air in the absence of traffic; the distribution of residential and industrial areas near the areas under study affects it [7].
Fig. 1: Positive correlation between CO concentration and hourly traffic volume
The main focus of this research is on the first three groups of variables, for the following reasons. First, traffic affects the pollution concentration at the ground surface, near urban streets, and in low-altitude areas. Second, studies show the dominant role of traffic in the CO concentration of Tehran: according to these studies, more than 90% of the CO production in Tehran arises from transportation. Third, the stations were deliberately selected in commercial and residential areas, away from industrial areas that produce CO gas.
The main difficulty with the field data in this study is the requirement to record the traffic, air pollution, and atmospheric data at the same time.
The locations of the existing measurement stations of the Tehran Air Pollution Control and the Environmental Protection Organization were checked first. Because most of the existing stations were beside crossroads and squares, and some others were set in wide open areas, there were generally no suitable stations representing the pollution conditions of urban streets.
Therefore, portable samplers were used. Portable sampling also has some advantages, since parameters such as the station location and the sampling height are controllable. High traffic volumes and highway conditions were taken into consideration in the sampling, along with the other important conditions mentioned above. The Modares and Resalat highways were therefore selected for data measurement.
Geometric variables
Geometric variables at the stations were measured in three dimensions: elevation, longitudinal, and transverse. Data were measured at adjacent stations in different parts of the two highways. The geometric variables at these stations are very close together, so the main focus is on the relationship between pollution levels, traffic parameters, and meteorological conditions.
Meteorological variables
In this study, three variables were analyzed: wind speed, wind direction, and ambient temperature. This information was taken from the meteorological station on the Resalat highway. Meteorological variables were available for different hours and days of the week, and for each data-collection time the corresponding meteorological data were extracted.
Model development
In order to develop a model with good predictive and estimation ability, pairs of correlated variables were analyzed first. Based on this analysis, traffic volume and wind normal speed were identified as the most effective variables. Figures 1 and 2 show the correlation between these variables and the hourly concentration of carbon monoxide: the correlation between traffic volume and carbon monoxide concentration is +0.814, and that between wind normal speed and carbon monoxide concentration is -0.655.
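To make the model-building step concrete, the sketch below screens candidate predictors by Pearson correlation and then fits an ordinary least-squares model of CO concentration on traffic volume and wind normal speed. It is an illustration only: the observations are invented, and the paper's actual calibration was done in SPSS with different functional forms.

```python
import numpy as np

# Hypothetical hourly observations (illustration only, not the study's data)
traffic = np.array([3200.0, 4100.0, 5000.0, 5600.0, 6400.0, 7100.0])  # veh/h
w_sin_a = np.array([1.8, 1.5, 1.2, 1.0, 0.7, 0.4])   # wind normal speed, m/s
co      = np.array([4.2, 5.0, 6.1, 6.8, 8.0, 9.1])   # CO concentration, ppm

# Screening step: Pearson correlation of each candidate predictor with CO
print(np.corrcoef(traffic, co)[0, 1])  # strongly positive, cf. +0.814 (Fig. 1)
print(np.corrcoef(w_sin_a, co)[0, 1])  # negative, cf. -0.655 (Fig. 2)

# Ordinary least-squares fit: CO = b0 + b1*Traffic + b2*(W*sin(alpha))
X = np.column_stack([np.ones_like(traffic), traffic, w_sin_a])
(b0, b1, b2), *_ = np.linalg.lstsq(X, co, rcond=None)
print(f"CO ~ {b0:.3f} + {b1:.5f}*Traffic + {b2:.3f}*W.sin(alpha)")
```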
Calibration of the previous models showed good agreement for the SRI and Crompton & Gilbert models. At this stage, through the following analysis and using the SPSS software, four combined model forms were defined and calibrated.
Traffic: traffic flow rate (vehicles per hour); W·sinα: wind normal speed (m/s); Wt: total road width (m); Temp: temperature. To select the final model, the predictive ability of the models was checked. As can be seen from Tables 1 and 2, model No. 1 is suggested as the best model according to its prediction ability and computational error.
CONCLUSION
In order to determine a viable method for quantifying the contribution of traffic emissions to regional air quality, all methods were analyzed; according to the analyses, developing empirical models that apply the effective factors and field data has advantages over the other methods. The correlations between the variables and the hourly concentration of carbon monoxide show that the main parameters influencing the distribution of carbon monoxide concentration in city streets are traffic volume and wind normal speed, and the study shows that the calibrated SRI and Crompton & Gilbert models perform well under the existing traffic, geometric, and atmospheric conditions of Tehran. In addition, four integrated traffic emission models were developed, and good agreement was found between them and the field data; they can therefore accurately predict carbon monoxide concentrations under the traffic and atmospheric conditions of Tehran. Finally, the study shows that these models have good capability and can be used as a technique by traffic managers to reduce air pollution in polluted cities in Iran, by controlling the volume of traffic in line with environmental standards.
|
2017-10-27T09:14:41.398Z
|
2012-09-24T00:00:00.000
|
{
"year": 2012,
"sha1": "6bee149562e4e22dc41bb53d85ce0cd31eda1415",
"oa_license": "CCBY",
"oa_url": "http://www.cwejournal.org/pdf/vol7no1/CWEVO7NO1P01-06.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6bee149562e4e22dc41bb53d85ce0cd31eda1415",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
1597914
|
pes2o/s2orc
|
v3-fos-license
|
Treatment with Oxidized Phospholipids Directly Inhibits Nonalcoholic Steatohepatitis and Liver Fibrosis Without Affecting Steatosis
Background and Aims Previous studies demonstrated that toll-like receptors 4 and 2 (TLR-4 and TLR-2), which are expressed on liver-resident Kupffer, hepatic stellate cells, and circulating monocytes, play a role in nonalcoholic fatty liver disease. Lecinoxoids are oxidized phospholipids that antagonize TLR-2- and TLR-4-mediated activation of innate immune cells and inhibit monocyte migration. In this study, we tested the effect of two functionally different lecinoxoids on the development of nonalcoholic steatohepatitis and liver fibrosis in a mouse model. Methods Two-day-old C57BL/6 mice were injected with streptozotocin and fed a high-fat diet from Week 4 after birth. At Week 6 post-birth, lecinoxoids VB-201 or VB-703 were given orally, once daily, for 3 weeks. Telmisartan was administered orally, once daily, for 3 weeks, as positive control. At experiment conclusion, biochemical indices were evaluated. HE stain and quantitative PCR were used to determine the extent of steatosis and steatohepatitis, and Sirius red stain was used to assess liver fibrosis. Results Treatment with lecinoxoids did not alter the concentration of blood glucose, liver triglycerides, or steatosis compared with solvent-treated mice. However, whereas VB-201 inhibited the development of fibrosis and, to some extent, liver inflammation, VB-703 significantly lessened both liver inflammation and fibrosis. Conclusions This study indicates that using lecinoxoids to antagonize TLR-2, and more prominently TLR-4, is sufficient to significantly inhibit nonalcoholic steatohepatitis and liver fibrosis. Inhibiting monocyte migration with lecinoxoids that are relatively weak TLR-4 antagonists may alter liver fibrosis and to some extent nonalcoholic steatohepatitis.
Introduction
One of the major complications ensuing from insulin resistance and metabolic syndrome is nonalcoholic fatty liver disease (NAFLD), which can progress from fatty liver (steatosis) to fatty liver with inflammation (steatohepatitis) and liver fibrosis. While the majority of subjects do not progress beyond steatosis, it is not clear why others continue on to develop steatohepatitis. Supporting evidence suggests that modulations in gut microbiota can be central to the disease pathogenesis. In this context, it was shown that the integrity of the small intestine is impaired in patients with NAFLD and that the variety and proportion of gut flora in nonalcoholic steatohepatitis (NASH) patients are altered compared with healthy controls [1,2]. This microbiota dysbiosis can promote NASH by increasing the levels of methylamines and alcohol [3,4], compounds that induce liver inflammation and injury [5]. Alterations in the gut microbiome were also shown to promote NASH by affecting bile acid metabolism, resulting in impaired FXR-driven signaling and consequently advancing steatosis [6]. Finally, wild-type mice demonstrated increased disease severity in a methionine-choline-deficient diet-induced NASH model when cohoused with mice with altered gut microbiota [7].
Bacterial components from the residing gut microbiome, such as lipopolysaccharides (LPS) and peptidoglycans (PGN), can travel through the portal vein into the liver and encounter toll-like receptors (TLRs) [8]. TLRs are a family of receptors imperative for the innate immune response against microbial invasion. Two of the TLRs, TLR-2 and TLR-4, recognize the bacterial products PGN and LPS, respectively [9,10]. The interaction between these TLRs and their cognate agonists instigates a cascade of signals that includes downstream phosphorylation events, which culminate in the secretion of pro-inflammatory cytokines such as IL-6, IL-1β, and TNF-α. Several reports have demonstrated the correlation between TLR-2 and TLR-4 and NAFLD pathogenesis. In TLR-2 KO mice fed a choline-deficient, amino acid-defined diet or a high-fat diet, a lower rate of liver inflammation was observed [11,12]. This decrease was manifested by a reduced number of macrophages and reduced expression of pro-inflammatory cytokines in the liver compared with wild-type mice. In addition, TLR-2 KO mice also exhibited lower rates of liver fibrosis [12]. Mice bearing a nonfunctional mutated TLR-4 or targeted TLR-4 genes that were fed methionine-choline-deficient diets demonstrated reduced liver expression of TNF-α and TGF-β together with decreased liver content of collagen and α-SMA compared with control mice [13,14].
Liver-resident Kupffer cells and hepatic stellate cells (HSC), which express TLR-2 and TLR-4, are central to the development of steatohepatitis and fibrosis [15][16][17]. The activation of TLR-2 and TLR-4 on Kupffer cells induces the production of TNF-α, TGF-β, and IL-1β, which in turn augment HSC activation. Combined with the direct TLR-4-mediated activation of HSC, ligation of these TLRs promotes fibrogenesis [8,18]. The recruitment of circulating monocytes into the injured liver was also shown to be important for the development of liver fibrosis. Indeed, in the absence of circulating monocytes, such as in CCR2-deficient mice, fibrosis following acute liver injury was reduced, suggesting that monocyte-derived macrophages promote liver fibrosis [19].
We showed previously that a synthetic small-molecule analog, VB-201, which belongs to the lecinoxoid family (lecin for lecithin, i.e., phospholipid, and oxoid for oxidized), directly binds to CD14 and TLR-2 and consequently antagonizes TLR-2- and TLR-4-induced activation of monocytes, macrophages, and DCs [20]. Moreover, we demonstrated that VB-201 inhibits the migration of monocytes toward various chemokines. In vivo treatment with VB-201 impaired migration of monocytes into the aorta and decreased the size of the aortic plaque in an atherosclerosis mouse model [21]. In the current report, we investigated the effect of VB-201 treatment on the development of NASH and liver fibrosis in a mouse model. In addition, we used VB-703, a previously unreported lecinoxoid designed in silico for improved efficacy, which does not affect monocyte migration but exhibits stronger inhibition of TLR-4 than VB-201. The results demonstrate that lecinoxoids restrict liver inflammation and profoundly ameliorate liver fibrosis.
Materials and Methods
Animals C57BL/6 mice (15-day-pregnant females) were obtained from Japan SLC (Japan). All animals used in the study were housed and cared for in accordance with the Japanese Pharmacological Society Guidelines for Animal Use.
Western Blotting
Monocytes (10⁶/ml) were pretreated for 20 min with VB-201 or VB-703, followed by 15 min activation with the TLR-4 agonist LPS (100 ng/ml) (Sigma) or the TLR-2 agonist PGN (10 µg/ml) (InvivoGen). Cells were washed and resuspended in lysis buffer containing 1:100 dithiothreitol (DTT) and phosphatase and protease inhibitors (Thermo Scientific). Samples were loaded onto a precast Criterion TGX gel (Bio-Rad, Hemel Hempstead, UK) and transferred onto a nitrocellulose membrane. Blots were blocked with 5% milk or BSA in Tris-buffered saline with Tween 20 (TBST) for 1 h, followed by incubation with primary and secondary antibodies. Membranes were developed using an ECL kit (Thermo Scientific). The following antibodies were used for immunoblotting.
ELISA
To measure the effect of VB-201 and VB-703 on IL-12/23p40 and IL-6 production, cells were collected 5-6 days post-culture, counted, and seeded (10⁶/ml). Cells were pretreated for 1 h with VB-201 or VB-703, followed by 24 h activation with 100 ng/ml LPS to induce cytokine production. The IL-12/23p40 and IL-6 concentrations in the supernatant were measured by ELISA. Cells activated with solvent (0.5% ethanol in PBS) were used as control.
In Vitro Cell Migration Assay
Human monocytes were pre-incubated for 20 min with solvent (0.5% ethanol/PBS), VB-201, or VB-703, as indicated. RANTES (100 ng/ml) and MCP-1 (100 ng/ml) were dissolved in 0.5% FBS/RPMI-1640 and placed in the lower chamber of a QCM 24-well migration assay plate (Corning Costar; 5-µm pores). The migration assay was conducted by seeding 300,000 treated cells in the upper chamber, followed by incubation for 3 h. The number of cells that migrated into the medium in the lower compartment was determined by flow cytometry (BD FACSCalibur).
Induction of NASH and Treatment
In humans, diabetes mellitus is a risk factor for liver fibrosis, which in some cases may culminate in hepatocellular carcinoma; NASH is critical to the link between diabetes and liver fibrosis. The NASH model used in this study, which makes it possible to assess the histopathological events that lead from diabetes to liver fibrosis, was induced in C57BL/6 male mice (Japan SLC Inc) by a single subcutaneous injection of 200 µg streptozotocin solution (STZ, Sigma-Aldrich, USA) two days after birth and a high-fat diet (HFD, 57 kcal% fat, cat# HFD32, CLEA Japan, Japan) from 4 weeks of age. The injection of STZ induces inflammation in pancreatic islets that drives β-cell injury, leading to diabetic conditions. The subsequent high-fat diet promotes liver steatosis and the recruitment and activation of macrophages in the liver, similar to what is seen in human NASH. At Week 6, after steatosis was established, solvent (n = 8), VB-201 (n = 8), VB-703 (n = 8), or telmisartan (n = 8) was administered by oral gavage for three weeks. VB-201 and VB-703 were given at a dose of 4 mg/kg once daily. Telmisartan, which attenuates steatohepatitis progression, was administered at a dose of 10 mg/kg once daily and used as positive control. Normal mice (n = 5) were fed a normal diet without any treatment until 9 weeks of age.
Measurement of Whole Blood and Plasma Biochemistry
Nonfasting blood glucose was measured in whole blood using LIFE CHECK (EIDIA, Japan). For plasma biochemistry, blood was collected in polypropylene tubes containing anticoagulant (Novo-Heparin, Mochida Pharmaceutical, Japan) and centrifuged at 1,000×g for 15 min at 4°C. The supernatant was collected and stored at -80°C until use. Plasma alanine transaminase (ALT) levels were measured by FUJI DRI-CHEM 7000 (Fujifilm, Japan).
Measurement of Liver Triglyceride Content
Liver total lipid extracts were obtained by Folch's method [22]. Liver samples were homogenized in chloroform-methanol (2:1, v/v) and incubated overnight at room temperature. After washing with chloroform-methanol-water (8:4:3, v/v/v), the extracts were evaporated to dryness and dissolved in isopropanol. Liver triglyceride contents were measured by Triglyceride E-test (Wako Pure Chemical Industries, Japan).
Diagnosis and Scoring of Steatosis, NASH, and Liver Fibrosis
The expression level of inflammation mediators associated with steatohepatitis was used to determine NASH severity. To that end, RNA was prepared from livers using the RNeasy mini kit (Qiagen). For cDNA preparation, 2 µg of RNA was mixed with qScript reaction mix and qScript reverse transcriptase (Quanta BioSciences) for 5 min at 22°C and then for 30 min at 42°C. The reaction was completed by incubating for an additional 5 min at 85°C. All real-time PCR was performed using the 7300 Real-Time PCR System (Applied Biosystems). Q-PCR was performed with sets of probes and primers for mouse IL-1β, IL-6, IL-12/23p40, and MCP-1 (Applied Biosystems). GAPDH was used to normalize RNA levels. To assess steatosis and liver fibrosis, sections were cut from paraffin blocks of liver tissue prefixed in Bouin's solution (Wako Pure Chemical Industries). The steatosis score was calculated according to the criteria of Kleiner [23]. The coverage of collagen deposition in the liver was used as a marker to evaluate the extent of fibrosis. To visualize collagen deposition, Bouin's-fixed liver sections were stained using picro-Sirius red solution (Waldeck, Germany). For quantitative analysis, bright-field images of Sirius red-stained sections were captured around the central vein using a digital camera (DFC280; Leica, Germany) at 200-fold magnification, and the positive areas in five fields/section were measured using ImageJ software (National Institutes of Health, USA).
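The paper states only that GAPDH was used to normalize RNA levels; a common way to express such normalized Q-PCR data is the comparative 2^(−ΔΔCt) method. The sketch below is a minimal, hypothetical illustration of that calculation (the Ct values are invented, and the authors may have used a different quantification scheme):

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_control, ct_ref_control):
    """Relative expression by the comparative 2^(-ddCt) method: normalize
    the target gene to a reference gene (here GAPDH) within each group,
    then express the sample group relative to the control group."""
    delta_ct_sample = ct_target_sample - ct_ref_sample      # dCt, NASH liver
    delta_ct_control = ct_target_control - ct_ref_control   # dCt, normal liver
    return 2.0 ** -(delta_ct_sample - delta_ct_control)     # 2^(-ddCt)

# Hypothetical Ct values: IL-1beta vs. GAPDH in a NASH liver and a normal liver
print(fold_change(24.0, 18.0, 27.0, 18.5))  # ~5.7-fold higher in the NASH liver
```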
Statistical Analyses
Statistical analyses for the in vivo studies were performed using Bonferroni's multiple comparison test in GraphPad Prism 4 (GraphPad Software, USA). p values <0.05 were considered statistically significant. A trend or tendency was assumed when a one-tailed t test returned p values <0.05. Results are expressed as mean ± SD. Student's t test was performed for the in vitro studies. p values ≤0.05 were considered statistically significant.
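For reference, Bonferroni's correction controls the family-wise error rate by scaling each raw p-value by the number of comparisons. A minimal sketch of that adjustment follows (GraphPad Prism's exact implementation details may differ):

```python
def bonferroni(p_values):
    """Bonferroni adjustment: multiply each raw p-value by the number of
    comparisons m, capping at 1.0; compare the result to the 0.05 threshold."""
    m = len(p_values)
    return [min(1.0, p * m) for p in p_values]

# Hypothetical raw p-values from four treatment-vs-control comparisons
print(bonferroni([0.004, 0.020, 0.300, 0.700]))
# -> [0.016, 0.08, 1.0, 1.0]; only the first remains significant at 0.05
```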
Effect of Lecinoxoids on TLR-2 and TLR-4 Activation and on Chemokine-Induced Migration
We first compared the inhibitory effect of VB-201 on TLR-2- and TLR-4-mediated activation and on chemokine-induced migration of human monocytes with that of its derivative VB-703. The results demonstrate that VB-703 inhibits TLR-4-mediated signaling events and cytokine production with a profoundly higher degree of activity than VB-201 (Fig. 2a, b), but has an inhibitory effect on TLR-2-mediated phosphorylation similar to that of VB-201 (Fig. 2c). Moreover, VB-703 showed no activity against monocyte migration (Fig. 2d).
Effect of Lecinoxoids Treatment on Pathophysiological Characteristics of NASH-Induced Mice
Next, we assessed the effect of VB-201 and VB-703 on the development of steatosis, steatohepatitis and liver fibrosis in a NASH mouse model, using telmisartan as standard of care. Mean body weight of the solvent group at experiment conclusion was lower than that of the normal group. Mean body weight of the telmisartan group was significantly lower than that of the solvent group. No significant differences were observed in mean body weight between the solvent or the telmisartan group and the groups treated with VB-201 or VB-703. Nevertheless, the solvent group exhibited a significant increase in mean liver weight compared with the normal group. The telmisartan group showed a significant decrease in mean liver weight compared with the solvent group. There were no significant differences in mean liver weight between the solvent group and the groups treated with VB-201 or VB-703. Furthermore, we found that solvent-treated mice had elevated blood glucose and plasma ALT levels, and that treatment with lecinoxoids or telmisartan did not affect these parameters. Next, we sought to determine whether lecinoxoids can decrease liver fat content in NAFLD. The results show that solvent-treated mice had significantly increased liver triglyceride content compared with the normal group, and that while telmisartan treatment significantly decreases liver triglycerides, the administration of lecinoxoids had no apparent effect (Table 1). In accord with the latter result, telmisartan significantly reduced steatosis, whereas treatment with lecinoxoids had no effect on steatosis (Table 1).
VB-201 and VB-703 Effect on Steatohepatitis
We next examined the effect of each of the lecinoxoids on NASH, where NASH is defined as increased expression levels of inflammation mediators in the liver compared with normal mice. To this end, upon sacrifice, RNA was prepared from livers, and the differences between the tested groups in the expression of inflammation mediators associated with steatohepatitis were determined by quantitative PCR. Analysis of the inflammation mediators IL-1β, IL-6, MCP-1, and IL-12/23p40 in the livers of NASH-induced mice showed that VB-703 for the most part significantly inhibited the expression of pro-inflammatory mediators, whereas VB-201 significantly attenuated only the expression of IL-12/23p40 (Figs. 3a-d).
Lecinoxoids Restrict Liver Fibrosis
In order to evaluate whether treatment with lecinoxoids restricts liver fibrosis, liver samples were stained with Sirius red to assess collagen content, as a marker for the presence of fibrogenesis. The extent of fibrosis was determined by calculating the percent of positively stained area in five fields within a section. The results, presented in Fig. 4a, b, demonstrate that VB-703, more than VB-201 and even telmisartan, significantly restricted the development of liver fibrosis.
Discussion
NAFLD is a progressive disease that begins with a fatty liver and that, if left untreated, may advance to liver inflammation and in a few cases might culminate in liver fibrosis. The majority of drugs currently tested in clinical trials focus on targeting molecules that are involved in the regulation of lipid and glucose metabolism. These drugs include agonists of nuclear receptors, such as FXR and PPAR-α, that induce anti-steatotic responses in NASH patients [24]; a niacin analog that reduces triglyceride levels; and a glucagon-like peptide-1 receptor agonist that induces insulin secretion. In this study, we evaluated an alternative approach in which oxidized phospholipid small molecules are used to target pathways directly associated with liver inflammation and fibrosis, for treating NASH and the ensuing liver fibrosis. Two major stages have been suggested to promote NASH and liver scarring: the recruitment of circulating monocytes into the fatty liver, and the activation of these monocytes, together with resident Kupffer and stellate cells, through TLR-2 and TLR-4. By virtue of the biological function of the lecinoxoids, treatment of mice with VB-201 or VB-703 did not alter steatosis. However, when indices directly associated with inflammation and fibrosis were analyzed, differences in the efficacies of VB-201 and VB-703 were revealed. Functionally, VB-201 differs from VB-703 in its ability to also inhibit monocyte migration, but it is a weaker antagonist of TLR-4 than VB-703. Nevertheless, our results demonstrate that significant effects of lecinoxoids on liver inflammation could be attained by targeting TLR-2 and predominantly TLR-4, since VB-703 was superior to VB-201 in inhibiting all of the inflammation mediators tested. Although VB-201 showed a weaker influence than VB-703 on steatohepatitis, these results do not exclude a therapeutic role for VB-201 in fibrosis. Indeed, limiting monocyte migration to ameliorate liver fibrosis has already been shown with cenicriviroc mesylate, a CCR2 and CCR5 antagonist, which reduced fibrosis by 50-65% both in a rat thioacetamide model and in a mouse NASH model and is currently undergoing evaluation in a phase II clinical study in patients with liver fibrosis. Future efficacy studies, either in animal models or in clinical practice, should therefore include test compounds that are preferentially strong antagonists of TLR-4, or combination therapies that decrease monocyte migration and TLR-4 activation, to treat both steatohepatitis and liver fibrosis. In this study, we did not test the effect of VB-201 and VB-703 at the cellular level, in the sense that the hepatic cells involved in NASH and liver fibrosis, such as HSC and Kupffer cells, were not isolated and analyzed for their response in the presence of VB-201 and VB-703. Moreover, in this model, the extent of fibrosis reached in the control group was relatively low. Accordingly, in vitro studies are currently underway to test the effect of these and other lecinoxoids on pure resident liver macrophages and HSC, alongside in vivo experiments in other liver fibrosis models. To conclude, small-molecule oxidized phospholipids that strongly antagonize TLR-4 and inhibit monocyte migration should be further explored for their potential to treat subjects with NASH and liver fibrosis.
Compliance with ethical standards
Conflict of interest The authors are employees and stock option holders of VBL Therapeutics.
Open Access This article is distributed under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License (http://creativecommons.org/licenses/by-nc/4.0/), which permits any noncommercial use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Mannich base-connected syntheses mediated by ortho-quinone methides
This article provides an overview of modified Mannich reactions in which the process involves an ortho-quinone methide (o-QM) intermediate. The reactions are classified on the basis of the o-QM source followed by the reactant, e.g., the dienophile partner in cycloaddition reactions (C=C or C=N dienophiles), or by the formation of multicomponent Mannich adducts. Because of the important pharmacological activities associated with these reactive o-QM intermediates, special attention is paid to the biological activity of the resulting compounds.
Introduction
The Mannich reaction is an important, one-pot, multicomponent, C-C bond forming reaction that is widely used in the syntheses of many biologically active and natural compounds [1][2][3][4][5]. Originally, the Mannich product is formed through a three-component reaction of a C-H acid, formaldehyde and a secondary amine. Recently, one of its special variations, the so-called modified Mannich reaction, has gained ground, in which the C-H acid is replaced by electron-rich aromatic compounds such as 1- and 2-naphthols as active hydrogen sources [6]. At the beginning of the 20th century, Mario Betti reported the synthesis of 1-aminobenzyl-2-naphthol starting from ammonia, benzaldehyde and 2-naphthol. This protocol is known as the Betti reaction and the compound formed as the Betti base [7][8][9]. Several examples have been published that extend the reaction to the synthesis of variously substituted aminonaphthol derivatives [10]. Their relatively easy accessibility and promising biological properties have brought their chemistry back into the focus of pharmacological research.
The formation of aminonaphthols can be explained by two mechanisms. According to one possibility, the reaction of the amine and the aldehyde first yields a Schiff base, which then reacts with 2-naphthol in a second, nucleophilic addition step. The other theory assumes the formation of an ortho-quinone methide (o-QM) intermediate by the reaction of 2-naphthol and benzaldehyde. Re-aromatization, the driving force of the transformation, takes place in the second step by the nucleophilic addition of the amine component.
The class of o-QMs has recently been investigated from many perspectives. They are known as short-lived species playing an important role as key intermediates in numerous synthetic pathways. Reviews have recently been published about o-QM generation, applicability in organic syntheses and biological properties [11][12][13][14][15][16][17]. In this review, however, we would like to focus on their role in syntheses connected to Mannich base chemistry, as well as on their wider applicability and properties.
Formation of Mannich bases via o-QM intermediates
Synthesis of amidoalkylnaphthols
The preparation of amidoalkylnaphthols has recently been discussed from many points of view [18]. This indicates the importance of this reaction because 1-amidoalkyl-2-naphthols can be easily converted to important biologically active 1-aminoalkyl-2-naphthol derivatives by a simple amide hydrolysis.
The mechanism of the Mannich reaction is depicted in Scheme 1. First, the reaction between the aldehyde and 2-naphthol, induced by the catalyst, leads to the generation of the o-QM intermediate 3, which reacts further with the amide component to form the desired 1-amidoalkyl-2-naphthol derivatives. This second step can also be considered a nucleophilic addition of the amide to the o-QM component.
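To make the connectivity change concrete, the overall transformation of Scheme 1 can be sketched with a cheminformatics toolkit. The snippet below is a minimal illustration using RDKit, assuming benzaldehyde and benzamide as model components; it compresses the two mechanistic steps (o-QM formation, then amide addition) into one formal reaction and is not meant to reproduce any of the cited procedures.

```python
from rdkit import Chem
from rdkit.Chem import AllChem

# Hedged sketch, not the authors' procedure: the three-component
# condensation (2-naphthol + aldehyde + amide) is modeled as a single
# transformation, so the transient o-QM intermediate 3 is not
# represented explicitly.  The reaction SMARTS is illustrative only.
rxn = AllChem.ReactionFromSmarts(
    "[c:1]([OH:2])[cH:3]"        # naphthol with a free ortho CH
    ".[CX3H1:4]=[OX1]"           # aldehyde; the O is lost (as water)
    ".[NX3;H2:5]"                # primary amide N-H2 (e.g., benzamide)
    ">>[c:1]([OH:2])[cH0:3][CH1:4][NH1:5]"
)

naphthol     = Chem.MolFromSmiles("Oc1ccc2ccccc2c1")   # 2-naphthol
benzaldehyde = Chem.MolFromSmiles("O=Cc1ccccc1")
benzamide    = Chem.MolFromSmiles("NC(=O)c1ccccc1")

unique = set()
for (product,) in rxn.RunReactants((naphthol, benzaldehyde, benzamide)):
    Chem.SanitizeMol(product)
    unique.add(Chem.MolToSmiles(product))

# Both ring positions ortho to the OH are enumerated; experimentally,
# the 1-amidoalkyl-2-naphthol is the observed product.
print(unique)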
Various catalysts and conditions have been screened to optimize the reaction with respect to economic and environmental aspects. These include microwave-assisted reactions, solvent-free conditions and the reusability of the catalyst (Table 1). The procedures are carried out as one-pot multicomponent transformations without isolation of the intermediates formed. Therefore, with the application of nontoxic, readily available and inexpensive reagents, both time and energy are saved.
Recently, the applicability of nanocatalysts in these reactions has been of interest since nanocatalysts, in general, are stable and recyclable and exhibit higher activity than conventional catalysts. A few notable examples are worth mentioning here. Aluminatesulfonic acid nanoparticles (ASA NPs) proved to be efficient under neat conditions for the synthesis of 1-amidoalkyl-2-naphthols [19]. Zali and co-workers applied magnetic nanoparticle-supported sulfuric acid (MNPs-SO3H) [22]. Another catalyst, introduced as the first nanostructured molten salt [23], also proved effective: as shown in Table 1, entry 5, it gave remarkable results at room temperature in short reactions (5-30 minutes) in 90-96% yields. Comparing these results with those achieved by the application of tin dioxide nanoparticles (nano-SnO2, Table 1, entry 6), molten salt catalysis, affording higher yields in shorter reactions, is definitely more advantageous.
Safari et al. combined the benefits of magnetic nanoparticles and ionic liquids through the application of magnetic Fe3O4 nanoparticles functionalized with 1-methyl-3-(3-trimethoxysilylpropyl)-1H-imidazol-3-ium acetate (MNP-IL-OAc) as catalyst [28]. As shown in Table 1, entries 11 and 12, syntheses carried out by conventional heating at 100 °C required long reaction times, affording yields of 82-97%. Other catalytic systems [42], as well as wet cyanuric chloride (wet-TCT) [43], were also tested. These methods suffer from a number of drawbacks, such as strongly acidic media, high temperatures, and prolonged reactions. Furthermore, the yields are often not satisfactory.
To eliminate the disadvantages of the previous strategies, Samant et al. reported an ultrasound-promoted condensation catalysed by sulfamic acid [44]. As shown in Table 1, entries 39 and 40, both dichloroethane (DCE) and solvent-free conditions were tested. The catalyst worked at low temperature (28-30 °C) and the products were formed in short reaction times in up to 94% yields. Shinde et al. also published an iodine-catalysed process carried out at room temperature in DCE [45]. Although long reaction times were needed in the latter process, good yields could be achieved under mild conditions.
In additional publications [60], polyphosphate ester [61] and Amberlite IR-120 [62] were used as catalysts. These latest strategies provide efficient syntheses under mild conditions without the use of harsh chemicals. Furthermore, the application of microwave irradiation or sonication [66] is also preferred over conventional heating methods to accelerate the reactions.
Synthesis of aminoalkylphenols
The mechanism of the formation of phenolic Mannich bases is similar to that discussed above for the synthesis of amidoalkylnaphthols [64]. Later, the same research group published a similar reaction extended to various lactams and carried out in trifluoroacetic acid in water [65]. As reported in both papers, the Mannich bases 9a-t formed were isolated in good yields. Plausible reaction pathways were described and the energies of the transition states were calculated.
In one of the latest publications on this topic, Priya et al. disclosed the synthesis of a wide range of novel 2-[(benzo[d]thiazol-2-ylamino)(phenyl)methyl]phenols 10a-m [66]. In their study, 2-amino-1,3-benzothiazoles, various aldehydes and substituted phenols were reacted in the presence of ZnCl2 as catalyst.
The synthetic protocol was then extended to isolate benzo[a]xanthen-11-ones or chromeno[3,2-g]β-carboline-8,13-diones starting from 2-naphthol and 1H-β-carboline-1-one Mannich bases [71]. Although a high temperature was needed (reflux at 153 °C for 4 hours), the desired products were isolated in good (53-91%) yields. The authors reported better results with the use of polyheterocyclic starting compounds. This can be explained by a dearomatisation step taking place in the transformation of the phenolic Mannich bases, leading to the disappearance of the only aromatic ring. In a recent publication [72], the same research group elaborated a simple route to 1,2-dihydronaphtho[2,1-b]furans and 2,3-dihydrobenzofurans via base-induced deamination. They also reported the development of a simple, general route to 2,3-dihydrobenzofurans 21 starting from phenolic Mannich bases; the syntheses were also extended to 2-naphthol Mannich bases as starting compounds, affording C-2-substituted derivatives. Bray et al. reacted ortho-hydroxybenzylamines with 2,3-dihydrofuran and γ-methylene-γ-butyrolactone in DMF at 130 °C [73]. This method could successfully be applied in the synthesis of the spiroketal core of rubromycins 22 and 23.
One of the latest publications on this topic was published by Hayashi et al. in 2015 [74]. Starting from diarylmethylamines 24 and arylboroxines, they successfully developed a rhodium-catalyzed asymmetric arylation process leading to triarylmethanes 25. With the application of mild reaction conditions (40 °C, 15 h), high enantioselectivity (≥90% ee) was reached with good to excellent yields (Scheme 2).
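For orientation, the enantiomeric excess (ee) quoted here and below is the standard measure of enantioselectivity. For amounts $n_{\mathrm{major}}$ and $n_{\mathrm{minor}}$ of the two enantiomers it is defined as

$$\mathrm{ee} = \frac{n_{\mathrm{major}} - n_{\mathrm{minor}}}{n_{\mathrm{major}} + n_{\mathrm{minor}}} \times 100\%,$$

so 90% ee corresponds to a 95:5 ratio of enantiomers.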
Watt et al. achieved the regioselective condensation of bis(N,N-dimethylamino)methane with various hydroxyisoflavonoids to synthesize C-6- and C-8-substituted isoflavonoids 33 and 34 in a Mannich-type reaction [76]. These o-QM precursors, upon thermal elimination of dimethylamine, were then reacted with different cyclic dienophiles to give various inverse electron-demand Diels-Alder adducts 35-37. In the case of 36, the cis-fused ring system was found to be similar to the bioactive xyloketals isolated from fungi (Scheme 4).

o-QMs are also known to undergo oligomerization in the absence of dienophiles and nucleophiles via an oxo-Diels-Alder protocol (Table 4). During the syntheses of 1,4,9,10-anthradiquinones with potential antitumor activity, Kucklaender et al. isolated new spiro derivatives 38 [77]. These spirocyclic dimers formed in a Diels-Alder dimerization process upon heating the corresponding Mannich bases under reflux in dichloromethane for 2 hours. Similar dimerizations can also result from the thermal deamination of the starting compounds [78]. However, some publications describe this phenomenon as an advantageous reaction rather than the formation of unexpected side products, as mentioned above [71].
Reactions with C=N dienophiles
The preparation of novel o-QM-condensed polyheterocycles is a relatively new area of Mannich base chemistry. Our research group has also been interested in cycloaddition reactions of o-QMs generated from Mannich adducts 42, when a serendipitous reaction occurred: the formation of new naphthoxazino-isoquinoline derivatives 43 under neat conditions starting from 1-aminoalkyl-2-naphthols and 6,7-dimethoxy-3,4-dihydroisoquinoline was observed [79]. At the same time, Osyanin et al. reported the same reaction extended to various substituted aminonaphthols [80]. Performing the syntheses in ethanol at 78 °C, [4 + 2] cycloaddition took place between the o-QM generated from the corresponding aminonaphthol as diene component and cyclic imines playing the role of heterodienophiles (Scheme 5).
Fülöp and co-workers then extended their studies by applying both 2-aminoalkyl-1-naphthols and 1-aminoalkyl-2-naphthols [81]. These bifunctional compounds were reacted with various cyclic imines such as 4,5-dihydrobenzo[c]azepine or 6,7-dihydrothieno[3,2-c]pyridine to give the new naphthoxazinobenzazepine 44 and -thienopyridine 45 derivatives [82]. The transformations, at 80 °C in 1,4-dioxane as solvent, were performed in a microwave reactor to exploit the advantages of this method. As expected, the reaction times were shortened, while the products were isolated in higher yields in comparison with those obtained by conventional heating.
The application of (4aS,8aS)-hexahydroquinoxalin-2-one served as the first example of the use of an enantiomerically pure cyclic imine in this type of reaction [83]. The formation of the possible naphthoxazino-quinoxalinone diastereomers 46 was investigated and studied by theoretical calculations (Scheme 5).
In this and all previous cases, the conformational behaviour of the polyheterocycles formed was also described.
Reactions with electron rich aromatic compounds
The formation of aza-o-QMs is also possible if the initial phenolic Mannich base bears an aromatic moiety on its benzylic carbon atom. Rueping et al. recently performed reactions between aza-o-QMs, generated in situ from α-substituted ortho-amino benzyl alcohols 48, and substituted indoles, catalysed by N-triflylphosphoramides (NTPAs) [85] (Scheme 6). The process provided new C-2- and C-3-functionalized indole polyheterocycles 49 and 50 in good yields with 90-99% ee.
Miscellaneous reactions
It is also known that o-QMs can cross-link biologically important molecules such as peptides, proteins or nucleic bases, and they have been reacted with several N-, O- and S-nucleophiles [88]. Both the thermal and the photochemical generation of such intermediates were examined. By selecting the appropriate reaction conditions (various pH values and temperatures), the authors were able to alkylate free amino acids, e.g., glycine (Gly), L-serine (Ser), L-cysteine (Cys), L-lysine (Lys), L-tyrosine (Tyr) and glutathione (GSH), in aqueous solution to isolate 55 (Scheme 8).
Rokita et al. focused on generating o-QMs and used them as cross-linking and DNA-alkylating agents. Starting from Mannich base 56 and transforming it in a number of synthetic steps, they managed to elaborate a process that provides easy access to o-QM precursors containing a broad array of linkers 57, which were used for connection with site-directing ligands [89] (Scheme 9).
As reactive intermediates, o-QMs can also play the role of monomers in polymerization reactions. Ishida et al. reported the ring-opening polymerization of monofunctional alkyl-substituted aromatic amine-based benzoxazines [90]. It was shown that the methylene bridges can be formed by o-QMs that result from the cleavage of the phenolic Mannich bridge structure 56 (Scheme 9).
Biological properties
As discussed earlier, o-QMs are known as short-lived, highly reactive intermediates. Therefore, their biological activity is mostly examined from the point of view of their application as DNA-alkylating agents. One of the first examples was reported by Kearney et al. in 1996 in preformulation studies of the antitumor agent topotecan [91]. The antitumor activity of the compound could be explained by its degradation to a highly active zwitterionic species via an o-QM intermediate. Dimmock et al. subsequently examined the cytotoxic activity of phenolic azobenzene Mannich bases [92]. Correlations were found between structures and activities against murine P388D1 and L1210 cells and human T-lymphocyte cell lines and, in some cases, mutagenic properties were also shown.
Freccero et al. examined the photogeneration (by laser flash photolysis) and reactivity of naphthoquinone methides, as well as their activity as purine-selective DNA-alkylating agents [93]. Farrell et al. studied the mechanism of the cytotoxic action of naphthoquinone-platinum(II) complexes [94]. Both DNA-binding and topoisomerase I inhibition studies proved that the coordination and stabilization of the quinone methide structure can effect marked changes in DNA reactivity. In a recent publication, 3-(aminomethyl)naphthoquinones were investigated from the points of view of cytotoxicity, structure-activity relationships and electrochemical behaviour [95]. Derivatives that contain an aromatic amine and a salicylaldehyde or 2-pyridinecarboxaldehyde moiety were found to be the most active against the HL-60 (promyelocytic leukaemia) cell line. Zhou et al. obtained phenolic Mannich bases bearing functional groups suitable for cross-linking DNA; their antitumor effects could thereby also be confirmed [96].
The formation of o-QMs and their biological properties have also been illustrated by kinetic studies. Rokita et al., using laser flash photolysis, showed that the formation and reactivity of these intermediates strongly depend on the presence of electron-donating or electron-withdrawing functional groups in the o-QM precursors [97].
Conclusion
The high number of publications that have recently appeared on o-QM-mediated Mannich-type transformations is a clear indication that the application of this highly reactive intermediate has made the modified Mannich reaction a hot topic again in organic chemistry. This review presents a wide range of applications, including cycloadditions and the synthesis of bifunctional amino- or amidonaphthols that can later be incorporated as building blocks into several natural or biologically active compounds. Thanks to the immense number of possibilities offered by the Mannich reaction through the use of various amines, aldehydes and electron-rich aromatic compounds, the continued evolution of the literature on these reactions appears to be guaranteed. Through the application of various cyclic imines, subsequently extended to nonracemic derivatives, a wide range of enantiomerically pure polyheterocyclic compounds could be isolated and might be tested as potential anticancer drug candidates.
Overview of Policies, Guidelines, and Standards for Active Assisted Living Data Exchange: Thematic Analysis
Background A primary concern for governments and health care systems is the rapid growth of the aging population. To provide a better quality of life for the elderly, researchers have explored the use of wearables, sensors, actuators, and mobile health technologies. The term AAL can refer to active assisted living or ambient assisted living, with both sometimes used interchangeably. AAL technology describes systems designed to improve the quality of life, aid in independence, and create healthier lifestyles for those who need assistance at any stage of their lives. Objective The aim of this study was to understand the standards and policy guidelines that companies use in the creation of AAL technologies and to highlight the gap between available technologies, standards, and policies and what should be available for use. Methods A literature review was conducted to identify critical standards and frameworks related to AAL. Interviews with 15 different stakeholders across Canada were carried out to complement this review. The results from the interviews were coded using a thematic analysis and then presented in two workshops about standards, policies, and governance to identify future steps and opportunities regarding AAL. Results Our study showed that the base technology, standards, and policies necessary for the creation of AAL technology are not the primary problem causing the disparity between existing and accessible technologies; instead, nontechnical issues and the integration of existing technologies present the most significant challenges. A total of five themes have been identified for further analysis: (1) end user and purpose; (2) accessibility; (3) interoperability; (4) data sharing; and (5) privacy and security. Conclusions Interoperability is currently the biggest challenge for the future of data sharing related to AAL technology. Additionally, the majority of stakeholders consider privacy and security to be the main concerns related to data sharing in the AAL scope. Further research is necessary to explore each identified gap in detail.
Background
People are living longer than ever before. According to Statistics Canada's 2016 census, seniors outnumber children aged under 15 years for the first time [1]. The aging population is growing faster than the working-age population. This puts stress on population growth and entails a potential loss of economic productivity and output, while a significant portion of the gross domestic product is spent on health care and pensions [2]. Aging also brings specific challenges around declining health, an increase in chronic diseases, an increased need for daily care and monitoring, and the financial burden of expanding health care costs. As a result, health and independence are top priorities for seniors in Canada. Active assisted living (AAL) offers technology that may address some individual and governmental concerns. In Canada, AAL technologies have been explored as a combination of smart-home, telehealth, and assistive technology (AT) [3]. In contrast, Europe has the most advanced programs for AAL standardization, and many of its projects are developed by the AAL Joint National Program [4] to support its growing aging population. Moreover, while AAL has clear applications among an aging population, similar needs exist within vulnerable populations for timely health care, monitoring, and support.
To provide better living conditions and assistance in daily routines for older adults and vulnerable populations inside and outside of their homes, researchers and innovators have explored the use of different technologies such as wearables [5], camera-based sensors [6], and mobile health technologies [7]. These new technologies and products for daily assistance allow individuals to lead a more independent life [8].
Challenges
As mentioned by Rashidi and Mihailidis [9] in their article, the majority of older adults prefer to stay in their homes instead of transferring to a home care facility, and therefore it is essential to create and develop new technologies that support graceful aging in place. However, vulnerable populations have unique needs and limitations. Therefore, it is necessary to understand the differences and challenges in designing and developing technologies as well as collecting data for this specific user group. Standards, guidelines, and policies play an important role, ensuring better quality and safety of new technologies. In addition, standards enable knowledge sharing and act as a mechanism to make appropriately protected knowledge public and widely accessible [10].
The AAL technology landscape may seem relatively new and can sometimes be confused with the Internet of Things (IoT) as both involve data acquisition from the environment using wireless technologies [11]. IoT is the extension of the internet into physical technologies and everyday objects, which enables the creation of systems that operate over a network, collecting and exchanging data, and acting upon objects in our lives [12].
IoT devices can be used for any purpose other than health care, thus differing from AAL technologies that have the primary purpose of assisting users' quality of life. Although this new field already has many expanding technologies, AAL lacks a rigorous process of development aimed at specific users. In reviewing existing literature, some key studies on AAL technology, standards, and frameworks have been identified. The majority of the studies in this field focus on the technical specifications of AAL technologies, scenarios reviewing products, and the available tools and solutions [9,[13][14][15]. Other studies focus on existing projects and platforms within the AAL field [2,16]. One particular study highlighted the frameworks, platforms, standards, and quality attributes of AAL technology, and so it was used as the baseline of our literature review [17].
The initial literature review and analysis demonstrated the need for researchers to understand the existing technologies and their applications. There is insufficient information on the challenges encountered in creating new AAL technologies or about ways to connect AAL technology through the use of integrated data for the benefit of the user.
Objectives
This paper presents a review of standards, frameworks, policies, and guidelines in the creation of AAL technologies. In particular, the objective of this paper was to provide an overview of the primary standards and frameworks available as well as to highlight the gaps, challenges, and opportunities existing in the AAL area. As such, this paper did not intend to review currently existing technologies or indicate which ones to use or not. Similarly, the scope excluded any standards and frameworks targeted at medical devices even if the device is collecting personal information and is installed in a home environment to improve the quality of life. For this paper, AAL technology consists of all technology, devices, and wearables connected to the internet that enables data collection and exchange and is used for health care monitoring or to enhance the daily life of individuals.
Active Assisted Living Concepts and Terminology
AAL technology is a subset of a broader concept called ATs, which refers to "any item, piece of equipment, or product system, whether acquired commercially, modified, or customized, that is used to increase, maintain, or improve [the] functional capabilities of individuals with disabilities" [18]. In other words, ATs can be any tool or device capable of assisting a person to achieve something that would not be possible without the support of that technology [8]. Wheelchairs, walkers, electronic jar openers, screen readers, hearing aids, and educational software are all examples of ATs in everyday use.
Despite being a subset of an overarching concept, AAL is also an umbrella term that describes technologies designed to improve the quality of life, aid in independence, and create healthier lifestyles for those who need assistance at any stage of their lives. AAL involves concepts, products, and services that combine new technologies and the environment to improve the quality of life at all ages [19]. AAL uses information and communication technologies combined with the physical and social setting to provide easy-to-use devices either at home or to support lifestyles outside of the home environment [4]. According to the International Electrotechnical Commission (IEC) Systems Committee on AAL (SyC AAL), this technology supports systems for the elderly in industrialized countries by helping them live healthier lives [20]. The term AAL can refer to active assisted living or ambient assisted living, as both terms are used interchangeably throughout the literature. This research report follows the terminology defined by the SyC AAL committee, which defines AAL as active assisted living technology [20]. The active assisted living terminology was chosen because it goes beyond the ambient setting and also covers technology used outside the residence, such as a smart walker.
Technology could prove highly beneficial in providing a higher quality of life for an aging society or for anyone who needs additional help to perform their daily tasks and activities inside or outside their homes. To address this matter, the International Medical Informatics Association of the United States approved the creation of a new workgroup on smart homes and AAL in November of 2006 [21].
The AAL environment can integrate multisensors inside or outside of the home to gather data and monitor individuals in their homes [22]. The integration of sensors, embedded in homes, is also known as ambient intelligence (AmI) [23]. AmI applications should be transparent to users while meeting security and privacy requirements [24]. Many current devices, sensors, and health/wellness trackers are capable of collecting and sending information to a caregiver or physician for remote patient monitoring. However, the creation of new solutions for the aging population requires special attention, should not rely on the user's effort, and needs to consider the cognitive, perceptual, or physical limitations of users [9].
AAL technology devices can be either simple, such as presence sensors, or complex such as a smart wheelchair controlled via eye movements. AAL technologies are not subject to the same rigorous standards and evaluation protocols required for medical devices. Standards are widely used by companies in the planning, development, and production of these products and technologies. Without standards, possible interactions between products could be inconsistent, processes would not be defined and secure, and there would be security and safety-related risks [25]. For the Institute of Electrical and Electronics Engineers (IEEE), standards are "published documents that establish specifications and procedures designed to maximize the reliability of the materials, products, methods, and/or services people use every day" [26]. This project focused on identifying existing standards relevant for AAL technology and explored the existing gaps in terms of standards for supporting the development of AAL technologies, and to do that we interacted with different stakeholders in this space. In addition to standards, it is necessary to explore protocols used within the standards. In this case, protocols are a set of rules or procedures for the way information will be structured and transmitted for electronic devices to send and receive data [27].
Study Design
This project was planned in three phases. The first phase focused on conducting a literature review to understand what currently exists regarding standards and guidelines for AAL technologies. The second phase interviewed key industry stakeholders to develop a better understanding of the use of standards and data-sharing practices and of the challenges of AAL technology, in order to identify the existing gaps in the development of AAL technologies. The third phase aimed to validate the results of the literature review and interview phases through a workshop. The interview method was chosen because it allows more feedback points to be collected from a single individual [28]. As for the workshops, participants promoted group discussions and elaborated on each other's responses, influencing the direction of the workshop [29]. However, workshops have the disadvantage that dominant individuals may steer the discussion, whereas interviews allow participants to voice their opinions free of that dominance [30].
It is essential to mention that standards or guidelines related to medical devices and their safety were excluded from the scope of the project as there are existing regulations and standards in place to support the development of these devices.
Literature Review
To meet the objectives of this project, a narrative literature review was performed to understand the existing material regarding technical standards, frameworks, and platforms related to AAL technology. The databases used included Scopus, IEC, IEEE, and PubMed as the primary sources for academic references and standards references. The IEC and IEEE databases were selected for their collections of publications related to engineering and computing standards and guidelines. Scopus and PubMed were used to review publications regarding science, technology, health care systems, and medicine. The academic literature led to an in-depth evaluation of gray literature and websites from AAL governmental programs around the world. Furthermore, results from the academic literature informed the creation of the questions for the semistructured interview used in the next phase of the project. The technology-oriented standards covered in this research report were driven by the research conducted by Memon et al [17] in the article titled "Ambient Assisted Living Healthcare Frameworks, Platforms, Standards, and Quality Attributes;" a website from Postscapes [31] called "IoT Standards and Protocols;" and Salman's [32] paper titled "Networking Protocols and Standards for Internet of Things."
Interviews
In the second phase of this research project, we used a semistructured interview (see Multimedia Appendix 1) with 12 to 17 questions. The semistructured method uses a list of predetermined questions that guide the interview and may or may not be used according to the course of the conversation and previous answers. This method brings out how the interviewee interprets the topics and problems presented [28].
Over 50 stakeholders in AAL technology, from Canada, were invited to participate in interviews. Stakeholders were selected among four distinct categories: (1) health care providers such as physicians, nurses, and social workers; (2) academics and researchers who represent a large percentage of the stakeholders in the field; (3) industry representatives from well-established corporations to small start-ups working to innovate and find better ways to help people; and (4) health care administrators responsible for decision making in research or acquisition of new technologies. The stakeholder list was formed along with a project advisory panel that identified and suggested the names of experts from across Canada with some interest or involvement with AAL. The stakeholders were then divided into the suggested categories.
A round of invitations was sent to all stakeholders, along with an information letter and a description of the project objectives. A total of 15 invited participants agreed to be interviewed. A date and time were scheduled for each participant. The interviews were conducted over the phone by 2 researchers. Each phone call lasted approximately 60 min and was recorded. After a brief introduction of the project, the respondents were presented with approximately 17 questions (the questions could vary according to previous answers) on four distinct areas: "What is AAL?;" "Standards;" "Data Sharing;" and "Main Challenge." The interview results were coded using a thematic analysis because it is best suited to identifying topics within verbal or written material from semistructured interviews [33,34]. Furthermore, the data were analyzed using the 6-phase approach to thematic analysis proposed by Braun and Clarke [35]. We identified saturation in our themes rather than in the data.
Workshops
The literature review and interview phases generated a list of standards, platforms, and frameworks, as well as a list of topics. These results were then presented at a workshop organized to fulfill the predefined purpose of validating the results and leveraging new insights into and suggestions on the topics presented. A second round of emails was sent to more than 70 stakeholders from Canada, inviting them to participate in a face-to-face workshop with the possibility of online participation, and a total of 11 participants attended the workshop conducted in April 2018. The workshop was created in an unstructured manner where the project researchers acted as facilitators guiding the session. Participants were given the opportunity to present their ideas related to the topics presented and to challenge other participants' ideas. The expected outcome was to identify the collective understanding of the topics presented and thus build a common meaning and validate the results.
Future steps and opportunities related to AAL technology were identified as a result of conducting the workshop.
Standards
Considering a variety of possible information technology standards, the literature review showed that standards related to essential technologies, hardware, devices, application programming interfaces, and middleware are well-covered by the existing standards of leading institutes such as IEEE, International Organization for Standardization (ISO), and IEC.
On the basis of this information, the identified standards relevant to AAL technology were grouped into the four following categories: (1) design and terminology; (2) communication and transport; (3) privacy and security; and (4) data content. For this study, the design and terminology group covered standards for representing concepts using correct terminology and for processes related to design, modeling, and planning. Any standards responsible for ensuring that information is transmitted reliably, independent of the message sent, were classified as communication and transport. Privacy and security standards are responsible for setting administrative, physical, and technical actions to protect the confidentiality, availability, and integrity of the information. The data content group contains standards governing the transferred information and data format, which usually rely on existing communication protocols. The frequencies of the different groups are shown in Table 1, where the first column displays the standard group and the second column lists the number of standards or protocols identified for a given group as well as the percentage relative to the total. Figure 1 represents the primary standards and protocols investigated in this report and the existing associations between them. The figure is a radial chart, sectioned into four categories. The top-left area (solid yellow line with no fill) shows the standards responsible for data content; the top-right area (dotted blue line with no fill) shows standards used for design and terminology; the bottom-left area (solid grey line with pattern fill) shows the security and privacy standards; the bottom-right area (solid orange fill) shows standards related to communication and data transport. A link between standards can represent a dependency (in this case, one standard does not exist without the other) or an indication that one standard is based on another.
Framework and Platform
Beyond the standards and protocols, several frameworks and platforms were identified as relevant to the AAL context. These frameworks bring together multiple standards and guidelines, thus enabling products compatible with a given platform or framework to integrate with other products of the same family with ease. Some frameworks or platforms were created for health system purposes or specifically for AAL technology. For example, HealthKit provides a standardized framework for the storage and sharing of health information, allowing users to control their data access and integration [40].
Interviews and Workshops
The interviews highlighted that approximately 40% of the interviewees were well versed in the term AAL, while 27% had heard of the term but could not explain the meaning of the abbreviation and 33% did not know the terminology. After a brief explanation of the AAL concept, 60% of the interviewees already knew the idea even though they were not familiar with the specific terminology. The interview participants also confirmed that the terminology is considered a problem, the first issue being the acronym AAL, which can mean either ambient assisted living or active assisted living depending on the particular context or group. Another concern is the existing stigma attached to the term assisted, because it implies a need for assistance and support, which several technology users do not desire. These findings were confirmed by the workshop participants as well.
Questions related to standards showed that all the interviewees agree on the use and creation of specific standards and guidelines of AAL technology. Participants in academia pointed out that even though standards are not fully applicable in the research area, they are of paramount importance in the development of new technologies. In addition, standards-related responses showed concerns with end users, user safety, product accessibility, and the purpose of using the technology.
When questioned about data sharing, privacy and security were identified as the major challenges. The participants also expressed concern about proper interoperability between products to ensure the correct exchange of information. However, all participants agreed that they would share their data for research and to improve public health if adequate safety policies were implemented and data anonymity practiced. Another point raised related to data accessibility, especially in the context of the elderly and vulnerable populations.
Regarding the challenges of creating new AAL technologies, most participants understood that technology challenges or lack of technology is not the problem in the creation process. If the technology does not exist yet, it will probably be created. The problem lies in ensuring security, privacy, proper data sharing, product interoperability, and the correct use of technology. It is essential to understand the purpose of the product being developed to be able to select appropriate technology to ensure the greatest benefit to the end user.
The Five Challenges
After the round of interviews, the notes were analyzed together with a revision of the standards and frameworks, and five significant gaps were identified: (1) end user and purpose, (2) accessibility, (3) interoperability, (4) data sharing, and (5) privacy and security. Table 2 shows the number of mentions for each gap by the type of stakeholder. Home care administrators are primarily concerned with the benefits for the end user and whether the proposed technology does what it is intended to do, while the industry and health care providers place more value on privacy and security. Furthermore, academics mention data sharing as the biggest challenge. Figure 2 shows the frequency of each gap mentioned during the interviews and workshops. Privacy and security is the primary issue, with 30% of mentions, followed by data sharing with 29%, and end user and purpose with 23%. Accessibility is the least mentioned gap with 5%, and the second-lowest gap is interoperability with 13% of mentions. Although interoperability is near the bottom of the list, it is identified in the literature as the most significant technical challenge within AAL technologies.
End User and Purpose
AAL technologies are meant to assist users and keep them and their data safe. There is an overall perception that there is not enough consideration for the end user during product development. The end user should participate in all the phases of development, helping in the planning, designing, and testing of new technologies to ensure that their specific impairments, diseases, and disabilities are being accurately addressed. The technology should be adaptive to the context and satisfy the users' needs. Another concern raised by the participants is that a significant portion of the technology available in the market was not developed to solve a clinical problem. Instead, it was adapted from the original purpose (eg, home security) to solve an alternate problem.
Accessibility
The access to and use of technology, regardless of user ability, is an essential aspect of AAL technology. Accessibility is one of the themes identified by interview and workshop participants as differentiating average users from AAL users. The word accessibility has emerged as one of the top five significant gaps related to standards and guidelines within the scope of AAL technology. Creating products that are accessible to all is where the effort must be concentrated to ensure maximum benefits from technology. A lack of accessibility can lead to a decrease in device acceptability due to insufficient support for AAL users' needs.
Interoperability
The lack of interoperability between AAL technologies was one of the most predominant technical challenges in the reviewed literature and among the participants interviewed. Owing to the lack of regulation, when a manufacturer chooses a particular solution (eg, the ZigBee protocol), it enables integration with other devices that use the same protocol while disabling integration with devices that have opted for other alternatives. Many of the standards, protocols, and frameworks presented in this paper have the goal of solving this problem.
Data Sharing
Standardizing the data-sharing process between devices is one of the major challenges in the field of AAL technologies. Most devices cannot communicate with each other due to a lack of proper interoperability. Even when they do use the same technology, data exchange is not always feasible. As such, the challenges related to data sharing between AAL devices concern the availability, reliability, integrity, validity, and accuracy of the collected data; in particular, how to ensure that data collected on one device are transmitted to another device without loss of information or quality. The interviews and workshops revealed that it is necessary to work with the terminology of the data so that the information is meaningful, and to create unified terminology models across all manufacturers. In doing so, it is possible to transform data and present the results to the end user in a clear and understandable way, without the need for technical or specialized knowledge for data interpretation.
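As a concrete illustration of such a unified terminology model, the sketch below normalizes a vendor-specific reading into a shared, FHIR-like observation format before exchange. The device field name, the mapping table, and the output schema are hypothetical; only the LOINC code 8867-4 (heart rate) is a real code, used here purely as an example of shared terminology.

```python
import json

# Hypothetical mapping from vendor-specific field names to a shared
# terminology; a real system would maintain one such table per vendor.
VENDOR_FIELD_MAP = {
    "hr_bpm": {"code": "8867-4", "display": "Heart rate", "unit": "beats/min"},
}

def normalize_reading(vendor_field: str, value: float, timestamp: str) -> dict:
    """Translate one vendor-specific reading into the shared observation format."""
    concept = VENDOR_FIELD_MAP[vendor_field]
    return {
        "code": concept["code"],             # unified terminology code
        "display": concept["display"],
        "value": value,
        "unit": concept["unit"],
        "effectiveDateTime": timestamp,
    }

print(json.dumps(normalize_reading("hr_bpm", 72, "2020-06-01T08:30:00Z"), indent=2))
```

Because every manufacturer emits the same normalized structure, a receiving device or clinician-facing application can interpret the reading without vendor-specific knowledge, which is the interoperability property discussed above.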
Privacy and Security
Security concerns range from technical issues, such as whether devices are protected against viruses or hackers, to ensuring that devices are designed and developed with security in mind by choosing the best algorithms and encryption available. Devices that serve more than one purpose run a higher risk of having security and privacy requirements that are not correctly designed. Ensuring that the data collected are properly anonymized and, where appropriate, aggregated, so as not to pose a potential risk to the end user, is the most significant privacy concern. Most participants reported the need for a transparent process that clarifies how data flow across the internet or between devices, which data are being collected, and how the data are used. There should be a clear explanation of when and how data are shared and who has access to the information. Trust will only be achieved with proper end-user education, transparency, and accessible presentation of end-user policy contracts.
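A minimal sketch of the anonymization step described above is given below, assuming a simple record layout: direct identifiers are dropped, the device ID is replaced by a keyed pseudonym, and ages are coarsened into bands. The record fields, salt, and banding scheme are illustrative assumptions, not a recommendation; a real deployment would follow an applicable privacy standard rather than this toy example.

```python
import hashlib

# Hypothetical per-deployment secret; pseudonyms are only unlinkable to
# device IDs as long as this salt stays private.
SALT = b"replace-with-a-secret-per-deployment"

def pseudonymize(device_id: str) -> str:
    """Replace a stable device identifier with a keyed, irreversible pseudonym."""
    return hashlib.sha256(SALT + device_id.encode()).hexdigest()[:12]

def anonymize(record: dict) -> dict:
    """Strip direct identifiers and coarsen quasi-identifiers before sharing."""
    decade = (record["age"] // 10) * 10
    return {
        "device": pseudonymize(record["device_id"]),
        "age_band": f"{decade}-{decade + 9}",   # 83 -> "80-89"
        "reading": record["reading"],           # the health measurement itself
        # name and exact address are intentionally dropped
    }

raw = {"device_id": "sensor-0042", "name": "Jane Doe", "age": 83, "reading": 72}
print(anonymize(raw))
```

The design choice here mirrors the participants' concern: the shared record retains analytical value (reading plus coarse demographics) while individual users become substantially harder to re-identify.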
Overview
This study used literature reviews, interviews, and workshops to identify existing patterns, structures, and guidelines for the development of AAL technology, as well as to identify existing gaps in the area.
Our research has shown that the Canadian aging population could benefit from the innovation of AAL technology in the coming years. AAL technology can provide solutions to increase the security and independence of the population, as well as improve the quality of life, allowing seniors to age in their homes. Therefore, it is crucial to continue investing in projects and solutions to improve the development of AAL technologies.
Interestingly, the results showed that most of the challenges within the scope of this problem are not related to the availability of technology but to the way technology is applied to solve current problems.
Literature Review
Our findings from the literature review showed that most technical standards such as ZigBee, Z-Wave, Bluetooth, ISO/IEEE 11073, and others listed in the standards section of this report are already available or currently in development and applicable in the development of AAL technologies. Different organizations are working on arranging technical standards into specific frameworks. Examples are IEEE 2413 (IoT architecture), a unified approach to the development of IoT systems, and ISO/IEC JTC 1/SC 41 (IoT and related technologies), which includes sensor networks and wearable technologies.
The list of privacy-related standards for IoT technologies related to AAL technology was limited when the literature review was initiated in September 2017. Only 1 year later, more than 10 ISO privacy standards relevant to AAL were found under development. This evolution in standards addresses one of the concerns identified during the study-that technology evolves very quickly, and innovation is an ongoing process. Guidelines on which technologies and structures to use can very quickly become obsolete or subject to unnecessary bureaucratic intervention.
Although the literature review has resulted in a list of existing technical standards, or under development, for use in the scope of AAL technologies, nontechnological issues have the most gaps in terms of standardization. Issues related to the human interface, processes and methods, vocabulary and social and cultural norms require special attention. Currently, there is no common terminology; hence, finding common terminology is one of the most critical areas to be consolidated in the AAL domain. Similarly, it is necessary to identify the requirements for all possible use cases, the need for specific human processes and interfaces, and create appropriate standards for each scenario. Furthermore, user engagement is widely accepted as an essential concept in the development of new systems and technologies and should be extended to the development of AAL technologies.
The design process of new AAL technologies requires special attention due to the cognitive, perceptual, or physical limitations of the target users. Such technology should not depend solely on the user's effort and input but rather create automated solutions with minimal interaction. Yet, it is essential to take privacy considerations and concerns seriously and discuss these issues with users. Incorporating the end user in the early design stages is critical to increasing the acceptance of technology, and it is an essential part of avoiding unexpected user experience conflicts.
In this field, it is crucial to understand that the concept of end users is not limited to patients, older adults, and people with disabilities or specific health problems. Users also include therapists, health care providers, physicians, and family members who support the daily routines of vulnerable populations through the use of technology.
Interviews and Workshops
When talking to interviewees about standards and challenges in the AAL environment, the need to go beyond the technical aspects became clearer. Some challenges include creating goal-oriented and user-friendly solutions, understanding the user's needs, and choosing the right technologies to meet those requirements. AAL technology is directly related to the intended use of the device. Each specific use case may require different details, standards, design, security, access, and data sharing. The same device that is usually considered a consumer good may, in another scenario, serve as an assistive device for an elderly or vulnerable individual if it can improve their quality of life. Therefore, the definition of AAL technology becomes a challenge, as many consumer technologies could have AAL applications. For example, Google Home, a smart-home system, is not immediately identified as AAL technology, but it can be used by individuals with special needs to control light switches they cannot reach. In this case, existing technology is adapted to address a particular need. Adapting technology that is not designed for a specific use could put the end user at risk and compromise safety and trust.
In addition to the consideration for the end user, the purpose of the technology, accessibility, and privacy, interoperability remains one of the main challenges of AAL technology. The integration of products from different manufacturers through common standards will not happen without significant effort from governments and standardization agencies. Furthermore, the use of data collected at a population level for public health analysis and improvement of overall health has the potential to add value to the data currently being collected by multiple devices. Such analysis can aggregate individualized data and extract meaning. Innovators should focus on making raw data more understandable and relevant to users and clinicians by providing context for the collected data.
It is necessary to address the concern of whether the users, health care providers, family members, and technology itself are collecting and storing data correctly, securely, and with sufficient data quality for clinical use. The creation of guidelines to ensure data reliability, trust of the data source, and trust in the process of aggregation and analysis will be critical to enable the integration of AAL technology data into clinical practice. Another critical issue is individual literacy. The focus needs to be on educating the public about AAL technology and getting them to be aware of the benefits of existing solutions in the market. Educating target users, influencers, health care providers, and the local community to guide families to better understand AAL technology and its uses can be a viable solution.
In summary, ethics, user friendliness, user acceptance, economic benefit, legal challenges, and data privacy have to be considered to provide sustainable and well-accepted AAL solutions in Canada. Additionally, more interorganizational collaborations and user-focused studies are necessary to explore the benefits of AAL technology in Canada.
Conclusions
Although adopting a set of standards may not address all of the gaps identified in this paper, they are essential tools that can be combined with regulations, policies, and programs to promote change. Various opportunities have been identified in this report through an extensive literature review and stakeholder consultations through interviews and workshops. User friendliness, user acceptance, and data privacy have to be considered to provide sustainable and accepted AAL solutions in Canada. Interorganizational collaborations and user-focused studies are necessary to explore the benefits of AAL technology for Canadian citizens and to ensure that this technology makes a significant positive impact on our health care system. Further in-depth research is needed to explore the existing gaps in AAL technologies.
Religious Mahbär in Ethiopia: Ritual Elements, Dynamics, and Challenges
This article explores the religious association mahbär, also called tsïwwa, in Ethiopia. Data from lay practitioners as well as priests show that religious mahbär has many religious as well as social functions. It is a ritual with long traditions in the Ethiopian Orthodox Täwahïdo Church. The authors show that what characterizes mahbär as a ritual is its unusual richness, complexity, multifunctionality, and flexibility. By placing it within the Ethiopian religious context and present developments, the authors discuss why religious mahbär is in decline despite its multiple functions, flexibility, and support from the Ethiopian Orthodox Täwahïdo Church. In difficult economic times one would expect traditional rituals such as mahbär to become more important to people, and hence to be strengthened, but this does not seem to be the case here. In the authors' view, three factors are pushing this decline: economic challenges, time constraints, and member recruitment.
Introduction
In many parts of Ethiopia, especially among Ethiopian Orthodox Täwahïdo Christians, it is customary to have spiritual gatherings. The most important association organized by lay people is referred to as mahbär, also called tsïwwa.1 In a religious mahbär2 members honor the saints by gathering at a member's house on a saint's day each month, with a rotating host providing food and drinks for the guests (Pankhurst and Endreas 1958). Among the indigenous associations in Ethiopia, mahbär is the least studied. Several research works discuss mahbär together with associations such as iddir (burial associations) and equb (savings associations) (Alula and Damen 2007; Dessalegn 2008; Levine 1965; Korten with Korten 1972; Pankhurst and Endreas 1958). These studies sketch the main functions of the associations without showing in detail how they are practiced in people's everyday lives, or placing them within the Ethiopian religious context. Ancel (2005) provides a recent and rich source of linguistic, historical, and ethnographic knowledge about mahbär that compares the two religious associations mahbär and sänbäte. Ancel underlines its influence in both rural and urban areas: "To be a member of these associations is a sign of an important social status in the parish community and the reality of both mahbär and sänbäte shows the existence of a way of dialogue between the Church and the faithful" (Ancel 2005, 111). As a religious fraternal association mahbär is reputed to have existed in Ethiopia for hundreds of years (Encyclopaedia Aethiopica vol. 3 2007). As a religious brother- and sisterhood, mahbär has many similarities with sänbäte (Encyclopaedia Aethiopica vol. 4 2010; Ancel 2005). One main difference is that sänbäte is celebrated within the church compound and not in people's homes. Ancel's study is based on a survey in Debre Markos, but the article does not make it clear whether the respondents are both lay people and clergy, and whether both sexes are represented. Studies by Marcus (2001) and Pankhurst (1992) are among the few that include information about how women practice mahbär as part of their everyday lives. While they find that in rural areas mahbär are strictly gender segregated and single sexed, we find that it is not uncommon to find mixed-gender mahbär in urban areas. In his study Getachew (1998) analyzes mahbär in its own right by exploring the associations' potential for supporting rural-development initiatives. The present article builds on and supplements this existing work by focusing on gender and urban areas. Its particular contribution is analyzing religious mahbär as a ritual and discussing its place in current Ethiopian society.
The article's overall aim is to discuss why membership in religious mahbär seems to be declining despite its multiple functions and flexibility. We discuss this by taking as our starting point how mahbär is practiced in people's everyday lives. Furthermore, we situate it in the web of connections of which the ritual is a part. After locating mahbär within the religious context of Ethiopia and within the Ethiopian Orthodox Täwahïdo Church, we then describe the elements and dynamics of the ritual, and analyze the different factors that make mahbär important to Orthodox Christians as well as the factors pushing people away from it. We problematize and discuss how to understand the declining popularity of this particular indigenous association. As we see it, three factors are pushing this decline: economic challenges, time constraints, and member recruitment.
Framing Religious Mahbär
The Religious Context of Ethiopia

Ethiopian Orthodox Christians are known for their deep religiosity. Most people's daily lives are linked to church life and follow the rhythms of fasting, praying, and attending church, especially on the numerous festivals that rule the calendar (Molvaer 1980; Chaillot 2002; Messay 1999). Not all Orthodox Christians attend church on a regular basis, but they do celebrate the festivals, either in church or at home. Christianity, Islam, and animism have exerted much influence on the development of diverse cultural traits. Although both Christianity and Islam are widely followed, Orthodox Christianity was historically the 'state' religion. Following religious rules and rituals is considered appropriate and sometimes mandatory for Orthodox Christians in order to fit into the social system (Bahiru 2002).
If we look at the distribution of followers between the different religious groups as recorded in the national censuses from 1984 to the most recent one in 2007, we see a clear pattern of the Orthodox Täwahïdo Church losing terrain, particularly in relation to Protestants. A decline is clearly evident, from 54 percent Orthodox Christians in 1984 to 43.5 percent in 2007. While the share of Muslims remains roughly constant, changing from 32.9 percent in 1984 to 33.9 percent in 2007, the share of Protestants3 has increased from 5.48 percent in 1984 to 18.5 percent in 2007 (CSA 2011). These numbers indicate that the Orthodox Täwahïdo Church is under pressure, particularly from Protestants. Abbink (2003) identifies the more-competitive religious environment as a result of 'transnational' religious challenges. Externally supported missionary educational institutions as well as Islamic movements and groups financed from the outside will have consequences:

It will tend to make the EOC [Ethiopian Orthodox Church] lose much of its historical attitude of condoning . . . other forms of religious expression because it will be forced to much more assert itself. In general, local religious identities and expressions - especially in the context of contested ethno-federal nation-building in Ethiopia - will change in the light of such transnational connections. (Abbink 2003, 4)

The Orthodox Christians we talked to tended to be more skeptical toward Protestants than toward Muslims. They viewed Islam as a religion with a long history in Ethiopia, while Protestantism does not have such a history. Protestants are considered newcomers or mät'e, as the Ethiopians term it, implying that Protestantism comes from the outside and is not 'inherently' Ethiopian. Karbo (2013) assesses that Ethiopia faces a major challenge in managing the diversity of religion and ethnicity. Conflict among religions, between religions and the state, and within religions has intensified.
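To make the magnitude of these shifts concrete, the percentage-point changes implied by the census shares quoted above can be tabulated directly. The following short Python sketch does only that arithmetic; the figures are the ones cited from CSA (2011), and nothing else is assumed:

```python
# Percentage shares of the main religious groups in the 1984 and 2007
# national censuses, as quoted above (CSA 2011).
shares = {
    "Orthodox":   {1984: 54.0,  2007: 43.5},
    "Muslim":     {1984: 32.9,  2007: 33.9},
    "Protestant": {1984: 5.48,  2007: 18.5},
}

for group, by_year in shares.items():
    change = by_year[2007] - by_year[1984]
    print(f"{group}: {by_year[1984]}% (1984) -> {by_year[2007]}% (2007), "
          f"{change:+.1f} percentage points")
```

The Orthodox share thus fell by roughly 10.5 percentage points between the two censuses, while the Protestant share more than tripled, which is the pressure described above.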
The Ethiopian Orthodox Täwahïdo Church
The Ethiopian Orthodox Täwahïdo Church is the oldest national church in Africa and has a long history. Even though Ethiopia follows the main beliefs and rituals of Orthodox Christianity, it has a strong native flavor with significant Judaic influences (Pankhurst 1992; Ullendorff 1965). The church is also known by the name 'Täwahïdo' (oneness). For most of its history the Ethiopian Orthodox Täwahïdo Church has been isolated from mainstream Christianity in Europe. First, the Roman Catholic and Eastern Orthodox churches both rejected the Ethiopian Täwahïdo Church's theology at the Council of Chalcedon in 451 A.D. Second, it was separated from Europe politically and geographically by the Muslim conquests in North Africa in the seventh and eighth centuries. This resulted in the development of unique practices and rituals, and in a biblical canon that differs from that of both the Catholic and Protestant churches. When a church is threatened, a general understanding is that it tends to harden its patterns in an effort to maintain its identity, and seeks to emphasize its distinctive forms, rituals, and dogma. According to Berhanu (2000), this principle applies especially to the Ethiopian tradition.
Methodology
In exploring how people practice religious mahbär, we employed a qualitative research methodology. We conducted semistructured interviews with nineteen women and eleven men. All were mahbär members from Arada and Kirkos, which are subcities of Addis Ababa, or from Šašämäne in the Oromia region. These interviews were conducted mainly between 2008 and 2010. Some follow-up interviews were conducted in 2014. The interviewees varied in age from approximately eighteen to seventy-five years old, and had varied ethnicities: Amhara, Oromo, Gurage, Tigre, and mixed. Their socioeconomic backgrounds ranged from petty traders and daily laborers to housewives and government employees. We interviewed the leaders of the associations about the division of labor in the associations and how they are organized and managed. We approached the associations from two angles: initially via their individual members, and later at the level of the association itself through the leaders and the clergy. In addition to the interviews, we accompanied some of the interviewees to their mahbär to participate in meetings and gain in-depth information about how the rituals are performed and how the association supports its members. All the mahbär members interviewed were Orthodox Christians. We interviewed several religious leaders about the role of religious mahbär in the different religions and parts of Ethiopia, including two Orthodox priests, a secretary at an Orthodox Sunday school, two Protestant leaders, and two Muslim elders. We knew from a previous study (Flemmen 2008) that at least some Muslims and Protestants in Addis Ababa participated in iddir, so we wanted to explore whether mahbär was also common among Muslims and Protestants in Ethiopia. We found that the Muslims and Protestants4 did not practice mahbär, and so they were excluded from the analysis.
We purposefully selected the areas Arat Kilo and Qera in Addis Ababa for interviews because they are old, multiethnic areas with residents from varied social groups. We chose Šašämäne in order to get empirical data from the Oromia region and from an urban area outside the capital. The interviewees there were from several congregations. We believe that the selected areas jointly cover a variety of ways of practicing religious mahbär in urban areas of Ethiopia.
As for our authorial voices: as a team composed of one native Ethiopian and one Norwegian researcher, we had the benefit of insight into Ethiopian culture, language, and customs, as well as an outsider's view that prevented us from taking too much insider knowledge for granted.
Elements and Dynamics of the Ritual
Since there is little available research on mahbär and how it is practiced, we feel it is important to describe the setting of a meeting. Constructing an example based on our empirical data5 allows us to make the ritual's choreography explicit.

A bed of grass is spread on the living room floor. Wz. Almaz,6 the host, has been busy checking that everything is in order before the members of her Maryam7 Mahbär arrive. Food and drinks are on the table. At 1:00 p.m., nine members arrive one by one. All the members take off their shoes in respect for Saint Mary before entering the room. Saint Mary's picture is on the wall. The members salute the pot called the tsïwwa8 as they greet each other by kissing shoulder to shoulder before sitting down to chat. When the priest, Wz. Almaz's father confessor, arrives he is ceremonially greeted while each member bows and kisses the four corners of his Coptic cross. He invites them to sit down. They exchange greetings. Everybody rises, and the priest leads the prayer and cuts the small bread, the salnaq.9 The rest of the bread is distributed to the members by the leader, the Muse,10 and served with qolo (a snack made of roasted grain) and a small cup of t'älla (local home-brewed beer).
Facing the picture of Saint Mary, they bow their heads and pray for the best for their country, their children, and their health. All present wear their nät'äla (white, woven cotton shawl) as is customary in church, covering themselves from head to toe. After singing psalms about Maryam, each member rises, faces the other members, and says, 'Please forgive me if I offended you, knowingly or unknowingly'. The Muse calls the attention of the members by saying: 'I've been informed that two of our members had a disagreement. Drinking the same tsïwwa we should not have disagreement between us, so I ask you to forgive each other'. The two members stand up and hug each other while the priest blesses them.
The main meal is put on the table: a whole ïnǧära (a round, large pancake)11 on a tray. In this setting it is important that the ïnǧära is unbroken, to symbolize the unity of their mahbär, the Muse explains to us. The priest stands up and preaches the gospel. After eating the main meal, the members socialize while drinking the local beer, t'älla, and eating the roasted grain, qolo. Wz. Almaz goes outside to the compound to serve food, drinks, and snacks to waiting beggars. In return, the beggars give Wz. Almaz their blessing.
To mark the end of the meeting, the Muse brings in the large bread, an important part of the ceremony. The central part of the large bread is cut by the priest12 to fit into a container known as a mäsob (basket). The Muse asks 'Manäš Balä Samïnt?' (Who is next?) and calls forward the member hosting the next mahbär meeting. Wz. Aster comes forward, kneels down, and receives blessings from the priest for her willingness to host the next meeting. The mäsob is carefully handed over to her. The Muse cuts the remaining bread and distributes it to the members for them to take home. Three close friends of Wz. Aster are among the members who accompany her when she carries the mäsob, the picture of Maryam, and the tsïwwa to her home.13 Wz. Aster's neighbors and family members are waiting at her home to celebrate the arrival of the tsïwwa. They begin ululating (ïlïlta) when the group enters the house. Wz. Aster takes the bread out of the mäsob, cuts it, and distributes it together with the t'älla in the tsïwwa to the people present. Next month, when she is the host, she will rely on them to help her prepare the food and t'älla for the meeting.
This short description serves to illustrate the dynamics of the ritual. There is great variety in how elaborately the religious element is enacted in the different religious mahbär we have studied. The patron saint chosen determines the day of the monthly gatherings. In the Ethiopian Orthodox Täwahïdo Church, almost every day of the month is dedicated to a particular saint. Like the Eastern Orthodox Churches, the Ethiopian Orthodox Täwahïdo Church still uses the Julian calendar.
In attempting to break down the ritual into its elements, we have taken human action and interaction as the starting point. Following Grimes (2014), we identified the ritual's core elements by identifying actions, actors, places, times, objects, languages, and groups. In this context, the term 'groups' includes the collective considerations of the ritual, such as its hierarchy, economics, political dimensions, collective agency, and social distinctions (Grimes 2014).
Despite its variations, we see religious mahbär as containing the following core elements:

• the tsïwwa
• the mäsob
• commemoration (zïkïr) of a patron saint
• a muse
• membership
• lay people's homes
• praying
• sharing food and drink

Mahbär as a ritual is the sum of these core elements.14 They represent a stylized version that we have found useful in identifying the ritual's constituent parts and in seeing how the components interrelate. In the following discussion we elaborate on and discuss the more-internal as well as the collective considerations of the ritual.
Mahbär according to the Ethiopian Orthodox Täwahïdo Church
An article on tsïwwa in the magazine Hamer, published by the Ethiopian Orthodox Täwahïdo Church (1998), states: 'The first Christians used to come together and eat. They used to call this food the "food of love" or "Agape". Later the 5,000 Christians who lived together were not able to come together and eat because they had to disperse to different countries after the death of Stephanos' (our translation from Amharic). The article follows up with this quote from the New Testament:

(44) All the believers were together and had everything in common. (45) They sold property and possessions to give to anyone who had need. (46) Every day they continued to meet together in the temple courts. They broke bread in their homes and ate together with glad and sincere hearts, (47) praising God and enjoying the favor of all the people. And the Lord added to their number daily those who were being saved. (Acts 2:44-47, English quote from the New International Version)
Consequently, this article links the practice of religious mahbär to the practices described in the New Testament, where the sharing of food and drink is central, as is giving to the needy. The Historical Dictionary of Ethiopia (Prouty and Rosenfeld 1981) and Ancel (2005) claim that the association is inspired by the Last Supper.
According to the priests interviewed, the Ethiopian Orthodox Täwahïdo Church views mahbär as an ancient custom that symbolizes the first founders of the religion. When Jesus Christ was teaching the world about God, he first organized the twelve disciples. The twelve disciples, the 36 kïdusan anïst,15 and the 72 ardits16 are called 'The 120 family' (Interviewee 16). The 120 family is regarded as the first founders of the religion. Mahbär is formed in the name of religion, and each mahbär has a patron saint. Since mahbär is hosted according to the church's rules, people believe that it needs a protector. Consequently, in most mahbär meetings priests are present to pray and bless the food and drinks. In addition to these blessings, the priest also teaches the members about the saint they have chosen to celebrate in the mahbär, and cautions the members to refrain from evil deeds. In mahbär life, love, tolerance, and avoiding bad things are considered important (Gorgorios 1982).
From the church's point of view, mahbär have several functions. One priest (Interviewee 17) explained: (1) people come together and organize to help feed the needy in the name of a saint, regardless of ethnicity, reciprocity, and so on; (2) members help the church; (3) mahbär teaches members religion; and (4) mahbär helps members to be blessed and to develop a good character and tolerance. The priests link the religious functions with the social ones. One priest (Interviewee 16) explained that today, when the whole world is striving to eradicate poverty, he is happy to see that mahbär has been gaining recognition for its contribution toward this end. As an example, he states that just as the brothers in monasteries live together and help each other, the members of a mahbär help each other. For this reason the church attaches great value to mahbär: 'We protect it as one of our antiques. When people outside the church organize themselves in groups to help eradicate poverty, they follow a strategy that the church followed from the outset'. Another priest stated this view even more strongly: 'One of the manifestations of being Ethiopian is through religious mahbär' (Interviewee 17). This priest establishes mahbär as a symbol of Ethiopian culture, and of Ethiopian society itself. He seems to feel that to be Ethiopian is to help others, coupled with a strong devotion to the Orthodox Täwahïdo Church.
Priests view mahbär as important not only to the church but also to the community as a whole. It is a place where the community can discuss numerous issues. The priests use mahbär to inform and create awareness of health issues such as HIV/AIDS; some mahbär arrange trips to places of pilgrimage such as Lalibela and Axum. For this reason the church values mahbär as important and useful. The priests are happy to attend mahbär meetings to bless the food and drinks and to encourage further mahbär. If the mahbär members do not share a priest, the host's father confessor will do the blessing and teaching.
Mahbär is a place where the church spreads its message. Since the priest's main role is to serve the people, the church sends priests to the people. As one priest explains:

What makes the [Ethiopian Orthodox Täwahïdo] church special is that in this way it is present in every house. There is no Orthodox Christian without a religious father, and the religious father is responsible to the church. All the churches are reporting or answering to the Hagärä Sibkät.17 This structure is the reason why the Ethiopian Orthodox Church protects itself and our territory. (Interviewee 16)

The priests we talked to link the history and ancient customs of the Ethiopian Orthodox Täwahïdo Church to the needs of present-day society. In this way they make mahbär relevant by reproducing but simultaneously reinterpreting the practice. Organized around Orthodox Christianity, religious mahbär have an explicit religious and spiritual purpose. As we have seen, the associations are placed outside the church itself; however, they can create links to the church if they choose to, i.e., through the father confessor being present at the meetings, involving the church in making the bylaws for the mahbär, or by establishing a mahbär for the specific purpose of serving the church by providing labor or money for a particular cause. The priest is not a necessary part of the association but works as a kind of conduit to Christ through the holy water and the tsïwwa, the sacred, ritual pot that all members drink from to symbolize their unity. The priests enable and catalyze, but the association is not dependent on the presence of a priest. As Ancel underlines, mahbär and also sänbäte show 'the existence of a way of dialogue between the Church and the faithful' (Ancel 2005, 111).
Factors Ensuring the Popularity of Religious Mahbär
Why do lay people spend time practicing religious mahbär? Below we show how members describe the importance of the mahbär (the pull factors). Inspired by Bell's (1997) analytical categories, we categorized the most important functions described in our empirical material as: serving God, social networking, information exchange, conflict resolution and reconciliation, entertainment, and social insurance. We now take a closer look at each of these functions.
Serving God
Joining a mahbär can be experienced as serving God, as expressing gratitude and doing one's duty as a Christian. A particular incident may trigger people to join a mahbär or initiate one. One female interviewee explained that her mahbär responded to a call issued by the famous and controversial Ethiopian monk Bahitawi G/Meskel (Interviewee 1). He called for subae (meditation and prayer) at Mïdre Käbd, close to Abuye. The mahbär members traveled to Mïdre Käbd. They saw that the church was on the verge of collapse and decided to form a mahbär with the goal of renovating the church. They discussed how to raise money, and found a mahbär to be a way to organize and perform this responsibility. This story describes a mahbär engaging in one specific, practical task. Several interviewees told stories of children who recovered after long periods of serious illness because of prayers. Women and men described joining mahbär in gratitude. One man explained that when his son became ill, the family went to South Africa in search of a cure, but did not succeed. Instead, the man claimed, the child's life was saved by his mother, who made a sacrifice in God's name, and by the child drinking holy water from the Trinity Church. After this experience they joined a mahbär. This man became a member to demonstrate his gratitude after receiving what he perceived as God's help (Interviewee 32). Others believe that being a member will help them in the future. One member shared the personal drama that led him to attend a Maryam mahbär:

Saint Mary's history is very special. Her work is very special. I never thought I would be able to stay alive, get married, or have children. I used to drink very heavily. Even when we had collected sacks of sand on donkeys' backs for the church, I wanted people to give me two bottles of aräqe [a strong alcoholic drink] rather than food. So on her celebration day, one religious father prayed on the aräqe and gave it to me to drink. I drank it, and since then I have never drunk aräqe again. If I drank it now I would be sick. (Interviewee 7)

This man joined a Maryam mahbär to show his respect and gratitude to Saint Mary. He believed that all his wishes would be fulfilled through her since she was the Mother of Christ, and that he would obtain more favors by commemorating Saint Mary's day. Several of the interviewees shared stories about miracles they had witnessed that strengthened their religious belief. However, one member took a more psychological approach to this. He compared miracles to the placebo effect: 'Even medicine does not cure unless the patient believes in it. Eighty percent is psychological belief. For example, my father was unable to attend his mahbär for financial reasons and hence he was in a crisis. However, I noticed that once he rejoined the mahbär he was all right.' His father's psychological well-being depended on his connection to his mahbär. According to Bell (1997), ritual acts of offering, exchange, and communion tend to invoke complex relations of mutual interdependency, not only between people but also between humans and the divine.
Helping the needy and providing them with food is an important task for many, but not all, of the mahbär. This is considered a Christian virtue and duty. Several interviewees explained that the point of a mahbär is to fulfill what is written in Matthew 10:40, feeding the needy whether they are fellow members or outsiders who are less fortunate. A mahbär that fails to support the needy is not considered a mahbär at all, according to one interviewee (Interviewee 34). Consequently, an Orthodox Christian should be a member of a mahbär in order to fulfill his/her duties as a Christian. As we have seen, this is a view supported by priests in the Ethiopian Orthodox Täwahïdo Church. Participation in a mahbär was viewed as part of being a responsible community member. According to one participant, being a member of a mahbär where like-minded individuals come together establishes religious ties with other Christians.
In our religion he/she who respects saints will be respected. . . . We are like-minded people. There is no senseless talk. We study the gospel. Since all of the members have a good understanding of the mission of the mahbär, we help each other in times of difficulty. When we prepare food and drinks, the needy benefit as well.
Social Networking
Many of our informants stressed that membership in a mahbär is also membership in a social network. Most members appreciated the opportunity to socialize with the people of their respective mahbär. Members meet regularly during various social events. These include not only funerals but also weddings, christenings, and times of childbirth in members' families. Members are expected to support each other during good times and bad.
One interviewee described her relations with other mahbär members:

Our mahbär is love and peace. Even when we quarrel with our family or friends, it is here in the mahbär that we feel good. All the members share my problems. When I feel sad, they know it from my facial expression. The members who live abroad dream about me and ask what happened to me. When my sister and mother passed away, the members encouraged me. We are like a family. We have a sisterly and brotherly relationship.

Mahbär membership is often equated to being a family. In line with this analogy, members are also expected to stay together when a family member dies. Most members frequently visit the family of the deceased, particularly during the first three or four days. They provide financial or labor support when a family member dies, and regularly visit if a member or their family members are ill. One interviewee explained, 'Mahbär members are always with you when you are happy and sad. For instance, when there is sorrow, we spend twelve days together' (Interviewee 1).
Membership comes with a great deal of social obligation toward fellow members as well as their families. According to one interviewee, a female member supported a young man for five years since he was unable to find work (Interviewee 1), providing him with shelter and covering all his expenses until he received the opportunity to go abroad. This seems to be an extreme example of generosity, and is not the norm among the mahbär we contacted. Obligations of this scope represent support for members but can also present a challenge in times of hardship. Associations will do their utmost to retain members, since they attach great importance to staying together in order to uphold the social network. 'When a person wants to leave the mahbär, we will ask why. He/she may, for example, give the cost of living as a reason, or he/she may say, "I am getting old and cannot prepare the food and drinks any more". If these are the reasons, we do not oblige them' (Interviewee 4). They will try to find other ways to allow the members to stay.
All members of the association socialize, irrespective of their gender, but the female members in one mixed mahbär were especially close. They socialized and talked to each other to a greater extent, discussed personal issues outside mahbär meetings, and shared activities. The men, on the other hand, said they tended to go out and drink together, an activity the female members of the mahbär seldom took part in.
The women isolate themselves in a group, but the men do not have a group. . . . We love each other. Of course, when the women sit in a group they could have something to discuss because of their femininity. . . . They may do their own private things. We drink. After attending the mahbär we say to each other, 'Let us drink and be happy'. We drink draft beer. We contribute whatever we have. It could be 50 or 100 birr.20 After we drink, we say goodbye to each other.
Accepting new members is a democratic process in which members check the potential newcomer's background, have discussions, and make a joint decision:

People who want to register also come and tell us that they came to join us because they like us, our patron saint, etc. We do not decide on the same day. We check whether there is a member who is not on good terms with him/her. Finally, we would tell him/her our decision at the upcoming meeting. (Interviewee 4)

Mahbär promotes interdependence and mutual coexistence built on trust. As Messing (referred to in Korten and Korten 1972) points out, mahbär is a unique organization within Amhara society in offering this choice of membership.
In urban areas neighbors are often involved in preparations for mahbär meetings. Despite not being members of the association, they may provide help in preparing food and drinks. Involving the neighbors in this way strengthens relationships in the neighborhood and, as the following interviewee underlines, strengthens the social fabric.
The next host will take the bread that was cut to fit the mäsob [the salilit]. It is big. She will also take qolo and t'älla to her home. Then her neighbors welcome her by making coffee. She will distribute the bread [the salilit] to them. It is known as mmät'ubïš mätačč ['It is now my turn']. If her neighbors are served, they will help her while she prepares her mahbär. . . . It is said: 'Please drink mmät'ubïš'. It is not for the members. We, the members, leave the house and then the neighbors are served. . . . The invitation [to eat the salilit] helps keep them informed well in advance. It strengthens the social fabric. Our main goal, if it is God's will, is to see people well connected. . . . If she tells all her neighbors about it, even at the very beginning they will help her. In general, this helps strengthen people helping each other. (Interviewee 15)

In addition to establishing a social network with the other members, in this way mahbär also strengthens social relationships with the neighbors.
Information Exchange
While some members are very articulate and explicit about the different social functions of the mahbär, others are more concerned with their own practical and immediate use of it. Information exchange is one such use. At the gatherings people share a variety of information about the economy and the cost of living, where to get cheap products, and health and illness, or exchange advice and information about cures, healers, physicians, medicines, and so on. Such information can be of great importance in a society with a high degree of illiteracy. Political issues that cannot be easily discussed in public, such as the relationship between Christianity and Islam, between the Orthodox and Protestants, ethnicity, the government, and elections, are discussed. In addition to such general topics, members share what is happening in the local community or neighborhood: who has died, who is sick, who got married, or who gave birth. Finally, there are also discussions of personal issues and relationships, such as raising children, dealing with teenagers, handling spouses, or maneuvering in marriage conflicts. In this way mahbär constitute an important forum for information gathering, particularly for women. According to Elias (2008), voluntary associations in general are well suited to promoting the social lives of Ethiopian women, mainly because they are formed on the basis of mutual trust and reciprocity.
Conflict Resolution and Reconciliation
Ritual action facilitates social life. Ritual activities perform social work by forming and establishing the social bonds of human community (Bell 1997). These processes of socialization occur when people appropriate common values and categories of knowledge and experience. Channeling and resolving social conflict is not an unimportant part of these processes. Unsurprisingly, we find that reconciliation and conflict resolution are important functions of mahbär. Since most mahbär members live in the same neighborhood, they come into conflict on various occasions. Particularly in poor neighborhoods such as some we visited, the women are in close contact since they share kitchens and compounds; this can often cause disagreements and conflict. Members bring issues of conflict, mainly among mahbär members, to their monthly meetings to resolve them in that forum. Getachew points out that mahbär provide an effective means of conflict resolution: 'Compared with the government courts, resolving conflicts with the help of mahbär saves people's time and money' (Getachew 1998, 507-508).
Conflict resolution is institutionalized in mahbär. According to our observations, time is allotted for conflict resolution at mahbär meetings, and it follows specific procedures. The priest or the muse plays a significant role in settling problems between members. At the beginning of the meeting the priest or the muse is informed about any conflicts among members. Each member approaches the priest or the muse and informs them about specific conflicts. Before the members start to eat and discuss other matters, time is set aside to ask for the other members' forgiveness. At a Maryam mahbär meeting we observed, each member stood up in front of all the mahbär members and asked for their forgiveness. This includes mistakes that the person recognizes as well as those of which they have no recollection. Each person uttered the sentence, 'Awïqem bihon balämawoq qasqäyämkuaččhu bbämaryam sïm yïqïrta ïndïtadärgulïñ ït'äyïqalähu' (If I offended you, knowingly or unknowingly, please forgive me in the name of Saint Mary). This is both a ritual pardoning and a more specific and personal seeking of forgiveness. One member explained that forgiveness is a major principle in Christianity, and since members are like sisters and brothers they have to forgive each other.
According to one interviewee, harmony and peace are more prevalent among mahbär members than among iddir members. She said this is because mahbär is a religious association and members' fear of God helps them to forgive each other. She asked, 'How could mahbär members fight each other while drinking a tsïwwa, the symbol of unity?' She added that refraining from quarreling (nägär) is important. Gossip is forbidden, and members are strongly encouraged to discuss their problems openly. According to Korten and Korten (1972), rules against gossiping or speaking ill of members are common in mahbär. Korten and Korten interpret the suppression of conflict as a way of maintaining group solidarity in a context dominated by individualistic hostility. As one interviewee observed, 'Since iddir is all about mourning and helping each other, rumors are more intense in iddir than in mahbär. Since mahbär is about religious belief, and despite the fact that rumors exist in it too, they are not very strong' (Interviewee 15). We note that the interviewee refers to the 'fear of God' as the driving force for people forgiving each other.
In some instances the members solve or attempt to solve family problems during mahbär meetings. These are mainly problems between spouses, and they are discussed with a selected few of the members. In one of the mahbär we visited, one particular woman played an important role in helping the other women solve their marital problems. The conflicts that the women brought forward included physical violence. This woman had gained the other members' respect for her wisdom over the years. Some of the women even asked her to talk to their spouses.
Entertainment
Entertainment and social transgressions are also part of mahbär life. Most of the members we interviewed said they eagerly anticipate the monthly meetings because of the enjoyable time they spend with fellow members. One told us that she counts down the days before the meetings. For many members this is their only opportunity to have fun with friends and acquaintances. Drinking creates an atmosphere conducive to talking about light and entertaining issues.
Interviewees noted that the meetings provided an opportunity to relax and set aside serious life issues that worry them in their everyday lives.
Mahbär members may play tricks on each other for fun. For example, they tie their nät'äla (cotton shawls) together. The joke is that the member is stuck because of the tied nät'äla and so she cannot leave the meeting. Normally members like to sit next to those who crack jokes or make others laugh. These jokers are popular members. Some members are very good at cracking jokes, including jokes that are normally called 'dirty jokes'. One interviewee observed that the women are curious about sexual matters, and on some occasions women describe personal stories about extramarital affairs (Interviewee 15). She added that there were times when she had to stop some women from telling personal stories that might later damage their marriages and their reputation.
On some occasions the women drink too much, which can lead to conflict with their husbands. Women have experienced physical abuse because their husbands get angry when they get drunk and stay out late at night. For this reason, members try to remind each other to get home before dark. However, since the women enjoy the meetings so much, they often forget to leave early enough.
These aspects of a mahbär meeting are what Bell calls rituals of fasting, feasting, and festivals. In general, these kinds of rituals are larger-scale social events with religious content. They can extend and overlay the religious value system. By deconstructing the routine for a period, these rituals appear to recognize sources of power outside the system (Bell 1997, 136). As such, they legitimate and facilitate changes in the system. Rituals of rebellion are considered a subgroup of rituals of fasting, feasting, and festivals, and are common all over the world. As Gluckman points out (1955, in Gilhus and Mikaelsson 2001), these rituals are important even though they very seldom lead to real societal change. By turning hierarchies upside down, women can, for example, make male power visible; they can play out social tensions and conflicts within these settings. The rebellion is temporary and ritual. Like transgressions in carnivalesque rituals, they upset hierarchies in very framed and standardized ways. What seems different in religious mahbär is that this upset is less standardized and thus more open to negotiation. The limits of the ritual are more diffuse, and what is acceptable and unacceptable is therefore less clear. Every association negotiates its own terms, making it more open to the members' needs but also more challenging. Some parts of the ritual are more flexible and negotiable than others.

Social Insurance

Membership in mahbär can furthermore be understood as a kind of social insurance. We have identified three kinds of social insurance in our material: first, to protect the name; second, to keep the tradition alive; and third, to build reputation and prestige.
Protection of names motivates membership in mahbär. One mother explained that she took the place of her deceased daughter in a Maryam mahbär to keep the daughter's name alive. More often, the child will be the one that succeeds his/her mother; it is considered very important for the daughter or son to ensure that the mother is not forgotten by taking her place in the mahbär. The point is to keep her name alive or 'to let her name be called out' (sïm mast'ärrat). In this way the community members remember the mother and the child honors their mother. Raising a child that fulfills such duties reflects well on the mother's social reputation.
It is quite common for family members to take the place of members who have passed away. We understand this as a social obligation. One of the members we interviewed explained that he took his father's place when his father passed away, even though the interviewee was only six years old at the time. The members attended his father's arba21 and sämania.22 The father's association is a rural mahbär, and members often contribute labor rather than money. If a member dies in the summertime, each member will plow for one day in support of the bereaved family. If the death occurs in the winter, they participate in sowing his crop.23 Over the last 60 years the mahbär has celebrated his father's name. In his absence, the son appoints a person to keep the membership on his behalf.
For some members it is very important that the deceased's name be kept in the association in which they were members. This means that a son takes his father's place, a father takes his son's place, or a son takes his mother's place; this legacy membership has no restrictions when it comes to the member's gender. A man can replace a woman in a women's mahbär, and a woman can replace a man in a mixed mahbär. These practices can be interpreted as keeping the memory alive, but they may also be examples of rituals of affliction.
Being a member of several iddir and mahbär seems to be a way of building one's reputation and prestige (see also Ancel 2005; Korten and Korten 1972), as well as strengthening one's social position and securing one's good name. Membership provides social security and social prestige, and indicates that you are an active member of the community, a responsible person, and a good citizen. Even though iddir is more important to people, being a member of mahbär enhances one's credibility.24 People often ask about the number of associations one participates in. Some mahbär have a special history that makes them particularly impressive. One member explained that it was an honor for him to be a member of the same association in which Emperor Menelik II was a member 100 years ago:

It is a great honor for me to attend a mahbär that was attended by Emperor Menelik II. The other thing that makes it special is that although my mother gave us a new pot [a tsïwwa] three times after the original pot was cracked, we are still using the same pot that was used at the time of the emperor. Since it has been cracked it leaks water when it is washed, but surprisingly it does not leak the holy t'älla. As a result, we believe it has a great secret in it. (Interviewee 27)
Being a member of this particular mahbär is prestigious for the interviewee and consequently builds his reputation and prestige in the community.
We have now explored the factors that make this particular ritual attractive to its members and to the Ethiopian Orthodox Täwahïdo Church. To sum up, religious mahbär is a ritual that allows the members a great deal of freedom in its practice. According to the stories shared by the members, the most important religious functions seem to be serving God by commemorating the patron saint, drinking from the same pot (the tsïwwa) in order to strengthen the bond with fellow religious practitioners, and establishing relationships of sisterhood and brotherhood. The most important social functions identified were creating a social network, exchanging information, conflict resolution and reconciliation, entertainment, and finally social insurance. The ritual can be viewed as a cultural toolbox that laypeople have great flexibility in adjusting according to their needs.
Factors Contributing to the Declining Support for Religious Mahbär
In recent years changes have been observed in religious mahbär. We found that most mahbär have fewer members than in the past. As stated earlier, in our view three factors are pushing this decline: economic challenges, time constraints, and member recruitment.

Economic Challenges

For the most part, engagement in mahbär seems to be determined by the household's resources. A number of interviewees stated that nowadays it is difficult to uphold a mahbär membership. Many are worried about economic challenges: 'Things have changed. In the old times, there were people who used to attend more than one mahbär. I used to have two mahbär. Currently I have only one' (Interviewee 3). In one Maryam mahbär, only five of the 40 members remained. As we have seen, members can feel forced to leave their mahbär when they cannot afford to prepare the feast for the monthly meetings. According to the interviewees, preparing food and drinks requires a strong financial situation at home. At mahbär meetings neighbors and relatives are sometimes expected to enjoy the food in addition to the members, thereby increasing the number of invitees. For many poor people, hosting the meetings and preparing food for the members is expensive enough; adding nonmembers or even a priest to the meeting adds an extra economic burden.
Not all mahbär have priests who attend their meetings on a regular basis. There are many reasons for not having a priest present, but some members believe that they are expected to provide contributions or gifts for the priest or the church in order for a priest to attend. The members view these contributions as important to the church in that they allow it to arrange proper religious celebrations. The contributions can be money, candles, a gabi (a thick woven shawl of cotton), or an umbrella for ritual purposes.
The decision to quit a mahbär is very difficult. Interviews were conducted with members who were about to leave their mahbär. While some feared they would no longer receive religious favors, others were unhappy about losing their friends and social relationships. Had it not been for financial constraints, all would have continued their memberships. Leaving the mahbär was one of the most difficult decisions they had had to make in their lives. Some stated that they were afraid that the nation's economic problems would threaten the very existence of religious mahbär, and hence the Ethiopian way of life for Orthodox Christians. Others made suggestions as to what should be done to overcome the current challenges:

As for me, it would have been better if we went to the church and did what is appropriate. Though we prepare the food and the drinks, it is us who consume them. Neither Saint Mary nor Saint Michael will come to eat. Rather, it would be good if we prayed and helped the needy who are the poorest of the poor. I imagine this is what God would welcome very much. For instance, if we prepare t'älla and bread it would cost at least 300-400 birr. I would be happy if we omitted all those things and believed in the Arc of the Covenant (Tabot).25 I decided many times to leave the mahbär, but they begged me in the name of the tsïwwa to not quit. Nowadays the cost of living is very high, and I am the one who is affected. Had it not been for my sisters' request to not quit the mahbär, I would have helped the poor monks. (Interviewee 4)

This member is referring to the possibility of turning the mahbär into a sänbäte. Although in many ways a sänbäte is equivalent to a mahbär, as previously mentioned, the members of a sänbäte meet in the compound of the church rather than in members' homes. The social obligations are fewer in a sänbäte, since they normally have more members and often prepare less food.
One of the priests also addressed the economic challenges when he emphasized, 'The main objective of a mahbär is not eating and drinking together; it is the spiritual relationship'. He reminded us that the Holy Bible states, 'Commence your gifts from clean water' (Interviewee 17). This priest was concerned about the decline in mahbär, which he attributed to the amount of food being served and hence the expense. In our interviews we found that the priests shared the view that people should reduce the amount of money spent on preparing food and drinks for the gatherings in order to enable poor people to participate. Some churches even monitor the amount of food and drinks prepared to make sure that mahbär is not used as a means to show off wealth or social class. Preparing food and deciding which food to serve are topics of negotiation in many of the associations. The associations tend to prioritize ïnǧära, initially reducing the kinds of wät' (traditional Ethiopian sauce eaten with ïnǧära) they will serve. If that is not inclusive enough, members may decide not to serve ïnǧära at all. Some have decided to serve only qolo, bread, and t'älla, and to eliminate the ïnǧära and wät'. 'For the last six months, we prepare qolo and bread, if available. If not, we prepare whatever is available. Since things are getting expensive and people are competing, we decided to celebrate it only with qolo and bread, and if possible t'älla' (Interviewee 4). We can view the church's preoccupation with limiting the amount of food served at the meetings as its way of securing the future existence of religious mahbär.
Prices have increased significantly every year in recent years, making the cost of living, and hence the cost of preparing food for a mahbär meeting, very high. In response to these challenges, some mahbär have decided to arrange meetings every other month instead of monthly. Some members have vowed never to leave their mahbär despite these difficult economic circumstances. Quitting, or even reducing the kind of food prepared for the regular meetings, is considered unacceptable to many because, according to them, this signifies a weak commitment to one's religion.
Members do not discuss their financial hardship because they think that this might dishearten Saint Mary. Some say nothing is more important than celebrating her, even when they are very poor. Surprisingly, when they bring her into their home they get something good. As a result, people have a firm belief. (Interviewee 15)

Most members strongly believe that they have to continue despite the economic difficulties of preparing food for the meetings. This belief indicates the dilemma these members are facing and the difficulty of deciding whether to remain or quit. Others believe that these things will be taken care of: 'You know, even if you prepare only a handful, you will still have leftovers. Since you prepare it in the Saint's name it will always be enough' (Interviewee 5). In addition to providing food for the meetings, which the small fees members contribute do not cover, we have seen that the members have extensive social obligations toward members' families. They are expected to participate in weddings, christenings, and funerals, in addition to visiting the sick and new mothers after delivery. Since gifts must be provided, all these social obligations are both expensive and very time consuming. Reducing the amount of food or holding the meetings every other month does not solve the challenge of the extensive social obligations toward members' families. We suggest that in times of economic hardship, which include our entire research period, the Orthodox prioritize iddir. As is the case in many African countries (Tostensen et al. 2001), Ethiopia has a variety of indigenous associations that are diverse in purpose, membership size, and importance in the communities. The association that is the most ubiquitous is iddir, the main function of which is to provide financial and in-kind support to households when a family member dies. This assistance is usually in the form of finances to cover funeral costs. While iddir is a collective insurance for the family as well as the individual, mahbär is attended more often to strengthen people individually. Mahbär also tend to be smaller groups of approximately twelve members, while iddir may have from twelve to several hundred members (Encyclopaedia Aethiopica vol. 2, 2005). Compared to iddir, membership in religious mahbär seems to be more personal.
Time Constraints
Time constraint is a relatively recent challenge that may in time create changes in the associations' practices. People complained that they did not have time to socialize with members of their respective mahbär. 'Earlier, people used to help each other. Currently the youngsters do not have time' (Interviewee 3). It has become difficult to visit the sick and socialize with members because people need more time to earn an income for their livelihood.
If forced to prioritize, people will choose to uphold their iddir membership due to its important function in assisting with burials and its function as economic insurance for the whole family. The iddir will pay money to the family of the deceased from the fees that the members have paid over the years. The amount paid out depends on whether it is the main provider or another family member who has died. People may have several iddir as forms of economic insurance, and being forced to terminate one's iddir membership is considered devastating.
Recruiting New Members
A third challenge is recruiting new members when old ones die or are forced to terminate their membership. Urbanization processes provide neighborhoods with new residents on a regular basis, so how can it be difficult to recruit new members? Members express the challenge as one of finding someone to trust. Some attribute this to a change in people's values, in which they have become more selfish and less caring toward each other. The increased time spent on income-generating activities allows less time to get to know people in the neighborhood, and hence to build relationships of trust. Trust in other members is essential, since they trust each other not only with money but also with experiences, thoughts, and feelings. In addition, sections of the educated class and the youth consider membership in religious mahbär somewhat old-fashioned, and may prefer other ways of socializing and worshipping. In our view, this is one of the most significant reasons for the ritual's decline.
Contextualizing the Changes in Religious Mahbär
In our view, the modernization of Ethiopian working life, including changes in gender relations, is contributing to the decline of religious mahbär. We propose that, following their increased involvement in paid labor and the formal economy, women on the one hand have reduced flexibility regarding their time use, but on the other hand have more freedom to participate in social activities outside the neighborhood. While in earlier times religious mahbär may have been one of the few legitimate opportunities for women to socialize with other women and enjoy themselves, they now have other options, such as spending time with workmates and friends. In our opinion women are now in a better position to negotiate their time in relation to their husbands and other family members. Equal rights for women have been legally secured in Article 35 of the Ethiopian Constitution (from 1995) and in the new Family Law (revised in 2000). Awareness of these issues is increasing in Ethiopian society.
The flexibility of the mahbär allows some of its elements to be transported into different associations. However, the explicitly religious ritual elements seem to be pushed into the background or even lost. We see this in the increased popularity of different kinds of mahbär, such as friendship mahbär, family and relative mahbär, schoolmate mahbär, workplace mahbär, and so on. In these new versions of mahbär the complexity of the ritual seems to be reduced or to disappear, retaining only the socializing and network dimensions as well as the regularity of the meetings, preferably once a month. These new mahbär have more flexible rules, with fewer regulations and obligations than traditional ones. In this type of mahbär, members do not necessarily meet once a month; there is not necessarily a patron saint, even though they could have one, at least to signal which day of the month they are to meet; they may meet in a public place; and there are fewer associated social obligations. The main purpose of such mahbär is to facilitate people staying in touch. These associations can be an addition to, as well as a substitute for, religious mahbär.
The changes in the Ethiopian religious field, with the strengthened position of Islam globally as well as of other variants of Christianity such as Protestant, Pentecostal, and other evangelical Christian congregations, provide an important context for understanding the changes in religious practices among Orthodox Christians. Offering services on weekday evenings can be viewed as one answer to such challenges for the Orthodox Täwahïdo Church, as is preaching and spreading the gospel in the Amharic language rather than only in Ge'ez.26 Increased attendance at these services, and increased interest in religion in general, has been observed, including among youth.
Despite the changes, it is interesting to see that some of the social elements are preserved through new kinds of mahbär, while people find new ways of strengthening their religious engagement (i.e., through increased attendance at church services). Religious everyday practice seems to be moving from the private or semiprivate to the public sphere. This may also be a result of the increased religious competition in Ethiopia, and of the need for the Ethiopian Orthodox Täwahïdo Church to exert its presence in a new way.
Conclusion
Through empirical analysis we have explored religious mahbär as a ritual, its dynamics and main functions, and the challenges it faces in today's urban Ethiopian society. The tsïwwa as a ritual is perhaps most characterized by its complexity and flexibility. Since the ritual is practiced quite flexibly in the everyday lives of Ethiopian Orthodox believers, it can be viewed as a cultural toolbox that supports members in different ways, adapted to their specific needs. Thus the mahbär benefits both the Church and its members. Despite its multifunctionality and flexibility, membership in the association and the practice of the ritual are declining. We suggest that this is due to the increased pressure on people's finances and time, in addition to the difficulties of recruiting new members. What this means for mahbär as a ritual remains unclear. Changes in gender relations, modernization processes, and economic hardship are putting increased pressure on women's time. Since women are very important in upholding the ritual, its future seems uncertain. Furthermore, we have seen that people are establishing new kinds of social mahbär with fewer ritual and religious elements, and that the Orthodox Täwahïdo Church's practices seem to be leading to greater attendance at church services.
Notes
In more general terms, Mogues (2006, cited in Wassie and Butterfield 2007:1) states, 'In rural areas, the social networks of traditional support systems are used as coping strategies for problems related to drought, such as food shortages and the loss of assets through the death of domestic animals and crop failure'.
24. A study by Emebet (2008) in Addis Ababa found that 19.4 percent of the respondents were members of a mahbär, while 82.6 percent had at least one iddir. As a burial association, iddir is also very important among denominations other than the Orthodox.
25. Tabot is a replica of the Ark of the Covenant or the Tablets of Law, present in every Ethiopian Orthodox Church.
26. Ge'ez is an old Ethiopian Semitic language used in the liturgy of the Ethiopian Orthodox Täwahïdo Church.
27. The reference list is written according to Ethiopian academic practice: Ethiopian authors' first names are used for the alphabetical order, and non-Ethiopian authors' names are listed according to their surnames.
Asymptotically exact solution of a local copper-oxide model
We present an asymptotically exact solution of a local copper-oxide model abstracted from the multi-band models. The phase diagram is obtained through the renormalization-group analysis of the partition function. In the strong coupling regime, we find an exactly solved line, which crosses the quantum critical point of the mixed valence regime separating two different Fermi-liquid (FL) phases. At this critical point, a many-particle resonance is formed near the chemical potential, and a marginal-FL spectrum can be derived for the spin and charge susceptibilities.
hybridization. Then we apply the RG theory [7] to derive flow equations from which a phase diagram is found. There exist two different FL phases: a Kondo phase and a local free FL phase, separated by a mixed valence phase displaying local marginal Fermi liquid (MFL) behavior. Finally, from the partition function and the effective Hamiltonian, we take the strong coupling limit, and an exactly solved line (an analogue of the Toulouse limit in the Kondo problem) can be found, which crosses the quantum critical point of the mixed valence phase, where a many-particle resonance is formed between the localized impurity and the conduction electrons. Properties of the above three phases, especially the mixed valence phase controlled by the critical point, can be exactly derived.
The local copper-oxide model is specified by the Hamiltonian [2] $H = H_h + H_s$, whose conduction-electron part contains the terms $\sum_{k,\sigma,l}\epsilon_k C^{\dagger}_{k,\sigma,l}C_{k,\sigma,l} + \sum_{k,k',\sigma,l>0}\frac{V_l}{N} C^{\dagger}_{k,\sigma,l}C_{k',\sigma,l}\,(n_d - \cdots)$, where $n_d = \sum_{\sigma} n_{d,\sigma}$, $N$ is the number of lattice sites, and we have separated the Hamiltonian into hybridizing and screening parts and distinguished parallel-spin and opposite-spin XRE scatterings in the hybridizing channel. The localized impurity hybridizes only with channel $l=0$. The chemical potential $\mu$ is set to zero, and we are interested in the case when the local impurity level $\epsilon_d$ is close to zero. The spinless version of this model, i.e. the multi-channel resonant level model, has been solved exactly [8,9], displaying a FL vs non-FL transition as the interaction parameters are varied. In fact, $H_h$ is the usual Anderson model plus XRE potential scatterings, while $H_s$ is the multi-channel XRE Hamiltonian. It is more convenient to take an infinite $U$ limit as in the usual treatments of the Anderson model (the low-energy physics is kept). Thus, we add a local constraint for the local impurity, $n_d \le 1$.
First, we use abelian bosonization to handle the XRE singularities of the screening channels, which reduce to a one-dimensional problem with only one Fermi point for each channel; the dispersion is linearized with a cutoff $k_D$ [10]. Since the spin degrees of freedom of the screening-channel electrons are trivially involved, they can be separated from $H_s$. Moreover, we can assume $V_l = V_s$ for all $l>0$ without loss of generality, so the channel index can be dropped. Thus, all the screening channels are described by a single spinless channel [3]. The resulting form involves $\tilde{V}_s \equiv \sqrt{2N_s}\,V_s$, where $N_s$ is the number of the screening channels; $a^{\dagger}_k$ and $a_k$ are the bosonic operators describing the charge degrees of freedom of the screening channel, and $\rho = (h v_F)^{-1}$ is the density of states at the Fermi point. Employing the inverse bosonization, we can transform the bosons back to fermions. The Hamiltonian (2) can be diagonalized through a canonical transformation [8], where $\delta_s \approx \pi\rho\tilde{V}_s$ is the phase shift generated by the XRE scattering of all screening channels at the impurity.
For the hybridizing channel, employing another canonical transformation [8,11] of the form $S = \exp\{\cdots\}$ over $\{k>0,\sigma\}$, where $\delta_0 \approx \pi\rho V_0$ and $\delta'_0 \approx \pi\rho V'_0$ are the phase shifts induced by the parallel-spin ($\sigma$) and opposite-spin ($\bar\sigma$) XRE scattering of the hybridizing channel, respectively, we merge the XRE scatterings into the hybridization and obtain the effective Hamiltonian (4). Next, we derive the partition function, dividing $\tilde{H}_{\rm eff}$ as $\tilde{H}_{\rm eff} = \tilde{H}_0 + \tilde{H}_I$, with $\tilde{H}_0$ and $\tilde{H}_I$ the free and hybridization parts of (4), respectively. Paralleling the strategies of previous studies [6-8], we write the partition function in terms of a sum over histories of the impurity. Each history is a sequence of transitions between the three local $d$ states $|\alpha\rangle = |0\rangle$ and $|\sigma\rangle$, $\sigma = \uparrow,\downarrow$. The transitions take place at imaginary times $0 < \tau_1 < \dots < \tau_n < \beta = 1/T$: along the Feynman trajectory the local state is $|\sigma_{i+1}\rangle$ from $\tau_i$ to $\tau_{i+1}$ ($i = 1$ to $n$). The partition function of (4) is then given by expression (5), where the bare hybridization strength is defined as $\Gamma = \rho t^2$, while the cutoff factor is $\tau = \rho/k_D$. The effective "magnetic field" reflects the differences of the local state energies. The long-range logarithmic interaction between the flipping events arises from the reaction of the conduction-electron bath to the transitions between the local states. The local disturbance of the bath involves two factors: the absorption or emission of the local conduction electrons, and the change in the local potential that the conduction electrons experience [8]. Both kinds of disturbance are incorporated in the effective "charge" factor, i.e. the coefficient of the logarithmic function in (5).
The partition function (5) could be obtained directly from the model Hamiltonian (1) without bosonization, using the well-known fermion techniques [12] in a certain asymptotic limit which, we believe, is also valid here. This alternative derivation would allow us to rectify the phase shifts obtained by the bosonization treatment to the exact expressions $\delta_s = 2\tan^{-1}(\frac{\pi}{2}\rho\tilde{V}_s)$, $\delta_0 = 2\tan^{-1}(\frac{\pi}{2}\rho V_0)$, and $\delta'_0 = 2\tan^{-1}(\frac{\pi}{2}\rho V'_0)$, so that the corresponding effective Hamiltonian (4) might be used beyond the range of validity of the bosonization method, especially in the following strong coupling limit, where the renormalized parameters of the model recover their bare values.
To set up the RG flow equations, we can directly employ the scaling theory proposed by Anderson et al. [7] in the Coulomb gas representation. The RG equations describe the flow behavior as the bandwidth is reduced. They involve $\gamma \equiv -\frac{2\delta_0}{\pi} + \left(\frac{\delta_0}{\pi}\right)^2 + \left(\frac{\delta'_0}{\pi}\right)^2 + \left(\frac{\delta_s}{\pi}\right)^2$, describing the total interaction strength between the conduction electrons and the local impurity, which should be positive in our case. These equations were derived by assuming $\Gamma\tau \le 1$, i.e. a rare gas of spin flips. In the zeroth order, we can construct two invariants ($\Gamma^*\tau^* = 1$), where $\epsilon_d$, $\Gamma$, and $\tau_0$ are the initial (bare) parameters. In terms of these scaling invariants, the running resonance width is written as $\Gamma(\tau) = (\Gamma^*)^{1-\gamma}\tau^{-\gamma}$, with a corresponding expression for the impurity level $\epsilon_d(\tau)$; in the trivial limit $\gamma = 0$, the two expressions become $\Gamma(\tau) = \Gamma^*$ and $\epsilon_d(\tau) = \epsilon_d^* + \frac{\Gamma^*}{\pi}\ln(\Gamma^*\tau)$, which exactly recovers Haldane's RG results for the standard Anderson model [7]. Moreover, a complete phase diagram can be determined by comparing the invariants $\epsilon_d^*$ and $\Gamma^*$. In the $\Gamma^*$–$\gamma$ plane (Fig. 1), there are three phases corresponding to different impurity occupancies: the single-occupancy regime ($\epsilon_d^* \ll -\Gamma^*$), where a singlet state is formed and the Kondo effect shows up; the zero-occupancy regime ($\epsilon_d^* \gg \Gamma^*$), where the model corresponds to a local free FL phase; and the mixed valence regime ($|\epsilon_d^*| \le \Gamma^*$), where $\langle n_d\rangle$ fluctuates between 0 and 1. The mixed valence regime separates the $\langle n_d\rangle = 1$ and $\langle n_d\rangle = 0$ phases, with crossover lines $\Gamma^* \approx \frac{\Gamma}{1-\gamma\pi}$ and $\Gamma^* \approx \frac{\Gamma}{1+\gamma\pi}$, respectively. Although the parameter $\gamma$ is not renormalized in the zeroth order of $\Gamma\tau$, in first order it is renormalized to smaller values as $\tau$ increases. On the other hand, from $\Gamma^* \approx -\gamma\pi\epsilon_d^* + \Gamma$, as $\gamma$ decreases $\Gamma^*$ increases when $\epsilon_d^* > 0$, while it decreases when $\epsilon_d^* < 0$; hence the flow directions indicated in Fig. 1. In the end, the renormalized $\gamma$ tends to zero as $\epsilon_d^* \to 0$, $\Gamma^* \to \Gamma$.
In addition to this RG analysis, a strong coupling limit can be extracted independently from the partition function (5) and the effective Hamiltonian (4). As follows from (4), when the opposite-spin XRE scattering in the hybridizing channel renormalizes to zero while the XRE scatterings of the parallel-spin and the screening channels reach their respective unitary limits, i.e. $\delta_0 = \delta_s = \pi$ and $\delta'_0 = 0$, or $\gamma = 0$, the phase shifts due to hybridization and parallel-spin XRE scattering in the hybridizing channel compensate each other; thus the hybridizing electrons become completely free. In such a strong coupling limit, the effective Hamiltonian becomes (7), with the constraint $n_d \le 1$, or $n_d + s^{\dagger}_0 s_0 = 1$, which reflects the Friedel sum rule in this limit. It is obvious that the partition function derived from the Hamiltonian (7) is exactly the same as (5) for $\delta_0 = \delta_s = \pi$ and $\delta'_0 = 0$, i.e. $\gamma = 0$. In this sense, the strong coupling limit found here is somewhat analogous to the Toulouse limit of the Kondo problem [13], although the actual physics involved is quite different. The most essential difference is that the unitary limit has actually been reached in our case. The vanishing of the opposite-spin XRE scattering in the hybridizing channel is exactly what is required by the infinite $U$ limit, because any finite hybridization between opposite-spin hybridizing electrons and the local impurity (to compensate the XRE scattering) would contradict the single-occupancy constraint.
In Fig. 1, $\gamma = 0$ is the strong-coupling-limit line of this local copper-oxide model. In (7), only the charge degree of freedom of the local impurity, $\alpha$, is coupled to the conduction electrons, while the spin degree of freedom $\beta \equiv \frac{1}{2}(d_{\uparrow} - d_{\downarrow})$ is decoupled except for the constraint. Thus, in this limit, the Hamiltonian takes a form $\tilde{H}_T$ with $n_{\alpha} + n_{\beta} \le 1$, in which the hybridizing electrons do not show up explicitly. This Hamiltonian is essentially the same as Eq. (14) of [3]. Since $\tilde{H}_T$ conserves $n_{\beta}$, we can calculate physical quantities by taking the trace over its two subspaces $n_{\beta} = 0, 1$.
(i) When $\epsilon_d \gg \epsilon_{dc}$, a critical value to be defined later, the $n_{\alpha} = n_{\beta} = 0$ state is favored in the low-energy regime. All charge fluctuation processes are frozen out and $s^{\dagger}_0 s_0 = 1$ at the impurity site, so that the Friedel sum rule is saturated. The hybridization strength $\Gamma^*$ is renormalized to zero, and a local free FL behavior is thus displayed [14].
(ii) The opposite case $\epsilon_d \ll \epsilon_{dc}$ favors $n_{\alpha} + n_{\beta} = 1$ in the low-energy regime, and there are two possible configurations: $n_{\alpha} = 0, n_{\beta} = 1$ and $n_{\alpha} = 1, n_{\beta} = 0$. All charge fluctuation processes are also frozen out, but $s^{\dagger}_0 s_0 = 0$. The hybridization strength $\Gamma^*$ is renormalized to $+\infty$, and the system is scaled to Wilson's strong coupling fixed point of the Kondo problem: a local FL behavior in its unitarity limit [14]. The symmetry of the ground state in the present case is different from that at $\epsilon_d \gg \epsilon_{dc}$, so we anticipate a quantum critical point at $\epsilon_d = \epsilon_{dc}$.

(iii) When $\epsilon_d \to \epsilon_{dc}$, the localized impurity fluctuates between zero- and single-occupancy, $\langle n_{\alpha} + n_{\beta}\rangle \to \frac{1}{2}$, and hybridizes with only part of the screening electrons, $\langle s^{\dagger}_0 s_0\rangle \to \frac{1}{2}$, corresponding to the mixed valence phase. At the special point ($\gamma = 0$, $\Gamma^* = \Gamma$) along the strong coupling line, the above two different FL states are degenerate.
This special point is just the quantum critical point controlling the physics of the whole mixed valence phase. An analogous quantum critical point was found in the two-channel and two-impurity Kondo problems [15]. At zero temperature, we find $\epsilon_{dc} \approx -\frac{3\ln 2}{\pi}\Gamma$ and $\langle n_{\alpha}\rangle \approx \frac{1}{2}$, $\langle n_{\beta}\rangle \approx 0$. Thus, the local impurity level is close to the chemical potential. Using the phase-shift representation of the Friedel sum rule, $\langle n_d\rangle + \frac{1}{\pi}\delta_h(\mu) + \frac{1}{\pi}\delta_s(\mu) = 1$, we easily find that the phase shift of the screening electrons caused by the final hybridization is $\frac{\pi}{2}$ at the chemical potential. Since both hybridizing and screening electrons are involved and there is a constraint on the local impurity, the hybridization becomes a many-particle resonance, drawing some weight of the one-particle spectra from higher energies at the scale of the charge-transfer gap in the insulating state. Such a many-particle resonance breaks down the Landau correspondence between the low-lying excitations of the interacting and non-interacting fermions [3]. At finite low temperatures, the impurity charge and longitudinal spin susceptibilities $\chi_{\sigma,\rho}$ can also be calculated using the relations $\sigma_z = (\alpha^{\dagger}\beta + \beta^{\dagger}\alpha)$ and $\rho = (\alpha^{\dagger}\alpha + \beta^{\dagger}\beta - \frac{1}{2})$. It has been found that $\chi_{\sigma,\rho}$ are proportional to $\Gamma^{-1}\ln(\frac{\Gamma}{T})$, as expected from the MFL phenomenology [3]. Thus, the MFL behavior controls the whole mixed valence regime.
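As a consistency check on these numbers, the $\frac{\pi}{2}$ screening phase shift follows in one line from the Friedel sum rule quoted above; the restatement below simply combines the values given in the text, assuming, as stated for the strong coupling line, that the hybridizing electrons are free, so $\delta_h(\mu) = 0$:

```latex
% Friedel sum rule at the critical point, with \langle n_d \rangle \to 1/2
% and \delta_h(\mu) = 0 on the strong coupling line:
\langle n_d \rangle + \tfrac{1}{\pi}\,\delta_h(\mu) + \tfrac{1}{\pi}\,\delta_s(\mu) = 1
\quad\Longrightarrow\quad
\tfrac{1}{2} + 0 + \tfrac{1}{\pi}\,\delta_s(\mu) = 1
\quad\Longrightarrow\quad
\delta_s(\mu) = \tfrac{\pi}{2}.
```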
In conclusion, we have presented an asymptotically exact solution of the local copper-oxide model including the charge fluctuations of the screening electrons. The physical picture of the breakdown of FL behavior pointed out in Ref. [3] is basically correct. However, the justification of their physical arguments involves several unclear approximations. The crucial point is that in all the previous studies [2,3,6], the parallel-spin XRE scattering in the hybridizing channel was assumed to be zero, i.e. $V_0 = 0$. However, precisely this assumption obscures the physical features of the model and makes the problem much more involved.
Due to the emergence of new relevant variables in their theory, they cannot, in principle, reach the strong-coupling limit. Since our solution is based on both the RG flow analysis and the strong coupling effective Hamiltonian, the MFL behavior should be a universal property of the mixed valence phase. Of course, whether a specific system is at the quantum critical point depends on a special combination of parameters. A more interesting question, whether the chemical potential of a real mixed valence system is pinned at the critical point, requires further studies.
Post-quantum trapezoid type inequalities
In this study, the assumption of differentiability of the convex function f in the (p, q)-Hermite-Hadamard inequality is removed. A new identity for the right-hand part of the (p, q)-Hermite-Hadamard inequality is proved. By using the established identity, some (p, q)-trapezoid integral inequalities for convex and quasi-convex functions are obtained. The results presented in this work extend some results from earlier research.
Introduction
Quantum calculus, or briefly q-calculus, is the study of calculus without limits. Post-quantum or (p, q)-calculus is a generalization of q-calculus and is the next step beyond q-calculus. Quantum calculus is considered a subject connecting mathematics and physics, and many researchers have a particular interest in it. Quantum calculus has many applications in various mathematical fields such as orthogonal polynomials, combinatorics, hypergeometric functions, number theory, and the theory of differential equations. Many scholars researching in the field of inequalities have taken an interest in quantum calculus in recent years, and interested readers are referred to the articles [2, 3, 7-9, 11, 12, 16, 17, 21, 24, 25, 27-29] and the references cited therein for more information on this topic. In all of the papers mentioned above, the authors explore various integral inequalities by using q-calculus and (p, q)-calculus for certain classes of convex functions.
In this paper, the main motivation is to study trapezoid type (p, q)-integral inequalities for convex and quasi-convex functions. In fact, we prove that the assumption of differentiability of the mapping in the (p, q)-Hermite-Hadamard type integral inequalities given in [12] can be eliminated. The relaxation of this differentiability assumption also indicates the originality of the results established in our research, and these findings are related to results proved in earlier works.
Preliminaries
The basic concepts and findings which will be used in order to prove our results are addressed in this section.
Let $I \subset \mathbb{R}$ be an interval of the set of real numbers. A function $f : I \to \mathbb{R}$ is called convex on $I$ if the inequality $$f(tx + (1-t)y) \le t f(x) + (1-t) f(y)$$ holds for every $x, y \in I$ and $t \in [0,1]$. A function $f : I \to \mathbb{R}$ is known as quasi-convex if the inequality $$f(tx + (1-t)y) \le \max\{f(x), f(y)\}$$ holds for every $x, y \in I$ and $t \in [0,1]$.
The following properties of convex functions are very useful to obtain our results.
Definition 2.1. [19] A function $f$ defined on $I$ has a support at $x_0 \in I$ if there exists an affine function $A(x) = f(x_0) + m(x - x_0)$ such that $A(x) \le f(x)$ for all $x \in I$. The graph of the support function $A$ is called a line of support for $f$ at $x_0$.

Perhaps the most famous integral inequalities for convex functions are the Hermite-Hadamard inequalities, expressed as follows: $$f\left(\frac{a+b}{2}\right) \le \frac{1}{b-a}\int_a^b f(x)\,dx \le \frac{f(a)+f(b)}{2},$$ where the function $f : I \to \mathbb{R}$ is convex and $a, b \in I$ with $a < b$. By using the following identity, Pearce and Pečarić proved trapezoid type inequalities related to convex functions in [18] and [6]. Some trapezoid type inequalities related to quasi-convex functions are proved in [1] and [9].

Lemma 2.3. [6] Let $f : I^{\circ} \subset \mathbb{R} \to \mathbb{R}$ be a differentiable mapping on $I^{\circ}$ ($I^{\circ}$ is the interior of $I$), $a, b \in I^{\circ}$ with $a < b$. If $f' \in L[a,b]$, then the following equality holds: $$\frac{f(a)+f(b)}{2} - \frac{1}{b-a}\int_a^b f(x)\,dx = \frac{b-a}{2}\int_0^1 (1-2t)\, f'(ta + (1-t)b)\,dt.$$

Some definitions and results for (p, q)-differentiation and (p, q)-integration of a function $f : [a,b] \to \mathbb{R}$ are given in the papers [12,22,23]. For $t \ne a$, the $a$-based (p, q)-derivative of $f$ is $${}_aD_{p,q}f(t) = \frac{f(pt + (1-p)a) - f(qt + (1-q)a)}{(p-q)(t-a)}, \qquad (2.3)$$ which for $p = 1$ reduces to the q-derivative of the function $f$ defined on $[a,b]$ (see [16,21,25,26]).
Remark 2.1. If one takes $a = 0$ in (2.3), then ${}_0D_{p,q}f(t) = D_{p,q}f(t)$, where $D_{p,q}f(t)$ is the (p, q)-derivative of $f$ at $t \in [0,b]$ (see [5,10,20]), defined by the expression $$D_{p,q}f(t) = \frac{f(pt) - f(qt)}{(p-q)t}, \qquad t \ne 0.$$ Taking in addition $p = 1$ gives the q-derivative and the associated Jackson q-integral [15], i.e. the definite q-integral of the function $f$ defined on $[a,b]$ (see [16,21,25,26]). The definite (p, q)-integral on $[a,b]$, denoted (2.7), is defined in [20,22,23]; we notice that for $a = 0$ and $p = 1$ in (2.7), it is the definite q-integral of $f$ over the interval $[0,b]$ (see [15]). Quantum trapezoid type inequalities were obtained by Noor et al. [16] and Sudsutad [21] by applying the definitions of convex and quasi-convex functions to the absolute values of the q-derivative over a finite interval of the set of real numbers.
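For $a = 0$ the (p, q)-derivative and the (p, q)-integral admit a direct numerical implementation. The following sketch assumes the standard definitions $D_{p,q}f(t) = \frac{f(pt)-f(qt)}{(p-q)t}$ and $\int_0^b f(t)\,d_{p,q}t = (p-q)\,b\sum_{n=0}^{\infty}\frac{q^n}{p^{n+1}} f\big(\frac{q^n}{p^{n+1}}b\big)$; the function names and the test function are illustrative choices, not taken from the paper.

```python
# A minimal numerical sketch of the (p,q)-derivative and the (p,q)-integral
# on [0, b], under the standard definitions assumed in the lead-in.

def pq_derivative(f, t, p, q):
    """(p,q)-derivative D_{p,q}f(t) = (f(pt) - f(qt)) / ((p - q) t), t != 0."""
    return (f(p * t) - f(q * t)) / ((p - q) * t)

def pq_integral(f, b, p, q, terms=200):
    """Definite (p,q)-integral of f over [0, b]:
    (p - q) * b * sum_{n>=0} (q^n / p^(n+1)) * f((q^n / p^(n+1)) * b)."""
    total = 0.0
    for n in range(terms):
        x = (q ** n) / (p ** (n + 1))
        total += x * f(x * b)
    return (p - q) * b * total

if __name__ == "__main__":
    p, q, b = 0.9, 0.5, 2.0
    f = lambda t: t ** 2
    # Closed forms: D_{p,q} t^2 = (p + q) t, and
    # int_0^b t^2 d_{p,q}t = b^3 / (p^2 + p q + q^2).
    print(pq_derivative(f, 1.5, p, q), (p + q) * 1.5)
    print(pq_integral(f, b, p, q), b ** 3 / (p ** 2 + p * q + q ** 2))
```

For $f(t) = t^2$ both checks agree with the closed forms $D_{p,q}t^2 = (p+q)t$ and $\int_0^b t^2\,d_{p,q}t = b^3/(p^2+pq+q^2)$, since $q < p$ makes the series converge geometrically.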
Lemma 2.4. Let $f : [a,b] \subset \mathbb{R} \to \mathbb{R}$ be a continuous function and $0 < q < 1$. If ${}_aD_qf$ is a q-integrable function on $(a,b)$, then the following equality holds. The (p, q)-Hermite-Hadamard type inequalities were proved in [12].
In this paper, we remove the (p, q)-differentiability assumption on the function $f$ in Theorem 2.5 and establish (p, q)-analogs of Lemma 2.4 and Lemma 2.3. We obtain (p, q)-analogs of the trapezoid type integral inequalities by applying the established identity, which generalize the inequalities given in [1,6,9,16,18,21].
Main results
Throughout this section let $I \subset \mathbb{R}$ be an interval, $a, b \in I^{\circ}$ ($I^{\circ}$ is the interior of $I$) with $a < b$ (in other words $[a,b] \subset I^{\circ}$), and let $0 < q < p \le 1$ be constants. Let us start by proving the inequalities (2.13) under lighter conditions on the function $f$.

Theorem 3.1. Let $f : I \to \mathbb{R}$ be a convex function on $I$ and $a, b \in I^{\circ}$ with $a < b$; then the following inequalities hold: $$f\left(\frac{qa+pb}{p+q}\right) \le \frac{1}{p(b-a)}\int_a^{pb+(1-p)a} f(t)\,{}_ad_{p,q}t \le \frac{qf(a)+pf(b)}{p+q}. \qquad (3.1)$$

Proof. Since $f$ is a convex function on the interval $I$, by Theorem 2.2 $f$ is continuous on $I^{\circ}$ and has a line of support at each interior point. In the proof of Theorem 2.5 the authors used the tangent line at the point $x_0 = \frac{qa+pb}{p+q}$. Similarly, using the support inequality (3.2) and a method analogous to the proof of Theorem 2.5, we obtain (3.1); we omit the details. Thus the proof is accomplished.
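These bounds can be sanity-checked numerically. The sketch below takes $a = 0$, reuses the series form of the (p, q)-integral from the earlier sketch, and evaluates the three members of the inequality as stated above for the convex function $f(t) = t^2$; it is a plausibility check under those assumed definitions, not part of the proof.

```python
# A rough numerical check of the (p,q)-Hermite-Hadamard inequality for the
# convex function f(t) = t^2 on [0, b], with a = 0 so that the upper
# integration limit pb + (1 - p)a reduces to p*b.

def pq_integral(f, b, p, q, terms=200):
    # (p,q)-integral of f over [0, b], same definition as in the earlier sketch
    return (p - q) * b * sum(
        (q ** n / p ** (n + 1)) * f((q ** n / p ** (n + 1)) * b)
        for n in range(terms)
    )

p, q, b = 0.9, 0.5, 1.0
f = lambda t: t ** 2
left = f((q * 0 + p * b) / (p + q))             # f((qa + pb)/(p + q)), a = 0
middle = pq_integral(f, p * b, p, q) / (p * b)  # (1/(p(b-a))) int_0^{pb} f d_{p,q}t
right = (q * f(0) + p * f(b)) / (p + q)
assert left <= middle <= right
print(left, middle, right)  # roughly 0.413 <= 0.536 <= 0.643
```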
We will use the following identity to prove trapezoid type (p, q)-integral inequalities for convex and quasi-convex functions.
Proof. Since $f$ is continuous on $I^{\circ}$ and $a, b \in I^{\circ}$, the function $f$ is continuous on $[a,b]$. Hence $t_0$ is clearly well defined and exists. Since the relevant function is continuous on $[0,1]$, from Definition 2.3 it
is well defined and exists. By using (2.7) and (3.4), we obtain sums of terms of the form $$\frac{q^n}{p^n}\, f\left(\frac{q^n}{p^n}\, b + \left(1 - \frac{q^n}{p^n}\right) a\right),$$ and collecting these terms completes the proof. We can now prove some quantum estimates of (p, q)-trapezoidal integral inequalities by using the convexity and quasi-convexity of the absolute values of the (p, q)-derivatives.
Proof. Taking absolute values on both sides of (3.3), applying the power-mean inequality, and using the convexity of $|{}_aD_{p,q}f|^r$ for $r \ge 1$, we obtain (3.6). We evaluate the definite (p, q)-integrals as in (3.7), (3.8), and (3.9). Making use of (3.7), (3.8), and (3.9) in (3.6) gives the desired result (3.5). The proof is thus accomplished.
Theorem 3.6. Let $f : I^{\circ} \subset \mathbb{R} \to \mathbb{R}$ be a continuous function on $I^{\circ}$ and $a, b \in I^{\circ}$ with $a < b$. Suppose that ${}_aD_{p,q}f$ is continuous on $[a,b]$, where $0 < q < p \le 1$, and that $|{}_aD_{p,q}f|^r$ is a quasi-convex function on $[a,b]$ for $r \ge 1$.

Proof. Taking absolute values on both sides of (3.3), applying the power-mean inequality, and using the quasi-convexity of $|{}_aD_{p,q}f|^r$ on $[a,b]$ for $r \ge 1$, we obtain the claimed bound. (1) If we let $p = 1$, the q-analog follows. (2) If we take $p = 1$ and let $q \to 1^-$, the classical trapezoid inequality is recovered, where $\gamma_3(p,q;s)$ is as defined in Theorem 3.4 and $\frac{1}{r} + \frac{1}{s} = 1$.

Proof. Taking absolute values on both sides of (3.3), applying the Hölder inequality, and using the quasi-convexity of $|{}_aD_{p,q}f|^r$ on $[a,b]$ for $r > 1$, we obtain the claimed bound. (1) If $p = 1$, we obtain the inequality proved in [9, Theorem 2].

The authors would like to thank the referee for his/her careful reading of the manuscript and for making valuable suggestions.
Estimation of main carcass components by using bootstrapping regression method
Bootstrap resampling methods have emerged as powerful tools for constructing inferential procedures in modern statistical data analysis. This article suggests a practical algorithm for building a regression model by the bootstrap resampling method and gives parameter estimates of the models used for estimating main carcass components of Awassi lambs. Special attention is given to the estimation of regression parameters, their standard errors, and confidence intervals by the bootstrapping regression method, and to comparing the results with ordinary least squares estimates. As a result, the bootstrap regression method gives generally smaller standard errors and confidence intervals than ordinary least squares regression, so the models MC (carcass muscle) = 214.198 + 3.808 MLL (muscle in long leg) + 4.866 MN (muscle in neck), BC (bone in carcass) = 605.904 + 3.641 BLL (bone in long leg) + 3.634 BN (bone in neck), and FC (fat in carcass) = -6283 + 716.8 CW (carcass weight), obtained from the bootstrapping regression method for estimating the amounts of muscle, bone, and fat in the carcass of fat-tail Awassi lambs, are more suitable than the corresponding models from the ordinary least squares method.
INTRODUCTION
Partial dissection or sample joint dissection may be a good and simple tool to determine carcass composition, but it is practically inapplicable to commercial classification of carcasses. It is, however, very useful for researchers in terms of precision and ease of application (Fisher, 1990). The precision of the method depends on the predictors, animal species, and breeds. It is, therefore, very important to determine the best predictive components of the carcass for different animals. In the literature there are many attempts to predict the carcass composition of western breeds (Kempster et al., 1982) by partial dissection methods, but not enough for fat-tail Awassi lambs.
The linear regression method is one of the tools most often used by researchers to fit a model for estimation. Researchers are interested in finding estimates of the bias and variance of the estimator of $\beta$. They are also interested in constructing confidence intervals for $\beta$ and prediction intervals for a future observation with explanatory variables $x_j$. Some major modelling assumptions, namely that (i) the error term has constant variance, (ii) the errors are uncorrelated, and (iii) the errors are normally distributed, are very important for the regression model. Assumption (iii) in particular is required for hypothesis testing and interval estimation. The validity of these assumptions should always be questioned, and analyses should be conducted to examine the adequacy of the tentatively entertained model. Gross violations of the assumptions may yield an unstable model, in the sense that a different sample could lead to a totally different model with opposite conclusions (Montgomery and Peck, 1992). There are several methods useful for diagnosing and treating violations of the regression assumptions. Robust estimation strategies and residual diagnostics have improved the usefulness of these techniques. However, the assumptions may still not be satisfied even when these methods are used. In such cases, the bootstrap adds another dimension to the subject.
In this study we constructed an algorithm for bootstrapping in regression analysis and estimated the parameters of the models to be used for estimating the main carcass components: muscle, bone, and fat. The results were compared with ordinary least squares regression.
Material
Sixty fat-tail Awassi male lamb carcasses obtained in different feeding experiments in the Animal Science Department, Faculty of Agriculture, University of Çukurova (Turkey) were used. The lambs were fed ad libitum with total mixed rations (90% concentrate and 10% lucerne straw with 1-2 chop length) containing 2.25-2.50 Mcal ME/kg and 140-180 g CP/kg. The fattening period was 56 days, and initial live weights of the lambs varied from 22 to 26 kg. The carcass dissection was performed according to Colomer-Rocher et al. (1987). According to this method, the amounts of carcass tissue were calculated from six sections of the carcass (long leg, shoulder, neck, flank, the first five ribs, and the remaining ribs) by summing up the same carcass components from the left side of the carcass.
Daily gain, weights of kidney and channel fat, omental fat, fat tail and bone, muscle, intermuscular fat, subcutaneous fat, total fat in each joint together with eye muscle measurements (width of muscle, depth of muscle, depth of subcutaneous fat over muscle and depth of subcutaneous fat over the ventral edge of muscle serratus dorsalis) were used as variables to chose best predictors of carcass bone, muscle, fat.
Methods
The usual linear regression model is $$Y = X\beta + \varepsilon, \qquad (1)$$ where $Y = (y_1, y_2, \ldots, y_n)'$ denotes the $n \times 1$ vector of the response and $X = (x_1, x_2, \ldots, x_n)'$ is the $n \times k$ matrix of regressors, where the $k \times 1$ vector $x_i$ denotes the regressors for the $i$th observation, $k$ is the number of independent variables, and $\varepsilon$ is an $n \times 1$ vector of uncorrelated error terms having mean 0 and variance $\sigma^2$ (Cook, 1977; John, 1980, 1981). The $p \times 1$ vector $\beta$ holds the unknown parameters, for which the ordinary least squares (OLS) estimator is $$\hat\beta = (X'X)^{-1}X'Y, \qquad (2)$$ where $p$ is the number of parameters. It follows that $\mathrm{Var}(\hat\beta) = \sigma^2(X'X)^{-1}$. Because $\sigma^2$ is not usually known, $\mathrm{Var}(\hat\beta)$ is estimated by $$\widehat{\mathrm{Var}}(\hat\beta) = s^2(X'X)^{-1}, \qquad (3)$$ where $s^2$ is the unbiased variance estimator computed from the residuals $e_i = y_i - \hat y_i$, $i = 1, 2, \ldots, n$ (Atkinson, 1981; Chatterjee and Hadi, 1986).
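The estimator (2) and the covariance estimate (3) translate directly into a few lines of linear algebra. The following numpy sketch uses synthetic data for illustration; the coefficients and noise scale are arbitrary assumptions, not the study's values.

```python
# A compact numpy sketch of the OLS estimator and its estimated covariance,
# matching Equations (2) and (3); the data are synthetic for illustration.
import numpy as np

rng = np.random.default_rng(1)
n, k = 60, 2
X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])  # intercept + k regressors
beta_true = np.array([200.0, 3.8, 4.8])
y = X @ beta_true + rng.normal(scale=50.0, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y              # (X'X)^{-1} X'Y, Equation (2)
resid = y - X @ beta_hat                  # residuals e_i = y_i - yhat_i
s2 = resid @ resid / (n - X.shape[1])     # unbiased estimator of sigma^2
cov_beta = s2 * XtX_inv                   # estimated Var(beta_hat), Equation (3)
se = np.sqrt(np.diag(cov_beta))
print(beta_hat, se)
```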
Bootstrapping is a broadly applicable, nonparametric approach to statistical inference that substitutes intensive computation for more traditional distributional assumptions and asymptotic results. The bootstrap aims to draw many subsamples from the sample in order to obtain the sampling distribution of an estimator, and to use this distribution to obtain a better estimator of the population parameters (Mooney and Duval, 1993). The bootstrap method rests on the similarity between sample and population. In addition, while ordinary sampling techniques rely on assumptions about the form of the estimator's distribution, the bootstrap resampling method does not need these assumptions, because it treats the sample data as the population. Bootstrapping exploits the central analogy: the population is to the sample as the sample is to the bootstrap samples. Consequently,
• the bootstrap observations are analogous to the original observations;
• the bootstrap mean is analogous to the mean of the original sample;
• the mean of the original sample is analogous to the unknown population mean;
• the distribution of the bootstrap sample means is analogous to the unknown sampling distribution of means for samples of size n drawn from the original population.
The bootstrap can be used to derive accurate standard errors, confidence intervals, and hypothesis tests for most statistics. Bootstrap resampling techniques can also be used to obtain regression parameter estimates, their standard errors, and confidence intervals, and they usually give better estimates than classical methods without needing the above assumptions.
A finite total of $n^n$ possible bootstrap samples exists. If the parameter estimates were computed for each of these $n^n$ samples, the true bootstrap estimates of the parameters would be obtained, but such extreme computation is wasteful and unnecessary (Stine, 1990). The number of bootstrap replications $B$ depends on the application and the size of the sample. It has been suggested that $B = 100$ bootstrap replications are sufficient for standard error estimates, $B = 1000$ for confidence interval estimates, and $50 \le B \le 100$ for standard deviation estimates (Efron, 1990; Leger et al., 1992).
It has been pointed out in the literature that two different bootstrap resampling methods can be used in regression analysis. The choice between the two methods depends on whether the regressors are fixed or random. If the regressors are fixed, the bootstrap resamples the error terms. If the regressors are random, the bootstrap resamples pairs of observations (Stine, 1990; Shao, 1996).
Here, an algorithm is given for bootstrapping regression models based on resampling observations. This approach is usually applied when the regression models are built from data whose regressors are as random as the response. Let the $(k+1) \times 1$ vector $w_i = (y_i, x_i')'$ collect the values associated with the $i$th observation. In this case, the set of observations is the set of vectors $(w_1, w_2, \ldots, w_n)$. The steps of the bootstrapping-with-random-regressors algorithm, sketched in code after the next paragraph, are: (a) draw an $n$-sized sample from the population randomly; (b) draw an $n$-sized bootstrap sample $(w_1^{*(b)}, w_2^{*(b)}, \ldots, w_n^{*(b)})$ with replacement from the observations, giving probability $1/n$ to each $w_i$ (Wu, 1986), and label the elements of each vector accordingly, where $j = 1, 2, \ldots, k$ and $i = 1, 2, \ldots, n$.
From these, form the bootstrap response vector $Y^{*(b)}$ and regressor matrix $X^{*(b)}$, and compute the bootstrap estimate $\hat\beta^{*(b)}$ by applying (2) to the resampled data.
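A minimal sketch of steps (a)-(b) and the refit, assuming numpy arrays X (with an intercept column) and y shaped as in the OLS sketch above; the function names and the default B are illustrative choices.

```python
# Pairs (random-regressor) bootstrap: resample rows (w_i = (y_i, x_i')) with
# replacement and refit OLS on each resample to collect beta^{*(b)}.
import numpy as np

def ols(X, y):
    # Least squares fit; equivalent to (X'X)^{-1} X'y but numerically stabler.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def bootstrap_betas(X, y, B=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    betas = np.empty((B, X.shape[1]))
    for b in range(B):
        idx = rng.integers(0, n, size=n)  # each w_i drawn with probability 1/n
        betas[b] = ols(X[idx], y[idx])    # beta^{*(b)} from the b-th resample
    return betas
```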
An illustrative application of the bootstrap algorithm steps given above for the estimation of $\beta$ is shown in Table 1, using the muscle-in-carcass data. The variance-covariance matrix of $\hat\beta^*$ under the bootstrap distribution $F(\hat\beta^*)$ is calculated from the replicates (Liu, 1988; Stine, 1990), as in (8). The bootstrap confidence interval for $\beta_j^*$ based on the normal approximation is $$\hat\beta_j^* \pm t_{n-p,\alpha/2}\, S_e(\hat\beta_j^*), \qquad (9)$$ where $\hat\beta_j^*$ is the $j$th bootstrap estimate, $S_e(\hat\beta_j^*)$ is the standard error of the $j$th bootstrap estimate, and $t_{n-p,\alpha/2}$ is the $t$ value with $n-p$ degrees of freedom at significance level $\alpha/2$ (Diciccio and Tibshirani, 1987).
A nonparametric confidence interval for $\beta^*$, called the percentile interval, can be constructed from the quantiles of the bootstrap sampling distribution of $\hat\beta^{*(b)}$. The 95% percentile interval is $\left(\hat\beta^{*(\mathrm{lower})},\ \hat\beta^{*(\mathrm{upper})}\right)$, where the $\hat\beta^{*(b)}$ are the ordered bootstrap estimates of the regression coefficient from Equation 5, $\mathrm{lower} = 0.025B$, and $\mathrm{upper} = 0.975B$.
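Given the replicates $\hat\beta^{*(b)}$ from the previous sketch, the bootstrap standard errors, the normal-approximation interval (9), and the percentile interval can be computed as follows; scipy is an assumed dependency used only for the t quantile.

```python
# Bootstrap summaries: standard errors (Eq. 8), the normal-approximation
# interval (Eq. 9), and the 95% percentile interval from the replicates.
import numpy as np
from scipy import stats

def bootstrap_summary(betas, n, alpha=0.05):
    p = betas.shape[1]
    beta_star = betas.mean(axis=0)
    se = betas.std(axis=0, ddof=1)                  # bootstrap standard errors
    t = stats.t.ppf(1 - alpha / 2, df=n - p)
    normal_ci = np.stack([beta_star - t * se, beta_star + t * se], axis=1)
    pct_ci = np.quantile(betas, [alpha / 2, 1 - alpha / 2], axis=0).T
    return beta_star, se, normal_ci, pct_ci
```

For example, `bootstrap_summary(bootstrap_betas(X, y), n=len(y))` returns the bootstrap point estimates together with both interval types for every coefficient.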
The skewness of the distribution $F(\hat\beta^*)$ of the replicates from step (e) for $\hat\beta_j^{*(b)}$ can be assessed by examining the shape of the distribution plots of the $\hat\beta_j^*$. These plots show a histogram of the replicates with an overlaid smooth density estimate. A solid vertical line is plotted at the observed parameter value, and a dashed vertical line at the mean of the replicates.
The statistical packages Excel, S-Plus for Windows, and SPSS for Windows were used for the statistical analysis of these data.
RESULTS
The amounts of lean, bone, and fat tissue in the lamb carcass were related to other carcass components and measurements. Non-significant variables were identified by inspecting variable-selection statistics and were omitted (Draper and Smith, 1981; Chatterjee and Teebagy, 1990). At each stage, outliers, leverage points, and influential points were identified, and after checking for mistakes such as data-entry and measurement errors, the outliers were omitted (Belsley et al., 1980; Şahinler, 2000). At the same time, the models were checked with regard to the assumptions of the ordinary least squares method, autocorrelation, and collinearity (Willan and Watts, 1978; Montgomery and Peck, 1992; Şahinler and Bek, 2002).
The parameter estimates, their standard errors and confidence intervals, and some related descriptive statistics from the ordinary least squares and bootstrapping regression methods for the estimation of muscle, bone, and fat in the carcass are given in Table 2. The heteroscedasticity of the error term was checked for each model. The distributions of the $\hat\beta_j^{*(b)}$ for all regression models obtained from bootstrapping are normal.
Muscle in carcass
After selecting the independent variables by the stepwise method, the muscle in long leg (MLL) and muscle in neck (MN) variables entered the model, and the ordinary least squares fit to the data is: MC = 200.043 + 3.83 MLL + 4.81 MN (11). According to the results in Table 2, the regression in Equation 11 is significant (P<0.01) and all of the regression coefficients β0, β1, and β2 are significant (P<0.01). The standard errors of the coefficients are 437.877 and 0.378, and the confidence intervals are (-676.79 - 1076.88), (3.073 - 4.588), and (3.051 - 6.569), respectively. The Durbin-Watson test statistic (d) was calculated as an autocorrelation diagnostic and found to be 1.787. Since d = 1.787 is greater than dU = 1.65, there is no autocorrelation problem in the error term. VIF statistics were calculated as collinearity diagnostics and found to be VIF_MLL = 1.875 and VIF_MN = 1.875. Since both VIFj (= 1.875) < 10, there is no collinearity problem between the MLL and MN variables (Table 2). The studentized deleted residuals were plotted against X_MN to check the heteroscedasticity of the error term for the model in Equation 11, and no heteroscedasticity problem was seen in the muscle-in-carcass data (Figure 1a). Thus, the model in Equation 11 satisfies the assumptions of ordinary least squares. The corresponding bootstrapping regression fit is: MC = 214.198 + 3.808 MLL + 4.866 MN (12). According to the results in Table 2, the bootstrap regression method gives generally smaller standard errors and confidence intervals than ordinary least squares regression. Therefore, the model in Equation 12 is more suitable than the model in Equation 11 for estimating the amount of muscle in the carcass of Awassi lambs.
Bone in carcass
The bone in long leg (BLL) and bone in neck (BN) variables entered the regression model among all of the independent variables, and the ordinary least squares equation for bone in carcass (BC) was fitted as: BC = 606.418 + 3.647 BLL + 3.613 BN (13). The regression in Equation 13 is significant (P<0.01), R² = 0.866, and all of the regression coefficients β0, β1, and β2 are significant (P<0.01). The standard errors of β0, β1, and β2 are 142.714, 0.298, and 0.458, and the 95% confidence intervals are (320.64 - 892.20), (3.050 - 4.243), and (2.696 - 4.531), respectively. According to the Durbin-Watson test statistic (d = 2.204) and the VIF statistics (VIF_BLL = 1.24 and VIF_BN = 1.24), there is neither an autocorrelation problem in the error term nor a collinearity problem between the BLL and BN variables (Table 2). Heteroscedasticity of the error term for the bone-in-carcass data was not detected in the plot of studentized deleted residuals against X_BLL (Figure 1b). Thus, the model in Equation 13 satisfies the assumptions of ordinary least squares. The corresponding bootstrapping regression fit is: BC = 605.904 + 3.641 BLL + 3.634 BN (14). The bootstrap standard errors of β*0, β*1, and β*2 are 120.904, 0.270, and 0.441, respectively. The bootstrap confidence and percentile intervals of β*0, β*1, and β*2 are (368.93 - 842.88), (3.11 - 4.17), (2.77 - 4.50) and (370.88 - 851.52), (3.10 - 4.17), (2.69 - 4.41), respectively (Table 2). According to these results, the bootstrap regression method gives generally smaller standard errors and confidence intervals than ordinary least squares regression. Therefore, the model in Equation 14 is more suitable than the model in Equation 13 for estimating the amount of bone in the carcass of Awassi lambs.
Fat in carcass
The fitted ordinary least squares equation for fat in carcass is: FC = -6297 + 716.751 CW (15), where FC is fat in carcass (g) and CW is carcass weight (kg). For Equation 15, the regression and its coefficients are significant (P<0.01), and R² = 0.832. The estimates of β0 and β1 are -6297 and 716.751, with confidence intervals (-7895.5 - (-4699.1)) and (632.136 - 801.365), respectively. According to the Durbin-Watson test statistic (d = 1.695) (Table 2) and the plot of studentized deleted residuals against X_CW (Figure 1c), neither an autocorrelation problem in the error term nor heteroscedasticity of the error term was detected for the fat-in-carcass data. Thus, the model in Equation 15 satisfies the assumptions of ordinary least squares. The corresponding bootstrapping regression fit is: FC = -6283 + 716.8 CW (16). The bootstrap standard errors of β*0 and β*1 are 798.2 and 40.91, respectively. The bootstrap confidence and percentile intervals of β*0 and β*1 are (-7879.4 - (-4686.6)), (634.98 - 798.62) and (-7878.7 - (-4702.1)), (634.08 - 797.72), respectively (Table 2). According to these results, the bootstrap regression method gives generally smaller standard errors and confidence intervals than ordinary least squares regression. Therefore, the model in Equation 16 is more suitable than the model in Equation 15 for estimating the amount of fat in the carcass of Awassi lambs.
DISCUSSION
The most important advantages of the bootstrap regression method are that it gives smaller standard errors and needs smaller samples than the ordinary least squares method. On the other hand, its practical performance is frequently much better, but this is not guaranteed (Hawkins and Olive, 2002). For this reason, it is a mistake to expect the bootstrap regression method always to give reliable results. The reliability depends on the structure of the data and the distribution function. Moreover, the application of resampling methods depends on the development of computer technologies.
Examination of the results in Table 2 shows that there is no difference between the regression coefficients obtained from the ordinary least squares and bootstrap regression methods (P>0.05), except for the regression coefficient for muscle in carcass (β2 = 4.81 and β*2 = 4.866). Nevertheless, the bootstrap regression method gives regression coefficients with generally smaller standard errors and confidence intervals than the ordinary least squares regression method. A similar result was reported by Efron (1979). However, the bootstrap regression method does not always give a smaller standard error than the ordinary least squares method, as with the regression coefficient in the fat-in-carcass model (S.E.(β1) = 42.271 and S.E.(β*1) = 43.91). Fox (1997) also reported similar results. Therefore, the models in Equations 12, 14, and 16 are more suitable than the models in Equations 11, 13, and 15 for estimating the amounts of muscle, bone, and fat in the carcass of Awassi lambs, respectively.

CONCLUSIONS

As a result, the most diagnostic parts of the fat-tail Awassi lamb carcass for muscle and bone in carcass may be considered to be the muscle and bone amounts in the long leg and neck. Carcass weight is the most diagnostic measure for fat in the carcass of fat-tail Awassi lambs.
The study of Wilson disease in pregnancy management
Introduction Pregnancy management in women with Wilson disease (WD) remains an important clinical problem. This research was conducted to investigate how to avoid worsening of WD symptoms during pregnancy and increase pregnancy success in women with WD by identifying the best pregnancy management approaches in these patients. Patients and methods The clinical data of 117 pregnancies among 75 women with WD were retrospectively analyzed. Related information on the fetuses was also recorded and analyzed. At the same time, analysis was performed for the data of 22 pregnant women without WD, as normal controls. Results Of a total of 117 pregnancies among the 75 women with WD and 31 pregnancies among the 22 control women included in this study, there were 108 successful pregnancies and 9 spontaneous abortions. Among the 108 successful pregnancies, 97 women had a history of copper chelation therapy before pregnancy; all 97 women stopped anti-copper therapy during pregnancy. The nine women with spontaneous abortion had no pre-pregnancy history of copper displacement therapy. The incidence of lower limb edema was higher in the WD group than in normal controls (P = 0.036). Compared with the control group, there was a higher proportion of male infants in the WD group (P = 0.022) and a lower average infant birth weight (t = 3.514, P = 0.001). Conclusion It is relatively safe for women with WD to become pregnant. The best management method for pregnancy in women with WD may be intensive pre-pregnancy copper chelation therapy and no anti-copper treatment during pregnancy.
Introduction
Wilson disease (WD) is an autosomal recessive genetic disorder that can be treated at present [1]. WD is caused by mutations affecting the ATP7B copper-transporting protein, which lead to excessive copper deposition and a variety of symptoms in different organs [2]. The most common copper deposition sites are the liver and the brain [1]. Currently, the main treatment method involves copper-chelating agents, such as penicillamine and trientine [3][4][5]. Few studies have reported on pregnancy in patients with WD, and the findings of the existing research vary considerably. It has been reported that symptoms of liver and brain damage are significantly aggravated during pregnancy in women with WD [6][7][8][9], but a large number of successful pregnancies have also been reported in these patients [10][11][12][13][14][15]. Research on pregnancy in WD is of great importance not only to clinicians who treat these patients but also to women with WD who are pregnant or planning to become pregnant [16]. Therefore, a retrospective analysis was conducted of pregnant women with WD in China, focusing on how to avoid the aggravation of disease symptoms during pregnancy and increase the pregnancy success rate of women with WD, in order to find the best management method for this patient population.
Patients
We collected the clinical data of 75 female patients with WD who had previous pregnancy experience (a total of 117 pregnancies) and who were admitted to the affiliated hospital of the Institute of Neurology, Anhui University of Chinese Medicine, from January 2014 to December 2018. All patients with WD enrolled in this study met the diagnostic criteria for WD [16,17]. In addition, 22 pregnant women without WD, with 31 pregnancies, from Hefei Maternal and Child Health Care Hospital were selected as normal controls; the controls were age-matched with the case group (t = 0.352, P = 0.728). Subjects who had abnormal examinations before pregnancy were excluded. This study was approved by the ethics committee of Anhui University of Chinese Medicine, and all participants signed informed consent.
Research methods
The clinical data of the 117 pregnancies among the 75 women with WD were retrospectively analyzed before, during, and after pregnancy; relevant data of the fetus were also statistically analyzed. In addition, the pregnancy course and pregnancy outcome of the 31 pregnancies in the 22 control patients were also reviewed.
Statistical methods
Quantitative data consistent with a normal distribution are described as mean ± standard deviation (SD), and the pregnancy status of women in the WD and control groups was compared using a t-test for two independent samples. Sample sizes and percentages (%) were used to describe the categorical data, and the chi-square test (or Fisher's exact probability method) was used to compare the two groups. All statistical analyses were performed using IBM SPSS 23.0 (IBM Corp., Armonk, NY, USA), and P < 0.05 (two-tailed) was considered statistically significant.
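For illustration, the two kinds of comparison described here can also be reproduced with scipy rather than SPSS; the arrays below are made-up placeholders, not the study data.

```python
# An illustrative scipy sketch of the comparisons described above.
import numpy as np
from scipy import stats

wd_weights = np.array([2900, 3100, 2800, 3000])    # placeholder birth weights, WD group (g)
ctrl_weights = np.array([3300, 3400, 3200, 3500])  # placeholder control group (g)
t_stat, p_val = stats.ttest_ind(wd_weights, ctrl_weights)  # two independent samples

# Chi-square test for a 2x2 table, e.g. infant sex by group (placeholder counts)
table = np.array([[70, 38], [12, 19]])
chi2, p, dof, expected = stats.chi2_contingency(table)
print(t_stat, p_val, chi2, p)
```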
Clinical data of patients with WD
Among the 75 enrolled women with WD, a total of 117 pregnancies were recorded. The age at childbirth of the women with WD was 22-34 years, with an average of 27.72 ± 2.79 years, and the age at hospitalization for symptoms of WD was 24-36 years, with an average of 30.43 ± 2.99 years. The 117 pregnancies in the 75 women with WD resulted in 108 successful pregnancies and 9 spontaneous abortions. Among these, 97 of the women with the 108 successful pregnancies had a history of hospitalization and intensive copper displacement therapy before pregnancy; these 97 women stopped taking anti-copper drugs during pregnancy. The nine women who had spontaneous abortions had no history of copper displacement therapy before pregnancy. Changes were observed in the pregnancy condition of the included women with WD: in 21 pregnancies, 17 women with WD had obvious liver injuries, and 10 women with WD (in 10 pregnancies) had severe neurological symptoms.
Comparison of pregnancy complications between normal patients and those with WD
There were 117 and 31 pregnancies among the 75 patients with WD and the 22 control patients, respectively, which were matched according to age in the analysis. Postpartum complications and pregnancy complications in the two groups were compared. Lower extremity edema was the main postpartum complication, and the difference was statistically significant (χ² = 10.482, P = 0.036), as shown in Table 1.
Childbirth in normal controls and women with WD
The proportion of male infants was higher among women with WD than among women in the control group (χ² = 5.249, P = 0.022). The average infant birth weight in the WD group was lower than that in the control group, and the difference was statistically significant (t = 3.514, P = 0.001). However, there was no statistically significant difference between the two groups in terms of natural delivery or fetal Apgar score, as shown in Table 2.
Discussion
WD is an autosomal recessive hereditary disease that is currently treatable [3,4]. Many patients with WD are women of childbearing age who are diagnosed at an early stage of the disease [17,18]. Some women with WD become pregnant after disease onset and diagnosis; therefore, further research on the management of pregnant patients with WD is urgently needed. This retrospective analysis was conducted using data from pregnant women with WD in China, focusing on several aspects of and choices in pregnancy, to better guide women of childbearing age who have WD.
In the present retrospective analysis of 117 pregnancies in 75 women with WD, there were 108 successful pregnancies (92.3%) with successful deliveries. The vast majority of the patients included in this study had a history of pre-pregnancy hospitalization during which they received copper chelation therapy, as well as disease evaluation during the first half of pregnancy. These results are not completely consistent with the literature [15,19]. Therefore, it is strongly recommended that women with WD undergo systematic evaluation and treatment before pregnancy. Studies have found that the clinical effects of WD on pregnancy outcomes mainly include neurological symptoms in pregnant women, as well as spontaneous abortion [7,8,19]. The data analyzed in the present study showed that neurological symptoms were more frequent among women with WD than spontaneous abortions or liver and bone damage. These findings are consistent with the results of the present analysis, in which all women with spontaneous abortion had no prior history of anti-copper therapy.
Analysis of the present data showed that hospitalized women with WD received copper displacement treatment and disease evaluation before their pregnancy; these patients stopped taking copper displacement medications during pregnancy. This finding is inconsistent with reports in the recent literature [13,14]; additional multicenter studies are needed to clarify this issue. The authors believe that the clinical symptoms in these patients are relatively reduced in pregnancy, which could be related to the normal metabolism of copper by the fetus. The present analysis also found that patients readmitted to hospital after childbirth with symptoms of liver injury gradually recovered after copper chelation treatment. However, recovery was slower after anti-copper treatment in patients readmitted to the hospital after childbirth with aggravated neurological symptoms. These findings are in line with those of recent reports [7,19]. The aggravation of neurological symptoms in women with WD during pregnancy has been widely reported [6,7,19], but the specific mechanism has not been further explored. The present data analysis indicated that neurological symptoms were aggravated in 10 pregnant women with WD, and 24-h urine monitoring showed that copper levels were not very high. Therefore, the aggravation of neurological symptoms in patients with WD may not be completely consistent with excessive copper deposition in the body, which differs considerably from published reports [6,7,19].
Many researchers have stated that the most important factor influencing the pregnancy outcome of women with WD is continuous copper displacement therapy, and that continuous treatment is the best approach to avoid the aggravation of the disease and increase the success rate of pregnancy [10][11][12][13][14][15]20]. The authors believe that the pre-pregnancy copper displacement treatment and condition assessment, followed by suspension of drug therapy during pregnancy is the best way to avoid aggravated disease symptoms during pregnancy and to increase the success rate of pregnancy in women with WD.
In our study population, women with WD were more likely than normal pregnant women to have the complication of lower limb edema during pregnancy, and more women in the WD group had male infants and infants with lower birth weight than women in the control group; there was no difference in the mode of delivery or fetal Apgar score between the groups. However, the exclusion of subjects who had abnormal examinations during pregnancy may have contributed to the limited sample size, especially in the control group, so whether our findings can comprehensively reflect the pregnancy status of women with WD requires confirmation in future studies with larger sample sizes.
Conclusions
To sum up, pregnancy in women with WD involves complicated issues, and prospective studies in this patient population are lacking; all published reports are retrospective analyses. It is relatively safe for women with WD to become pregnant. The best management method for pregnancy in women with WD may be intensive pre-pregnancy copper chelation therapy and no anti-copper treatment during pregnancy.
Parents as Coresearchers at Home
This article discusses the use of observational video recordings to document young children’s use of technology in their homes. Although observational research practices have been used for decades, often with video-based techniques, the participant group in this study (i.e., very young children) and the setting (i.e., private homes) provide a rich space for exploring the benefits and limitations of qualitative observation. The data gathered in this study point to a number of key decisions and issues that researchers must face in designing observational research, particularly where nonresearchers (in this case, parents) act as surrogates for the researcher at the data collection stage. The involvement of parents and children as research videographers in the home resulted in very rich and detailed data about children’s use of technology in their daily lives. However, limitations noted in the data set (e.g., image quality) provide important guidance for researchers developing projects using similar methods in future. The article provides recommendations for future observational designs in similar settings and/or with similar participant groups.
Introduction
Current research demonstrates that children are engaging with technology and going online at increasingly younger ages. In the United States, for example, 38% of children under age 2 used a mobile device in 2013, compared to only 10% in 2011 (Common Sense Media, 2013, p. 9). In Australia, a national study found that when compared to children in 25 other countries, Australian children were among the youngest first-time users of the Internet, at an average age under 8 on first use (Green et al., 2011, p. 7). Although a number of studies document the devices used by older children and adolescents (e.g., Foss et al., 2012; Large, Beheshti, & Rahman, 2002; Livingstone, 2002; Madden, Lenhart, Duggan, Cortesi, & Gasser, 2013; Rideout, Foehr, & Roberts, 2010), very few include data on device access for children under age 8 (Gutnick et al., 2010; see Marsh, 2005 and Vandewater et al., 2007, as examples). Similarly, many studies debate the merits and value of media viewing by young children, particularly from a developmental standpoint (e.g., Desmond & Bagli, 2008; Ellis & Blaski, 2004; Schlembach, 2012), but few of these use qualitative approaches to examine young children's experiences directly.
Overall, the research landscape related to young children's use of information technology—that is, where and how they use tablets, laptops, smartphones, and so on—is nascent, with only a few studies documenting children's activities (e.g., Davidson et al., 2014; Danby et al., 2013; Gutnick et al., 2010; Rideout et al., 2010; Rideout & Hamel, 2006; Spink, Danby, Mallan, & Butler, 2010). Often, studies of preschoolers' experiences with technology are focused on implications for classroom pedagogy and/or curriculum, such as early literacy and numeracy skills (e.g., Burnett & Merchant 2014; Plowman, Stevenson, McPake, Stephen, & Adey, 2011; Zevenbergen & Logan, 2008). Many studies rely on parent or teacher surveys of children's use, rather than naturalistic observations of children's activities, in either classroom or home environments. Although some studies have used qualitative approaches with young children directly (e.g., Davidson, 2012; McKechnie, 2004; O'Hara, 2008; Plowman et al., 2011; Spink et al., 2010), these types of studies are few in number. Burnett (2010) presents a systematic review of much of this body of literature, where she notes that the focus on formal childcare environments for studies with children under age 8 is based, in part, on 'the perceived difficulty of researching the experience of very young children' (p. 254).
This article discusses one recent study that used observational video recordings to document young children's use of technology in their homes. Although observational research practices have been used for decades, often with video-based techniques, the participant group in this study (i.e., very young children) and the setting (i.e., private homes) provide a rich space for exploring the benefits and limitations of qualitative observation. The data gathered in this study point to a number of key decisions and issues that researchers must face in designing observational research, particularly where nonresearchers (in this case, parents) act as surrogates for the researcher at the data collection stage. Before exploring the research design and findings from this study related to observational practice, an overview of the use of observational techniques across disciplines is warranted.
Review of the Literature: Observational Research Methods
Observational research practices have a long and rich history across disciplines, including both quantitative and qualitative techniques for data collection and analysis. Qualitative observational research is intended "to capture life as experienced by the research participants rather than through categories that have been predetermined by the researcher" (McKechnie, 2008b, p. 573). Often, qualitative research data are captured in natural settings, so as to document people's experiences in the world as they go about their daily lives. When data are gathered unobtrusively, this can provide a glimpse into individuals' behaviors that a researcher may not otherwise be able to see or document during data collection. The settings for observational research can range from open, public spaces, such as parks, shopping malls or community meeting rooms (e.g., Carey, McKechnie, & McKenzie, 2001; Fisher, Marcoux, Miller, Sánchez, & Ramirez, 2004; Stooke & McKenzie, 2009), to institutional settings such as libraries and schools (e.g., Gross, Dresang, & Holt, 2004; McKechnie, 2004; O'Hara, 2008). Studies in people's homes have also been conducted to document how people organize their living spaces, engage with family and friends, and use technology in their daily lives (e.g., Campos, Graesch, Repetti, Bradbury, & Ochs, 2009; Hartel, 2006; Plowman et al., 2011). In these studies, researchers make a number of decisions about the practice of observational research, from the data collection tools to be used to the level of engagement desired with study participants.
The Research Process
Qualitative observational research takes many forms and is associated with many different methodologies (e.g., ethnography, ethnomethodology, grounded theory, and participatory action research) and data collection techniques (e.g., fieldnotes, video/audio recordings, and document analysis). Where interviews, focus groups, and other methods rely on self-reporting of activity, retrospectively, observational methods allow researchers to gather data in real time, at the moment of engagement. Studies show, for example, that data gathered on what people say they do and what they really do, when observed, can be quite different (e.g., Lee, 2000). For this reason, observational methods are often used in conjunction with other methods to triangulate the sources of data to gain a richer and more complete understanding of people's experiences.
Some observational data are documented in obtrusive ways (i.e., with the participant fully aware of the researcher's presence). For example, a researcher may "shadow" an individual as he or she completes a task or engages in an activity, even asking questions or prompting the participant to discuss his or her actions during the investigation (e.g., Allard, Levine, & Tenopir, 2009; Cooper, Lewis, & Urquhart, 2004; Reddy & Spence, 2008). In other cases, unobtrusive observation is the goal (i.e., where the researcher's presence melts into the background while participants engage in their work). In these studies, the researcher uses various strategies to make his or her presence less visible. These strategies include wearing clothing similar to that worn by participants, sitting off to the side of the action so as not to interrupt the activities, using very small recording devices that will not be confronting for the participant, and making observations over time to allow participants to become accustomed to the researcher's presence (Lee, 2000; McKechnie, 2008b). In other studies, the design is covert (i.e., participants do not know they are being watched). The researcher and recording devices may be hidden from view (e.g., behind two-way glass), or the researcher may be present but not disclose to the participants that they are being observed (e.g., Becker & Marique, 2013; McKechnie, 2008a; Pettricrew et al., 2007). In all of these cases, the researcher's role in the process is a conscious decision that shapes the research design. The researcher may be a separate and independent observer of the action (i.e., with no interaction with participants) or a full participant in the activities; in some cases, a mix of approaches may be used, with the researcher's role changing as the investigation evolves. Taking on one of these roles is a conscious, planned part of the investigation, with implications for research design and, ultimately, the quality of data and analysis.
Research Design
In this project, unobtrusive observation was used to document young children's (i.e., aged 3 to 5) activities with technology in their homes. This method was part of a larger study of preschoolers' use of technology in eight early childhood centers in Queensland, Australia; the project also included surveys of teachers and parents as well as observations in preschool classrooms. Ethics approval was granted by research ethics boards at the two universities involved in the study, with consent and assent provided for the use of children's and family members' images in research publications. The consent process followed the Australian National Statement on Ethical Conduct in Human Research. In keeping with those guidelines, the process involved discussion between researchers and families around the process for data collection, including the fact that they had complete control over which sessions to record, for how long, and who was to be on screen. Details were also shared about the importance of gaining consent on an ongoing basis as sessions were recorded, to ensure that parents and children were comfortable with the data gathered. A sample of 15 children participated in the home-based observation, with parent volunteers recruited from the eight early childhood centers. Parents were invited to take a digital camera home for a period of 1 week and asked to record instances of a child's "everyday" use of information technology. Parents were asked to record their children engaging with computing devices in the home environment as part of their normal, daily activities (i.e., parents were instructed not to set up contrived activities but to record typical, everyday use). Parents were provided with written instructions on camera functionality and also asked to complete an inventory of technologies available in the home. The researchers did not engage with the parents during data collection; the cameras were returned to the researchers at the end of the data collection period. This is in keeping with video-based data gathering techniques used by such researchers as Michael Rich (2008), where participants are provided only with support for the mechanics of video recording to maintain a more direct, participant-driven approach to visual documentation (p. 915). The data set comprised a total of 29 hr of video recording, showing children using laptops, desktop computers, and a range of mobile devices. Individual sessions (i.e., a child engaging with technology in a single sitting) ranged from less than 2 min to more than 80 min. The total number of sessions ranged from a low of two per child (recorded on a single day during the week of data collection) to a high of 29 sessions per child (i.e., where use was recorded on every day of the data collection period).
A detailed descriptive analysis was conducted, using a modified "seating sweeps" (Given & Leckie, 2003) approach; this allowed the researchers to code the videos as though they were observing in the space, in real time, to document details of the types of technology, engagement activities, and people using the devices. The data were also analyzed using an inductive, thematic approach to explore emergent themes related to young children's everyday technology activities in the home. Complete results of these analyses are published elsewhere (e.g., Given et al., 2014). This article reports findings related to the use of the observational method, with a particular focus on the impact of video recording by parents on the types and quality of data gathered. Implications for the design of similar studies, in future, are explored.
Setting the Stage for Research: Parents and Children as Coresearchers
Parents used various strategies to incorporate the research camera into the activity spaces where their young children were using technology. At times, they placed the camera in a fixed location (e.g., on a bookshelf, on the edge of a desk), with the camera angled toward the child and/or the technology device being used. In other cases, parents held the camera themselves, recording the children's engagement with the devices during the sessions. The video recordings show the child participants' general awareness of the camera in the space. Henry, for example, altered the fixed position of the camera to access a document on the printer; once the document was retrieved, he moved the camera back to its original position. Jordie warned his sibling about the presence of the camera when she tried to grab a USB stick from the computer, so that she would not knock the camera over. Oliver moved out of the camera's frame and (a few seconds later) was told, by his mother, not to touch the camera. Figure 1 is an example of a screen shot taken with a fixed camera angle; here, mother and child engage with the computer together, with the computer screen visible in the background. Lighting issues, people blocking the camera, and the distance from the camera to the computer screen (i.e., limiting visibility of on-screen text) were quite common challenges in the home data recordings. In some cases (like this one), the child's face was not fully evident on screen; in others, the camera showed a close-up of the child's face only, without the computing device or any other contextual information visible in the frame. Figure 2 shows an example of a fixed camera shot providing a view of the laptop's keyboard; neither the computer screen (on the left) nor the child (on the right) is fully visible to the camera. Unfortunately, these types of camera angles limit the level of analysis possible, particularly if there are no sounds or actions to be heard or seen on camera. Although the data can be analyzed for what is in view (e.g., children's bodies in relation to technology), other contextual data (e.g., what the child is watching) cannot be incorporated into that analysis. It was also difficult, at times, to know where in the house the activity was taking place (e.g., kitchen, office, bedroom), due to the lack of contextual details captured in close-up camera shots.
In some cases, the camera captured footage of another part of the room, entirely. Figure 3 shows clothing on a table in one home; this was the focus of the recording for 12 min when the child participant moved the camera away from the computer activity.
Despite these kinds of challenges, parents were generally very adept at ensuring that the camera focused on the child and the technology, so that the data gathered were related to the project. In one session, for example, Oliver stood up and his mother then asked him to kneel, so that "the ladies" (i.e., the researchers) could see him on the screen. This is one, of many, examples where parents staged the child or elements in the space, physically, to ensure that activities would be captured on screen. Rosielyn was lying on her bed using her iPad, while her mother held the camera and recorded the activity. At one point, the mother repositioned the iPad to be seen on camera and also asked the child to move down further on the bed to be better positioned in the frame. This direct staging of the activity led to an extended exchange between Rosielyn and her mother. In addition to staging, this example also demonstrates that children were not always interested in using the technology devices at hand. At times, it is clear from the commentary between children and parents that some activities were being filmed to fulfill the goals of the study, rather than to capture activities in the moment that children chose to engage with technology on their own terms. Although the goal of naturalistic observation is to capture activities "in the moment" and "as they happen," leaving control of the video recording in coresearchers' (i.e., parents') hands means that some data gathered are the result of constructed activities designed to meet the project's needs. This is simply a limitation of the method that must be acknowledged by researchers, since the use of hidden cameras or stop-action data collection techniques (i.e., where recording is prompted by an activity's start and end) may not be practical in the home setting.
Rosielyn and her mother's interaction is also an interesting example of a potential lost opportunity in the data collection process, where the research tool (i.e., the camera) could have served as an educative tool or another object of play for the child or have some other purpose. What might have happened in this exchange if Rosielyn's mother had shown her daughter how the camera worked? What if the mother had turned the camera around to take a picture of her, as Rosielyn asked? Although the parent in this case is clearly ''on task'' with the observational activity and attempting to capture her child using a computing device, the scope of the data gathered is somewhat limited. If the parents in this study had been prompted to define the use of digital technologies more broadly, the data gathered may have provided a richer picture of children's understandings of the integration of these technologies alongside (and into) computing devices themselves. Although a parent may make a distinction between computers and cameras, and record data that he or she judges to be appropriate for the project, a researcher may envision a number of other possibilities that may have arisen on-screen if the camera were in the researcher's own hands. This is something to consider when crafting explanatory materials and/or comments to share with coresearchers during the orientation to data collection; researchers need to carefully consider how they describe relevant, potential data to the coresearchers, to ensure that emergent design within the scope of the research problem can be facilitated.
It is also noteworthy that, across the data set, many children discussed the camera's presence and purpose with their parents and siblings. Although the children had been informed of the recording activity as part of the informed consent process, their comments and questions demonstrate an ongoing learning process about the role of the camera in the home and the type of information being recorded. Jordie asked his father if the camera was taping what he was doing on the computer. Helen asked about the camera and her mother asked her to use her iPad; Helen responded by complaining that she did not have anything to do on the iPad and chose to watch her sister play with another device. Rory was aware of the camera's presence during one recorded session, when his parents discussed what should be recorded; in their conversation, he provided information about the types of recordings being made in the preschool (i.e., "kindergarten") to give guidance on what to record in the home. This exchange is an interesting one, as it demonstrates both the child's and his parents' understanding of the research activity, generally, as well as their conscious desire to provide appropriate data to the research team (see Danby & Farrell, 2004). Rory, whose activities were also being recorded in the preschool classroom environment, could serve as a bridge between the research team and the parents. He provided valuable context for his parents about what could be recorded, making clear that "everything" is valuable, not just his internet searching activities, as his mother first believed.
Repositioning Research Technology: Engaging With the Camera as an Object of Play
In addition to seeing the research camera as a tool for data collection, a number of children also involved the camera directly in their activities. Children danced, laughed, and stared at the camera, putting themselves at the center of the action on screen. Cody and his younger sister, for example, jump on the bed, smile, and look at the camera while saying "Cheese!" (i.e., an acknowledgement that their picture is being taken). In Figure 4, Lara waves at the camera while keeping one hand on the computer. Here, the angle of the camera is positioned to capture the computer screen as well as the child's hands and side profile during searching activities. This type of placement allows the researcher to view some of the content of the searching activity, although the screen size prohibits access to details (such as specific search terms entered). The child's hands are in the frame, and so can be seen using the keyboard or moving the cursor on screen, and some of the child's expressions are visible to the camera.
In other cases, the participants used the research camera to create videos of their own making, rather than being the subject viewed on camera, themselves. In Figure 5, for example, Jordie is seen performing martial arts/dance moves in front of the camera; here, the computer or other digital technologies are not in the camera's frame, at all. It could be that parents recorded a range of activities in order to make the recording device a part of overall everyday activities; this is an area that requires further research as to the role of data collection tools in the home environment, generally. Tina provides another example of the child as a producer of on-screen action; this participant used a dolphin-shaped cookie cutter and a Barbie doll to extend her play with the Barbie in a Mermaid's Tale video on the computer screen. She held each of these items up to the camera, having the dolphin and the Barbie swim along with the music in the frame (see Figure 6); at times, these actions mirrored the activities that characters engaged in on the video itself.
These examples are important pieces of data, because they demonstrate how technology is positioned within the child's broader play environment. First, while Rosielyn believed that the video camera was simply a still-image camera, Tina and Jordie's use of the research tool demonstrates an understanding of video technology. As these children perform for the camera, and as they engage in video production in their own way, they are using all the available technology in the room to their advantage. While Jordie's computer is removed from the frame of action, with the camera focused on the child, himself, Tina integrates her use of the computer with the camera activity.
Although researchers such as Conrad (2008), Norris (2009), and Gibson (2005) discuss ways to design studies and analyze data using performative aspects as points of analysis, these were not the focus or intent of our project. In addition, some research designs with very young children may preclude the level of conscious cocreation of data that would be relevant to performative design; however, this would be an interesting avenue for research exploration in future. Although one could argue that parents are cocreators of the performative aspects in this study, most of the data gathering was neither staged nor consciously constructed to facilitate performative analysis. For example, a camera that is perched on a table and aimed at a computer, with no parent in the room to direct the activity, results in a very different data set than one where parents coengage with children in onscreen activities. There is a great deal of potential for future research designs where researchers can create opportunities for performative modes of engagement. In the current study, the performative aspects of the data set are emergent analytic findings, which point to the need for additional data and a changed research design to provide a fuller analysis using a performative lens.
Implications For Research Design
Despite the rich data gathered using this approach, the design raises a number of questions around what has not been recorded and what analytic opportunities may have been missed. Unfortunately, the project design does not allow for further investigation of these issues because additional data collection would be required. For example, interviews with parents about their video-recording strategies, including the choices they made about what data to include and exclude, would add a level of depth to the available data. Researchers need to consider the limitations of their project design and think about lessons learned; the sections that follow explore the limitations and strategies for future research design that have emerged from our experience in this project.
Limitations. There are a number of limitations in the data set that provide important guidance for development of future projects using similar methods. Some of these limitations relate to the quality of the images captured due to technical problems; low and indirect lighting, as well as poor camera angles, limited some of the data captured for analysis. Similarly, the recorder's in-built microphone did not always capture complete conversations (e.g., if a parent was speaking to the child from another room). In other cases, data limitations were due to the timing or context of the data collection process. As the camera was not capturing data 24 hr per day but was only turned on during a technology-related activity, the data set does not provide a complete picture of the context in which the activity occurred. Although we initially asked parents to provide written contextual details of children's recorded activities (using a template document), very few were received and these provided little content to guide analysis. This leaves a number of additional questions unanswered (e.g., What was happening just prior to the activity? What conversations did the family have, later in the day or week, about the child's technology use?). By extending the timeframe of data collection during the week, additional context would be captured that may provide new insights into children's activities. Also, although some of the camera angles provided data on the types of websites or iPad applications used, consistent and complete data on the tools and software children used would be useful. That said, it is important to recognize that extended timeframes and/or additional requests made of families may be inconvenient or simply inappropriate; as with all studies, there are trade-offs between a researcher's ideal data collection goals and what is reasonable or appropriate in a given research setting.
Strategies for future research design. There are a number of concrete strategies that could be implemented to resolve some of the issues noted previously, in future. These include: providing a workshop or detailed brochure for families on details related to videography (e.g., lighting and camera placement); using screen capture software (such as Camtasia) on home computing devices to capture information on the sites visited and tools used; using a secondary, omnidirectional microphone to extend the range of audio recordings; and setting multiple cameras in the space, with extended recording capabilities, to provide additional context related to the activity.

Figure 6. A child positions her Barbie doll in front of the camera, moving the doll to the music that is playing in the video on the computer screen.
Of course, many or all of these suggestions may not be practical or appropriate to implement in the home environment. Although Sarah Pink, for example, in her work Doing Visual Ethnography (2007), discusses many of the "technical procedures" required for video success, some of these strategies were not appropriate for our research design. She notes, for example, "when I interviewed people with video in their homes, I often collaborated with my interviewees to arrange that lights are strategically placed and switched on as we moved around video-recording" (p. 105). In our study, the research team wanted to ensure that coresearchers had a high level of control to record at times and in places that best suited their activities. Using a range of cameras and extending the timeframe across the week may also prove too intrusive. Similarly, installing screen capture software on home computers would require that parents be trained to ensure that the software was enabled only for children's activities, and that the computer have sufficient memory and hardware capacity to run a program such as Camtasia. Researchers need to explore these kinds of tensions when designing such studies to determine what will be the best fit.
In their book Researching the Visual, Michael Emmison and Philip Smith (2000) discuss strategies for gathering visual data on what they term "the most ubiquitous but least self-evident manifestation [of visual data]: the activities of people in everyday interaction" (p. 190). However, like many visual methods texts, Emmison and Smith discuss strategies for gathering data in the public domain or where the research team is present to observe. There is less guidance for researchers aiming to conduct research in the home environment, particularly in ways that will balance the need to gather visual data with respect for privacy in that space. These issues have a range of research ethics implications, as well, which must suit both the researchers' and the families' needs. However, the use of dedicated training materials or workshops to educate families on the use of the cameras may enhance the quality of some data without adding an undue burden on either the coresearchers or participants. It is important for researchers to consider these various elements when designing projects to ensure that the best quality data are captured, while respecting families' privacy as well as the time commitment involved in data collection. The families involved in this study were very generous with their time and did their best to capture relevant, useful data on behalf of the research team. The end result is a rich and engaging data set that provides a rare window into young children's engagement with technology in the home.
Conclusion
Overall, the involvement of parents and children as research videographers in the home resulted in very rich and detailed data about children's use of technology in their daily lives. In particular, it is noteworthy that the parents recorded interactions with their children that researchers would typically never observe or record in other (obtrusive) data collection settings. The type of personal and informal engagement observed between a parent and a child searching online, in the comfort of their home (e.g., Tina and her father, who were lying on the parents' bed with a younger sibling nearby; see Danby et al., 2013), would not be possible to replicate in a lab or a public space. The use of recording devices in the home allows researchers to capture these moments, as they happen, with little interference in the everyday activities of those involved. This type of recording provides powerful insights into intimate family moments where digital technology is central to rich interactions between the participant and other family members. The use of appropriate, fixed camera angles provided clear images of both the technological devices and the children (e.g., hand placement on the keyboard and computer mouse), while the dialogue captured between participants and their parents/siblings provided useful context for the observed activities (e.g., scaffolding and support provided for online searching). Further, the ability to derive still images from the video data set has proven very useful for the data analysis and writing process.
DEVELOPMENT AND EVALUATION OF OSMOTICALLY CONTROLLED ORAL DRUG DELIVERY SYSTEM
Conventional drug delivery systems have slight control over their drug release and almost no control over the effective concentration at the target site. This kind of dosing pattern may result in constantly changing, unpredictable plasma concentrations. Drugs can be delivered in a controlled pattern over a long period of time by the controlled or modified release drug delivery systems. They include dosage forms for oral and transdermal administration as well as injectable and implantable systems. For most of drugs, oral route remains as the most acceptable route of administration. Certain molecules may have low oral bioavailability because of solubility or permeability limitations. Development of an extended release dosage form also requires reasonable absorption throughout the gastro-intestinal tract (GIT). Among the available techniques to improve the bioavailability of these drugs fabrication of osmotic drug delivery system is the most appropriate one. Osmotic drug delivery systems release the drug with the zero order kinetics which does not depend on the initial concentration and the physiological factors of GIT. This review brings out new technologies, fabrication and recent clinical research in osmotic drug delivery. Osmotically controlled drug delivery systems use osmotic pressure for controlled delivery of active agent(s). Drug delivery from these systems, to a large extent, is independent of the physiological factors of the gastrointestinal tract.
Because of their unique advantages over other types of dosage forms, osmotic pumps form a class of their own among the various drug delivery technologies, and a variety of products based on this technology are available on the market.
INTRODUCTION
Many conventional drug delivery systems have been designed by various researchers to modulate the release of a drug over an extended period of time (1). The rate and extent of drug absorption from conventional formulations may vary greatly depending on factors such as the physico-chemical properties of the drug, the presence of excipients, and physiological factors such as the presence or absence of food and the pH of the gastro-intestinal (GI) tract (2). Moreover, drug release from oral controlled release dosage forms may be affected by GI motility and the presence of food in the GI tract (3). Drugs can be delivered in a controlled pattern over a long period of time by the process of osmosis. Drug delivery from this system is not influenced by the different physiological factors within the gut lumen, and the release characteristics can be predicted easily from the known properties of the drug and the dosage form (4). Osmosis can be defined as the spontaneous movement of a solvent from a solution of lower solute concentration to a solution of higher solute concentration through an ideal semipermeable membrane, which is permeable only to the solvent but impermeable to the solute. The pressure applied to the higher concentration side to inhibit solvent flow is called the osmotic pressure. The first osmotic effect was reported by Abbe Nollet in 1748. Later, in 1877, Pfeffer performed an experiment using a semipermeable membrane to separate a sugar solution from pure water. He showed that the osmotic pressure of the sugar solution is directly proportional to the solution concentration and the absolute temperature. In 1886, van't Hoff identified an underlying proportionality between osmotic pressure, concentration and temperature, which can be described by the following equation: π = n2RT, where π is the osmotic pressure, n2 is the molar concentration of solute in the solution, R is the gas constant and T is the absolute temperature. Osmotic pressure is a colligative property, which depends on the concentration of solute contributing to the osmotic pressure. Solutions of different concentrations having the same solute and solvent system exhibit an osmotic pressure proportional to their concentrations. Thus a constant osmotic pressure, and thereby a constant influx of water, can be achieved by an osmotic delivery system, resulting in a constant zero-order drug release rate (5).
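As a quick worked illustration of the van't Hoff relation (the numbers here are hypothetical, chosen only for arithmetic clarity, and are not taken from any cited study), the osmotic pressure of a 0.5 M solution of a non-dissociating solute at body temperature (310 K) is:

$$\pi = n_2RT = (0.5\ \mathrm{mol\,L^{-1}})(0.08206\ \mathrm{L\,atm\,mol^{-1}\,K^{-1}})(310\ \mathrm{K}) \approx 12.7\ \mathrm{atm}$$

For an ionic osmogent such as NaCl, the concentration would additionally be multiplied by the van't Hoff factor i to account for dissociation.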
BASIC COMPONENTS OF OSMOTIC SYSTEMS
Drug
Drugs such as metoprolol succinate, which have a short biological half-life and are used for prolonged treatment, are ideal candidates for osmotic systems.
Semipermeable membrane
An important part of the osmotic drug delivery system is the semipermeable membrane housing; the selection of the polymeric membrane is therefore key to the osmotic delivery formulation. The membrane should possess certain characteristics, such as impermeability to the passage of drug and other ingredients present in the compartments. It should also be inert and maintain its dimensional integrity to provide a constant osmotic driving force during drug delivery (6).
Osmotic agent
Osmotic agents maintain a concentration gradient across the membrane. They also generate a driving force for the uptake of water and assist in maintaining drug uniformity in the hydrated formulation. Osmotic components usually are ionic compounds consisting of either inorganic salts or hydrophilic polymers. Osmotic agents can be any salt, such as sodium chloride, potassium chloride, or the sulfates of sodium, potassium or lithium. Additionally, sugars such as glucose, sorbitol and mannitol, or inorganic salts of carbohydrates, can act as osmotic agents.
Coating solvent
Solvents suitable for making the polymeric solution used to manufacture the wall of the osmotic device include inert inorganic and organic solvents that do not adversely harm the core, wall and other materials. Typical solvents include methylene chloride, acetone, methanol, ethanol, isopropyl alcohol, butyl alcohol, ethyl acetate, cyclohexane, carbon tetrachloride and water. Mixtures of solvents such as acetone-methanol (80:20), acetone-ethanol (80:20), acetone-water (90:10), methylene chloride-methanol (79:21) and methylene chloride-methanol-water (75:22:3) can also be used (7).

Plasticizers
The type and amount of plasticizer used in the coating membrane also have a significant importance in the formulation of osmotic systems. Plasticizers can change the visco-elastic behavior of polymers, and these changes may affect the permeability of the polymeric films (8). Some of the plasticizers used are: polyethylene glycols; ethylene glycol monoacetate and diacetate (for low-permeability films); and triethyl citrate.
Pore forming agent
These agents are particularly used in pumps developed for poorly water-soluble drugs and in the development of controlled-porosity or multiparticulate osmotic pumps. These pore-forming agents cause the formation of a microporous membrane; the microporous wall may be formed in situ by leaching of the pore-former during the operation of the system. Pore formers can be inorganic or organic, and solid or liquid in nature. For example, alkali metal salts such as sodium chloride, sodium bromide, potassium chloride, potassium sulphate and potassium phosphate; alkaline earth metal salts such as calcium chloride and calcium nitrate; carbohydrates such as sucrose, glucose, fructose, mannose, lactose, sorbitol and mannitol; and diols and polyols such as polyhydric alcohols and polyvinyl pyrrolidone can be used as pore-forming agents.
Advantages of osmotic drug delivery system
1. A zero-order delivery rate is achievable with osmotic systems.
2. Release from osmotic systems is minimally affected by the presence of food in the gastrointestinal tract.
3. A high degree of in vitro-in vivo correlation (IVIVC) is obtained with osmotic systems.
4. For oral osmotic systems, drug release is independent of gastric pH and hydrodynamic conditions.
Limitations of osmotic drug delivery system
1. Special equipment is required for making an orifice in the system.
2. The residence time of the system in the body varies with gastric motility and food intake.
3. It may cause irritation or ulceration due to the release of a saturated solution of drug.
TYPES
Elementary osmotic pump (EOP)
The EOP was introduced in the 1970s to deliver drug at a zero-order rate for prolonged periods, and is minimally affected by environmental factors such as pH or motility. The tablet consists of an osmotic core containing the drug, surrounded by a semipermeable membrane with a laser-drilled delivery orifice. Following ingestion, water is absorbed into the system, dissolving the drug, and the resulting drug solution is delivered at the same rate as water enters the tablet. The disadvantage of the elementary pump is that it is only suitable for the delivery of water-soluble drugs (10,11).
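The release rate from an elementary osmotic pump is commonly described by the standard textbook expression below (stated here for completeness; it is not quoted from references 10 or 11), where $A$ is the membrane area, $h$ its thickness, $k$ its permeability to water, $\Delta\pi$ the osmotic pressure difference across the membrane, and $C$ the concentration of drug in the dispensed solution:

$$\frac{dM}{dt} = \frac{A}{h}\,k\,\Delta\pi\,C$$

This relation makes explicit why membrane area and thickness, together with the osmotic activity of the core, are the principal handles for tuning the release rate.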
Push-Pull Osmotic Pump (PPOP)
The push-pull osmotic pump is a modified elementary osmotic pump through which it is possible to deliver both poorly water-soluble and highly water-soluble drugs at a constant rate. The push-pull osmotic tablet consists of two layers: one containing the drug, and the other an osmotic agent and an expandable agent. A semipermeable membrane that regulates water influx into both layers surrounds the system. While the push-pull osmotic tablet operates successfully in delivering water-insoluble drugs, it has the disadvantage that complicated laser-drilling technology must be employed to drill the orifice next to the drug compartment (12).

Osmotic bursting osmotic pump
This system is similar to an EOP except that the delivery orifice is absent and the size may be smaller. When it is placed in an aqueous environment, water is imbibed and hydraulic pressure builds up inside until the wall ruptures and the contents are released to the environment. Varying the thickness as well as the area of the semipermeable membrane can control the release of drug. This system is useful to provide pulsatile release (13).
OROS-CT
OROS-CT (Alza Corporation) is used as a once- or twice-a-day formulation for targeted delivery of drugs to the colon. The OROS-CT can be a single osmotic unit or can comprise as many as five to six push-pull osmotic units filled in a hard gelatin capsule. After coming in contact with the gastric fluids, the gelatin capsule dissolves, and the enteric coating prevents entry of fluids from the stomach into the system. As the system enters the small intestine, the enteric coating dissolves and water is imbibed into the core, causing the push compartment to swell. At the same time, a flowable gel is formed in the drug compartment, which is pushed out of the orifice at a rate that is precisely controlled by the rate of water transport across the semipermeable membrane. Incorporation of a cyclodextrin-drug complex has also been used as an approach for delivery of poorly water-soluble drugs from osmotic systems; for example, sulfobutylether-β-cyclodextrin sodium salt serves as both a solubilizer and an osmotic agent (1).
Materials
Metoprolol succinate
Preparation of the Core Tablets
The controlled release matrix tablets of metoprolol succinate were prepared by the direct compression method. Table No. 2 shows the composition of each matrix formulation. Each formulation contained mannitol and NaCl as osmogents in various concentrations; the other excipients used were Tablettose, magnesium stearate, Aerosil and talc. The weight of each tablet was adjusted to 300 mg, and each tablet contained 47.5 mg of metoprolol succinate; batches F1-F3 were prepared. All the excipients, as per the weighing record, were dispensed and sifted through a 40-mesh sieve. The drug, Tablettose 100 and the osmotic agent were mixed and blended in a polybag for 10 min; magnesium stearate and colloidal silicon dioxide were then added and blended again for 5 min in the polybag, and tablets were compressed with an 8 mm round concave punch to produce the desired tablets.

Evaluation of uncoated/core tablets
Hardness
Although the hardness test is not an official test, tablets should have sufficient strength to withstand handling during packing and transportation. Hardness was measured using a Monsanto hardness tester; it is the pressure required to fracture a diametrically placed tablet by applying force. The hardness of 3 tablets from each batch was determined, and the mean hardness, expressed in kg/cm², was taken into account.

Weight Variation Test
20 tablets were weighed individually, the average weight was calculated, and each individual tablet weight was compared to the average as per the USP weight variation test.
Friability
The friability test was performed to assess the effect of friction and shocks, which may often cause tablets to chip, cap or break. A Roche friabilator was used for the purpose. Compressed tablets should not lose more than 1% of their weight (14,15). The percentage friability was calculated using the formula % F = {1 - (W/Wo)} × 100, where % F is the friability in percent, Wo is the initial weight of the tablets, and W is the weight of the tablets after the revolutions.
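As a worked illustration with hypothetical weights (these are not the study's measured values): if a 20-tablet sample weighs $W_0 = 6.00$ g before tumbling and $W = 5.96$ g after, then

$$\%F = \left(1 - \frac{5.96}{6.00}\right) \times 100 \approx 0.67\%,$$

which is below the 1% limit, so the batch would pass.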
Thickness
The thickness of the tablets was measured using a Vernier caliper. The thickness of three tablets from each batch was measured and the mean was calculated (14,15).

Preparation of the coating solution
Excipients as given in Table No. 3 were dispensed; cellulose acetate and triacetin were mixed in the solvent with continuous stirring using a mechanical stirrer at 500 rpm, and the solution was stirred for 45 min until a completely clear solution formed.
Coating of the tablet
Coating was done using a conventional coating machine; a 1-liter coating pan was used for coating the tablets. The rotation speed of the pan was 25 rpm, the solution was sprayed onto the tablets using a spray gun from a distance of 15 cm, and the tablets were dried using a hot air dryer.
Fig. 2 Coated tablet.
Drilling of the tablet
The coated tablets were drilled using a mechanical micro-drill machine with 0.1 mm, 0.2 mm and 0.7 mm bits.
In-vitro Dissolution Study of Coated Tablets
The in-vitro dissolution tests for the coated tablets were performed in triplicate using an eight-station USP type II (paddle) apparatus (Electrolab) at 37 °C ± 0.5 °C and 50 rpm, with 500 ml of pH 6.8 phosphate buffer as the dissolution medium. Aliquots of 5 ml of dissolution fluid were removed at specified time intervals of one hour, and an equivalent amount of fresh dissolution fluid equilibrated at the same temperature was replaced. Aliquots were filtered through Whatman filter paper, suitably diluted using dissolution medium, and analyzed for the amount of metoprolol succinate released using a spectrophotometer (UV-1700, Shimadzu, Japan) at a wavelength of 225.0 nm. The amounts of drug present in the samples were calculated with the help of an appropriate calibration curve constructed from a reference standard, and the cumulative percentage drug release was calculated (16).
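The cumulative-release bookkeeping described above (withdrawing 5 ml aliquots and replacing them with fresh medium) requires correcting each measurement for the drug removed with earlier samples. A minimal sketch of that calculation is shown below; the concentration values and variable names are illustrative assumptions, not the study's data:

```python
# Cumulative % release with sampling-volume correction (illustrative sketch).
V_VESSEL_ML = 500.0    # dissolution medium volume (per the method above)
V_SAMPLE_ML = 5.0      # aliquot withdrawn and replaced with fresh medium
DOSE_UG = 47.5 * 1000  # metoprolol succinate per tablet, in micrograms

times_h = [1, 4, 8, 20]
conc_ug_per_ml = [12.0, 35.0, 58.0, 88.0]  # hypothetical measured values

cumulative_pct = []
withdrawn_ug = 0.0  # drug already removed from the vessel in earlier aliquots
for c in conc_ug_per_ml:
    in_vessel = c * V_VESSEL_ML            # drug currently in the medium
    total_released = in_vessel + withdrawn_ug
    cumulative_pct.append(100.0 * total_released / DOSE_UG)
    withdrawn_ug += c * V_SAMPLE_ML        # account for this aliquot's loss

for t, pct in zip(times_h, cumulative_pct):
    print(f"{t:>3} h: {pct:6.2f} % released")
```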
Kinetics of drug release
Zero-order model
If drug release from a controlled release formulation is stable in the fluid at the absorption site, has similar absorption sites, and is absorbed rapidly and completely after its release, then its rate of appearance in plasma will be governed by its rate of release from the controlled release formulation. Thus, when drug release follows zero-order kinetics, absorption will also be a zero-order process, and the concentration of drug in plasma at any given time can be given by equation (17).

On the basis of the micromeritic properties, it was confirmed that the drug metoprolol succinate possessed sufficient flowability to be used for direct compression.
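For reference, the zero-order model invoked above is conventionally written as follows (a standard textbook form with conventional symbols; the exact notation of the cited equation (17) may differ), where $Q_t$ is the cumulative amount of drug released at time $t$, $Q_0$ the initial amount in solution, and $K_0$ the zero-order release rate constant:

$$Q_t = Q_0 + K_0\,t$$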
FTIR spectra
A dry sample of the drug was mixed uniformly with potassium bromide, filled into the die cavity of the sample holder, and an IR spectrum was recorded using a diffuse reflectance FTIR spectrophotometer (Agilent Cary 630).

Preparation of pH 6.8 phosphate buffer
Phosphate buffer pH 6.8 was prepared by dissolving 28.80 g of disodium hydrogen phosphate and 11.45 g of potassium dihydrogen phosphate in sufficient water to produce 1000 ml.
Calibration curve of Metoprolol succinate in pH 6.8 phosphate buffer
Various drug concentrations (2-18 µg/ml) in simulated intestinal fluid (pH 6.8 phosphate buffer) were prepared, and the absorbance was measured at 225 nm. The results are shown in Table 8.
Bulk density
It has been stated that bulk density values less than 1.2 g/cm³ indicate good packing, while values greater than 1.5 g/cm³ indicate poor packing. The loose bulk density and tapped bulk density values for all the formulations varied in the range of 0.43 ± 0.01 g/cm³ to 0.46 ± 0.02 g/cm³. The values obtained lie within the acceptable range. These results may further influence properties such as compressibility and tablet dissolution.
Compressibility index
The percent compressibility of the granules was determined by Carr's index. The percent compressibility for all formulations lay within the range of 8.41 ± 0.01% to 11.35 ± 0.02%, indicating acceptable flow properties.
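For completeness, Carr's index is conventionally calculated from the bulk and tapped densities as:

$$CI\,(\%) = \frac{\rho_{\mathrm{tapped}} - \rho_{\mathrm{bulk}}}{\rho_{\mathrm{tapped}}} \times 100$$

Values below about 15% are generally taken to indicate good flow, consistent with the range reported here.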
Hardness
Tablet hardness was determined using a Monsanto hardness tester; the hardness of three tablets from each batch was determined. Hardness values of the formulations ranged from 4.2 to 4.5 kg/cm², indicating good tablet strength.
Friability
Tablet friability was determined with a Roche friabilator; the weight loss was calculated and expressed as percent friability. Friability values of all the formulations were less than 1%, indicating good tablet strength.
Weight variation
In the weight variation test, the pharmacopoeial limit for the percentage deviation of tablets weighing less than 324 mg is not more than 2.5%. The average percentage deviation of all tablets was found to be within this limit, and hence all formulations passed the weight variation test.
Thickness
Examination of the tablets from each batch showed a flat circular shape with no cracks and a white colour. The thickness of the tablets was determined using a Vernier caliper and ranged from 4.65 ± 0.01 to 4.67 ± 0.08 mm. All formulations showed uniform thickness.
Drug content
The drug content was found to be 98.19 to 99.32%.
In-vitro drug release studies
The dissolution rate was studied using 500 ml of pH 6.8 phosphate buffer, with samples taken at 1, 4, 8 and 20 h, using the USP type II (paddle) dissolution apparatus. The theoretical release profile calculation is important to evaluate the formulation with respect to release rates and to ascertain whether it releases the drug in the predetermined manner.

Fig. 9 In-vitro dissolution profile of the F2I, F2II and F2III formulations.
All the formulations were subjected to in-vitro dissolution studies, and the results are shown in Table No. 9 and Fig. No. 8. The results reveal that the release profiles of metoprolol succinate tablets containing varying proportions of mannitol and NaCl (batches F1, F2 and F3) showed drug release in pH 6.8 phosphate buffer as given in Table No. 9. Drug release is slow, which may be due to the osmotic pressure generated inside the tablet: release occurs as solvent penetrates through the dry matrix, followed by dissolution and diffusion of drug through the resultant orifice. This shows that the concentration of osmotic agent and the size of the orifice control the drug release.
The in-vitro release profiles of all the formulations were also compared and evaluated. The results showed that the drug release profile of formulation F2 resembled the drug release specified in the USP monograph of metoprolol succinate. Hence formulation F2, containing mannitol and sodium chloride in a 1:1 ratio, was considered the optimized formulation and was used for further study.
The optimized formulation was evaluated again using different orifice sizes of 0.1 mm, 0.3 mm and 0.7 mm. It was observed that the release of the drug from the tablet varied with the size of the orifice.
All the formulations were subjected to in-vitro dissolution studies, and the results are shown in Tables No. 9 and 10 and Figures No. 8 and 9. The results reveal the release profiles of metoprolol succinate tablets containing varying proportions of osmotic agent and varying orifice sizes (batches F1, F2, F3, F2I, F2II and F2III). Drug release from the tablet was constant with time, following zero-order kinetics, which may be due to the osmotic pressure generated inside the tablet by the osmotic agent. Drug release occurs as solvent penetrates through the orifice, the polymer gels, and the drug then dissolves and diffuses through the resultant layer as the osmotic agent hydrates. This shows that the concentrations of mannitol and NaCl affect the rate of drug release.
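As a minimal illustration of how the zero-order claim above can be checked numerically, cumulative release can be regressed linearly against time; the data points below are made up for demonstration and are not the study's dissolution results:

```python
# Linear fit of cumulative % released vs. time as a zero-order check (sketch).
import numpy as np

t = np.array([1.0, 4.0, 8.0, 20.0])    # sampling times, h
q = np.array([5.2, 19.8, 39.5, 97.0])  # cumulative % released (hypothetical)

K0, Q0 = np.polyfit(t, q, 1)  # zero-order model: Q(t) = Q0 + K0 * t

pred = Q0 + K0 * t
r2 = 1 - np.sum((q - pred) ** 2) / np.sum((q - q.mean()) ** 2)

print(f"K0 = {K0:.2f} %/h, intercept = {Q0:.2f} %, R^2 = {r2:.4f}")
# An R^2 close to 1 supports an approximately zero-order (linear) profile.
```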
CONCLUSION
It can be concluded from the present study and the results obtained that an osmotic pump for the release of metoprolol succinate can be developed as a once-daily (O.D.) dosage form. A zero-order release rate can be obtained by using cellulose acetate as the polymer and triacetin as the plasticizer.
Immunosuppressive property of submandibular lymph nodes in patients with head and neck tumors: differential distribution of regulatory T cells
Objective Different sensitizations and immune responses are thought to be induced in response to antigens at different mucosal sites between the oral floor and nose. The aim of this study was to investigate differences in the distributions of lymphocyte subsets in the submandibular (SM) and upper jugular (UJ) lymph nodes (LNs), which are supposed to be regional LNs of the oral floor and nasal mucosa, respectively. SMLNs and UJLNs were collected from patients with head and neck tumors who underwent surgical resection. The populations of T cells, Natural Killer (NK) cells, Natural Killer T (NKT) cells, regulatory T cells (Tregs) and dendritic cells (DCs) in LNs without metastasis were analyzed by flow cytometry. The expression of the high-affinity IgE receptor (FcεRI) on LN cells was also evaluated. Results The proportions of CD4+CD25+Foxp3+ Tregs, CD4+CD45RA−Foxp3high effector Tregs and FcεRIα+CD33+CD11c+ DCs were significantly larger in SMLNs compared with UJLNs, while those of CD3+ T cells, CD3−CD56+ NK cells, CD3+Vα24+Vβ11+ NKT cells, and CD123+CD303+ DCs did not show any significant differences between SMLNs and UJLNs. The differential distributions of CD4+CD25+Foxp3+ Tregs were observed regardless of tumor region, LN metastasis and clinical staging. These data indicate that SMLNs may have immunosuppressive properties compared with UJLNs. Electronic supplementary material The online version of this article (10.1186/s13104-018-3587-z) contains supplementary material, which is available to authorized users.
Introduction
The nasal mucosa and oral mucosa are located at the entrance of the respiratory tract and gastrointestinal tract, respectively, and are constantly exposed to various antigens. However, different sensitizations and immune responses are induced in response to antigens at these different mucosal sites. When the nasal mucosa is exposed to allergens, specific IgE production is evoked, and following repeated exposure to the allergen, typical nasal symptoms of allergic rhinitis (AR) are induced. In contrast, the oral cavity is exposed to various foreign substances, such as foods, bacteria, and viruses, but excessive immune responses, such as oral allergy syndrome, are normally restrained. Sublingual immunotherapy (SLIT), in which an allergen is applied to the oral floor in patients with various allergic diseases, attenuates allergic reactions [1,2]. Conversely, the administration of a viral vaccine, such as the influenza vaccine, to the nasal mucosa enhances immune responses against the virus [3,4]. It is therefore known that there are differential immune responses between the nasal mucosa and oral mucosa following antigen exposure.
Dendritic cells (DCs) capture antigens exposed to the mucosa and migrate to regional lymph nodes (LNs), where they present the antigen to lymphocytes [5]. Differential characteristics of DCs between the oral and nasal mucosa have been previously reported [6]. The different immunocompetent cells at various mucosal sites are expected to induce different immune responses.
It has also been reported that cultured DCs administered to the nasal submucosa quickly migrate to upper jugular lymph nodes (UJLNs), while those administered to the oral floor mucosa migrate to submandibular lymph nodes (SMLNs) [8,9]. A significant increase in the number of peripheral Natural Killer T (NKT) cells has been observed following the nasal submucosal administration of DCs pulsed with α-galactosylceramide (αGalCer), a specific ligand for invariant NKT cells, while this response was not observed following administration to the oral floor mucosa [8]. These results suggest that the differential immunological responses observed between the nasal and oral mucosa depend upon differences at each mucosal site, including the DCs residing there, as well as differences in the draining LNs.
In this study, we investigated the differential distribution of lymphocyte subsets between SMLNs and UJLNs collected from the surgical specimens of patients with head and neck tumors.
Patient samples
Thirty patients between 36 and 84 years old with head and neck tumors were enrolled in this study (Additional file 1: Table S1). All patients underwent surgery in the Department of Otorhinolaryngology and Head and Neck Surgery, Chiba University Hospital. We sorted the dissected LNs into different LN regions (SMLNs and UJLNs) immediately after the operation. A representative LN ranging from 7 to 10 mm in diameter from the SMLN and UJLN regions of each patient was collected, and split samples of approximately 5 mm³ from these lymph nodes were used for analysis. Approximately 1 × 10⁷ cells were collected from these segments, and the other segments were submitted for pathological diagnosis. LNs without metastasis were confirmed by pathological examination. The study was approved by the Ethics Committee of Chiba University. Written informed consent was obtained from each patient prior to participation in the study.
Lymph node mononuclear cells (LNMCs)
The collected LNs were placed in complete RPMI1640 medium, and then homogenized to generate LNMCs, which were filtered through a nylon mesh and washed twice in complete RPMI1640 medium.
Statistical analysis
Statistical analyses were performed using paired and unpaired t-tests. A value of p < 0.05 was considered statistically significant.
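For illustration, the paired comparison between the two LN regions of the same patients can be sketched as follows with SciPy; the subset proportions are hypothetical placeholders, not study data.

```python
from scipy import stats

# Hypothetical Treg proportions (%) in paired SMLN and UJLN samples
# from the same six patients (illustrative values only).
smln_treg = [4.1, 3.8, 5.2, 4.7, 3.9, 4.4]
ujln_treg = [2.9, 3.1, 3.6, 3.3, 2.8, 3.0]

# Paired t-test: the two LN regions come from the same patient.
t_paired, p_paired = stats.ttest_rel(smln_treg, ujln_treg)

# Unpaired t-test: e.g., comparing independent patient subgroups.
t_unpaired, p_unpaired = stats.ttest_ind(smln_treg, ujln_treg)

print(f"paired: t = {t_paired:.2f}, p = {p_paired:.4f}")
```

The paired variant is the appropriate choice whenever both measurements come from the same patient, as in the SMLN-versus-UJLN comparisons here.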
Increased CD4+CD25+Foxp3+ Tregs in SMLNs independent of clinical features
The proportions of CD4+CD25+Foxp3+ cells among LNMCs from SMLNs were significantly larger than those of UJLNs collected from patients with both oral cancer (p < 0.01; Fig. 2a) and cancer at other sites (p < 0.01).
Discussion
In this study, the immunological differences of LNMCs between SMLNs and UJLNs were investigated, and our data suggested that SMLNs had more immunosuppressive properties than UJLNs. We first examined the proportion of lymphocytes between SMLNs and UJLNs, but no significant differences in CD3+ T cells, CD3−CD56+ NK cells or CD3+Vα24+Vβ11+ NKT cells were observed. However, significantly higher proportions of CD4+CD25+Foxp3+ Tregs were detected in SMLNs than in UJLNs. Recently, CD4+Foxp3+ Tregs were reported to be further subdivided into functionally distinct subpopulations based on CD45RA and Foxp3 expression [10]. CD45RA−Foxp3high Tregs express more CTLA-4 and IL-2 receptor (CD25) on their cell surfaces and have potent immunosuppressive activity toward T cells, and are thus called effector Tregs [11,12]. In this study, a significantly higher proportion of CD4+CD45RA−Foxp3high effector Tregs was found in SMLNs compared with UJLNs.
The LNMCs from the SMLNs and UJLNs used in this study were collected from surgically resected specimens from patients with head and neck tumors. The frequency of Tregs in the peripheral blood has been reported to be elevated in patients with cancer, including those with head and neck cancer [13,14]. These results might reflect increased regulatory T cells in LNs with metastasis and in primary tumors of cancer patients, which were associated with clinical stage and the presence of lymph node metastasis [13]. Therefore, we examined the influence of tumor-related clinical features. However, significant differences in the proportion of CD4+CD25+Foxp3+ Tregs between SMLNs and UJLNs remained, regardless of the clinical features examined. This suggested that increased Tregs were characteristic of SMLNs compared with UJLNs. In a previous study, DCs administered to the nasal mucosa migrated to UJLNs, while those administered to the oral floor mucosa migrated to SMLNs [8,9]. Additionally, a significant increase in the number of peripheral NKT cells has been observed after administering DCs pulsed with the NKT cell ligand αGalCer into the nasal mucosa. However, these activities were not detected after administering DCs into the oral floor mucosa [8]. Although the mechanisms remain unknown, it has been suggested that the NKT cell activation by DCs that migrated from the oral floor mucosa might be inhibited by the increased Tregs in SMLNs, which are the draining LNs of the oral floor mucosa.
The induction and expansion of Tregs have been shown to be controlled by DCs [15,16]. Human DCs have two major subtypes, conventional DCs (cDCs) and plasmacytoid DCs (pDCs) [17]. cDCs can stimulate T cells, evoking Th1 or Th2 responses depending on the inflammatory environment [18-20]. pDCs produce type I interferon in response to infection [18], and it has been suggested that they induce T cell tolerance [18,20]. In this study, although we analyzed CD123+CD303+ pDCs and CD33+CD11c+ cells, which included cDC, macrophage and monocyte populations, no differences were found between SMLNs and UJLNs.
The oral mucosa is known to harbor numerous DCs that express FcεRI [7,21], and allergens can be taken up via IgE molecules bound to the FcεRI expressed on oral DCs [22]. In this study, a larger proportion of FcεRI-expressing CD33+CD11c+ cells was observed in submandibular LNMCs compared with upper jugular LNMCs. These FcεRI-expressing cDCs might migrate to the SMLNs from the oral mucosa, where they induce Tregs and immune tolerance to various commonly encountered antigens or to allergens administered by sublingual immunotherapy [7,23].
Conclusion
In this study, we identified a differential distribution of lymphocyte subsets between SMLNs and UJLNs. The proportions of CD4+CD25+Foxp3+ Tregs, CD4+CD45RA−Foxp3high effector Tregs and FcεRI-expressing CD33+CD11c+ cells in SMLNs were larger than those in UJLNs. SMLNs may have more potent immunosuppressive properties than UJLNs.
Limitations
In this study, we examined the difference in the distribution of immune cells in two LN regions. However, a limitation of this study is that the samples were restricted to tumor patients who underwent surgery, because it is unethical to collect LNs from healthy subjects. It is possible that the distribution of immune cells in LNs differs between patients with head and neck tumors and healthy subjects, but it is difficult to compare immune cell profiles between these groups. Additionally, the detailed interactions between Tregs and FcεRI-expressing CD33+CD11c+ cells, and their suppressive functions and effects on immune cells in lymph nodes, need further investigation.
Graph theoretical analysis of evoked potentials shows network influence of epileptogenic mesial temporal region
Abstract: It is now widely accepted that seizures arise from the coordinated activity of epileptic networks, and as a result, traditional methods of analyzing seizures have been augmented by techniques like single-pulse electrical stimulation (SPES) that estimate effective connectivity in brain networks. We used SPES and graph analytics in 18 patients undergoing intracranial EEG monitoring to investigate effective connectivity between recording sites within and outside mesial temporal structures. We compared evoked potential amplitude, network density, and centrality measures inside and outside the mesial temporal region (MTR) across three patient groups: focal epileptogenic MTR, multifocal epileptogenic MTR, and non-epileptogenic MTR. Effective connectivity within the MTR had significantly greater magnitude (evoked potential amplitude) and network density, regardless of epileptogenicity. However, effective connectivity between MTR and surrounding non-epileptogenic regions was of greater magnitude and density in patients with focal epileptogenic MTR compared to patients with multifocal epileptogenic MTR and those with non-epileptogenic MTR. Moreover, electrodes within focal epileptogenic MTR had significantly greater outward network centrality compared with electrodes in non-epileptogenic regions outside the MTR and with multifocal and non-epileptogenic MTR. Our results indicate that the MTR is a robustly connected subnetwork that can exert an overall elevated propagative influence over other brain regions when it is epileptogenic. Understanding the underlying effective connectivity and roles of epileptogenic regions within the larger network may provide insights that eventually lead to improved surgical outcomes.
However, there is a growing appreciation that to be effective, these therapies must be guided not only by localization of the ictal onset zone, but also by a more comprehensive map of its effective connectivity with sites in a broader epileptogenic network responsible not only for seizure propagation, but also for ictogenesis.
Single-pulse electrical stimulation (SPES) has been increasingly used to estimate connectivity between sites recorded with intracranial electrodes for the surgical management of intractable focal epilepsy.
SPES elicits electrophysiological responses in regions that are effectively connected to the stimulation site. Compared to other connectivity measures such as resting functional magnetic resonance imaging and diffusion tensor imaging, SPES provides additional directional information about the corticocortical or subcortical interactions. Prior studies demonstrate that SPES can delineate epileptogenic networks by evoking spontaneous delayed responses (Valentín et al., 2005) or eliciting large amplitude evoked potentials (Enatsu, Jin, et al., 2012;Iwasaki et al., 2010;Parker et al., 2018) and time-frequency single pulse-evoked fast ripples (van 't Klooster et al., 2011). A few investigators have applied graph analytics to quantitatively interpret the directed connections of the seizure networks (Keller et al., 2014;Parker et al., 2018;van Blooijs, Leijten, van Rijen, Meijer, & Huiskamp, 2018;Zhao et al., 2019) with highly variable findings, most likely due to the heterogeneous spatial sampling and limited stimulation in a small number of subjects.
In the present study, we utilize SPES to explore alterations of network effective connectivity in eighteen patients with intractable mesial temporal lobe epilepsy (mTLE), the most common type of surgically remediable epilepsy (Engel Jr, 2001). By employing a graph-theoretical approach to analyze the causal interactions mapped by SPES, we aimed to understand the role of mesial temporal structures within a broader epileptogenic network, which we assumed could change with varying degrees of epileptogenicity across patients and across hemispheres within the same patient. We propose that identifying how epileptogenic and non-epileptogenic mesial temporal structures differ in their influence within the effective network can help characterize how electrophysiological interactions are altered between epileptogenic regions and the rest of the brain. This network-based understanding can offer insight into mechanisms underlying seizure generation and propagation, and potentially provide a mechanistic framework for a targeted treatment approach in determining the importance of regions within the context of the epileptogenic network.
Patients
We studied eighteen patients who were undergoing intracranial EEG monitoring prior to surgery for treatment of drug-resistant epilepsy at the Johns Hopkins Epilepsy Center from January 2016 through March 2020. All patients had EEG implantation and stimulation that included at least one mesial temporal lobe. None of the patients had visible lesions seen on the mesial temporal structures such as a tumor, dysplasia, or mesial temporal sclerosis. The study was approved by the Johns Hopkins School of Medicine Institutional Review Board (IRB 00247294) and was conducted using guidelines established in accordance with the Code of Ethics of the World Medical Association (1964, Declaration of Helsinki).
Electrode placement
Patients were implanted with surface electrocorticography (ECoG) electrode grids and/or stereoelectroencephalography (S-EEG) electrodes. The type, number, and location of the electrodes were determined by the suspected location of the epileptogenic zone in each patient according to noninvasive tests including clinical seizure history, neuroimaging, neuropsychology, and scalp EEG recordings.
Patients underwent the S-EEG procedure if: (a) the suspected seizure onset zone (SOZ) was in deep-seated locations such as the mesial structures of the temporal lobe but imaging (MRI or PET) was nonlesional, (b) previous subdural invasive studies had failed to clearly outline the exact location of the SOZ, (c) there was a need for bilateral exploration for possible bilateral independent seizure onset, or (d) there was concern for dual pathology or multifocal epilepsy.
S-EEG depth electrodes (AdTech, Racine, WI) were implanted stereotactically using the ROSA robotic assistant device (Medtech, Montpellier, France) as part of standard patient care. These depth electrodes were multi-contact and consisted of 6-10 cylindrical 2.3 mm long platinum contacts separated by 5 mm between the centers of adjacent electrodes of the same bundle.
Electrode localization
Final electrode locations were obtained by combining information from post-implantation CT and brain MRI using BioImage Suite (Duncan et al., 2004). Using FreeSurfer (Fischl, 2012) parcellation and visual verification with post-implant MRI, electrodes within amygdala, hippocampus, entorhinal cortex, or parahippocampal gyrus were identified as our region of interest within the mesial temporal lobe and will be referred to here as the mesial temporal region (MTR).
EEG recording and determination of seizure onset
ECoG and S-EEG recordings for clinical review were performed with a Neurofax EEG-1100 amplifier (Nihon Kohden, Tokyo, Japan), digitized at 2 kHz with 16-bit resolution, and 0.016-300 Hz band-pass filtered.
All patients in the current sample had at least one typical seizure captured during their stay in the Epilepsy Monitoring Unit. Ictal EEG data was reviewed by at least two board-certified epileptologists (NEC, JYK) to identify the SOZ. Patients were divided into three groups based on the involvement of the amygdala, hippocampus, entorhinal cortex, and parahippocampal gyrus in the SOZ: (a) focal epileptogenic MTR: SOZ is isolated to unilateral MTR, (b) multifocal epileptogenic MTR: unilateral MTR is involved in seizure onset but SOZ extends beyond unilateral MTR, (c) non-epileptogenic MTR: MTR is not involved in seizure onset but is ipsilateral to the SOZ.
SPES data acquisition and pre-processing
ECoG and S-EEG signals for SPES analysis were recorded using a NeuroPort amplifier (Blackrock Microsystems, Salt Lake City, UT), filtered (analog Butterworth antialiasing filters: first-order high-pass at 0.3 Hz, third-order low-pass at 7500 Hz), digitized at 16-bit resolution, and down-sampled to 1 kHz with a digital antialiasing filter. SPES with a CereStim R96 (Blackrock Microsystems, Salt Lake City, UT) was applied in a bipolar manner to adjacent electrodes. At each stimulation site, 50 biphasic pulses with 0.3 ms pulse width were applied ( Figure 1a) with an interstimulus interval (ISI) of 1 or 2 s. Both intervals provided sufficient time for response channels to return to baseline.
Stimulation current intensity was first set using a titration procedure, increasing in 0.5-1 mA increments while watching for after-discharges, until evoked potentials (EPs) were seen consistently during real-time visualization, up to a maximum of 10 mA, with charge density well within safety limits (<50 μC/cm²) (Gordon et al., 1990). The mean and variance of the final stimulation current intensities, separated by location with respect to MTR and SOZ in each patient, are detailed in Table S1.
Following rejection of channels with excessive noise, electrode channels were re-referenced using a bipolar montage. An artifact removal procedure that preserves the time-frequency composition of the surrounding signal (Crowther et al., 2019;Huang et al., 2019) replaced artifactual data from −5 to 10 ms relative to stimulus with reversed, tapered copies of the signals surrounding this period.
Figure 1. Experimental methods. (a) Single-pulse electrical stimulation (SPES) using a biphasic 0.3 ms pulse is applied in a bipolar manner to adjacent electrodes for 50 trials at an interstimulus interval (ISI) of 1 or 2 s. Example pulse waveforms applied to the stimulating electrodes are pictured. (b) Raw signal recordings of each channel are pre-processed before response analysis. After re-referencing using a bipolar montage, stimulation artifacts within −5 to 10 ms relative to stimulus are removed, and a 50 Hz low-pass filter is applied. Trials are selected using analysis windows of −500 to 1,500 ms for 2 s ISI (−250 to 750 ms for 1 s ISI). (c) Trials are centered using the baseline (−500 to −5 ms for 2 s ISI, −250 to −5 ms for 1 s ISI) mean and averaged. The average response is normalized by the baseline standard deviation, and the amplitude of the N1 peak within 10 to 50 ms is used to quantify the signal's Z-score. (d) For each stimulated pair of electrodes, average responses with a Z-score greater than 6 are considered significant evoked potentials, representing a causal electrophysiological relationship between the stimulation and response sites. Significant responses to one stimulation are shown here as an example, colored according to the magnitude of the Z-score. (e) With multiple pairs of electrodes stimulated, the Z-score responses of all channels from each stimulation block become one row in the adjacency matrix of the weighted, directed graph of effective connectivity. The magnitude of the causal relationship between a stimulating site and response site is quantified by the Z-score at the row of the stimulating site and the column of the response site, shown here by color intensity. (f) Subnetworks are grouped based on location of stimulating and response sites with respect to the mesial temporal region (MTR): within (stimulate MTR, response in MTR), in (stimulate outside MTR, response in MTR), out (stimulate MTR, response outside MTR), or outside (stimulate outside MTR, response outside MTR). The properties of each subnetwork are calculated and compared to those of the others. MTR, mesial temporal region.

Following artifact removal, the signals were low-pass filtered (50 Hz).
All preprocessing and analysis of SPES results were performed using custom scripts in Matlab (R2019b, MathWorks, Natick, MA).
Evoked potential calculations
For each pair of electrodes stimulated, an analysis window time-locked to the stimulus (−500 ms to 1,500 ms for 2 s ISI, −250 ms to 750 ms for 1 s ISI) was used to compute the average responses (Figure 1b). The mean of the pre-stimulus baseline (−500 ms to −10 ms for 2 s ISI, −250 to −10 ms for 1 s ISI) was subtracted to baseline-center each response before averaging across all responses for each channel. The average response was divided by the standard deviation of the pre-stimulus baseline to obtain a normalized average response. The typical morphology of EPs comprises early (N1) and late (N2) negative deflections, typically occurring between 10 to 50 ms and 70 to 300 ms post-stimulus, respectively (Matsumoto et al., 2004).
Peak detection was used to identify the latency, magnitude, and polarity of N1 and N2 potentials in the normalized average response for each channel (Figure 1c). Since the N1 potential is considered to represent direct excitatory connectivity, we used the absolute value of the N1 peak amplitude to quantify the magnitude of the evoked response and defined this as the channel's Z-score. Responses with a Z-score greater than 6 were considered significant (Keller et al., 2011), and we used these normalized response amplitudes to quantify the effective connectivity between the stimulation and response sites ( Figure 1d). The Z-scores between all stimulation and response sites were used as edge weights between the electrodes as nodes in weighted, directed networks for graph theoretical analysis ( Figure 1e).
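A minimal Python sketch of this quantification is shown below (the study itself used custom Matlab scripts); the array layout, sampling rate, and function name are illustrative assumptions.

```python
import numpy as np

def n1_zscore(trials, fs=1000):
    # trials: (n_trials, n_samples) stimulus-locked sweeps, assumed to
    # span -500 to 1500 ms (2 s ISI) sampled at fs Hz.
    t = np.arange(trials.shape[1]) / fs - 0.5      # seconds, 0 = stimulus
    base = (t >= -0.5) & (t <= -0.010)             # pre-stimulus baseline
    # Baseline-center each sweep, then average across trials.
    centered = trials - trials[:, base].mean(axis=1, keepdims=True)
    avg = centered.mean(axis=0)
    # Normalize by the standard deviation of the pre-stimulus baseline.
    z = avg / avg[base].std()
    # N1 peak: largest absolute deflection 10-50 ms post-stimulus.
    n1 = (t >= 0.010) & (t <= 0.050)
    return np.abs(z[n1]).max()

# Responses with n1_zscore(...) > 6 would be kept as significant evoked
# potentials, becoming edge weights of the effective connectivity graph.
```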
Evoked potential distribution, strength, and density
To test how epileptogenicity of the MTR affects the distributions of its effective connections, we compared the ratio of responses inside and outside the MTR and the proportion of possible responses in each region when stimulating from within the MTR ipsilateral versus contralateral to seizure onset. This was done for each bilaterally stimulated patient, and statistical significance was calculated using Fisher's exact test. Pooled responses over each patient group were also used to compare the MTR ipsilateral versus contralateral to seizure onset at the group level, using chi-squared tests.
Stimulation-response pairs were classified into four different categories, based on the location of stimulation and response sites: within, stimulation and response both inside MTR; out, stimulation inside MTR and response outside MTR; in, stimulation outside MTR and response inside MTR; outside, stimulation and response both outside MTR (Figure 1f). Connections involving the SOZ outside of MTR were not included, so that the only variance in epileptogenicity occurs within MTR. For bilaterally stimulated patients, connections were also grouped into these categories with respect to the contralateral non-epileptogenic MTR. We analyzed the Z-scores and weighted density of the subnetworks formed by each connection type described above.
Weighted density is calculated as the sum of the connection weights (Z-scores) divided by the number of possible connections given the number and location of stimulations and can be considered a measure of the average Z-score of each subnetwork.
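In code, the weighted density of a subnetwork reduces to a few lines. The sketch below assumes a dense matrix of Z-scores with zeros for non-significant or untested connections, a simplification of the study's bookkeeping (it ignores, e.g., bipolar stimulation pairs).

```python
import numpy as np

def weighted_density(z, stim_mask, resp_mask):
    # z: (n, n) array; z[i, j] is the significant Z-score from
    # stimulating node i and recording at node j (0 otherwise).
    # stim_mask / resp_mask: boolean masks selecting the subnetwork's
    # stimulation rows and response columns.
    sub = z[np.ix_(stim_mask, resp_mask)]
    possible = stim_mask.sum() * resp_mask.sum()
    return sub.sum() / possible
```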
Centrality measures
Graph theoretical measures of centrality can quantify the importance of nodes within the context of a larger network, and each type of centrality can inform a different aspect of each node's network properties. Degree centrality has been commonly used to characterize graphs derived from EPs (Keller et al., 2014;Parker et al., 2018;van Blooijs et al., 2018;Zhao et al., 2019). The two variants, indegree and outdegree, are calculated as the sum of incoming or outgoing edge weights, respectively, for each node in the network. Since the edge weights in effective networks are the EP magnitudes, these measures can capture both the distribution and strength of the direct electrophysiological connections of each node. Hyperlink-induced topic search (HITS) centrality (Kleinberg, 1999) gives nodes authority and hub scores, where important authorities receive projections from important hubs, and important hubs project to important authorities. While originally designed to find a small number of authoritative sources and hubs of information, these measures may be applied to effective networks to identify a small number of nodes that are particularly responsible for projecting or receiving the largest EP. Katz centrality (Katz, 1953) factors in edge weights between nodes more than one edge away and gives nodes more importance for having connections with other important nodes. This can provide a unique characterization of the effective network because it allows indirect connections to influence a node's centrality. The two forms, Katz-receive and Katz-broadcast, rely on incoming and outgoing connections, respectively, to quantify a node's ability to receive or broadcast information.
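As an illustration (not the study's Matlab pipeline), these measures can be computed with the networkx package on a toy Z-score adjacency matrix; the weights and the Katz attenuation factor below are placeholder assumptions.

```python
import numpy as np
import networkx as nx

# Toy adjacency matrix: entry (i, j) is the Z-score of the response at
# node j evoked by stimulating node i (placeholder values, not data).
rng = np.random.default_rng(0)
z = rng.random((8, 8)) * 10
np.fill_diagonal(z, 0.0)

G = nx.from_numpy_array(z, create_using=nx.DiGraph)

indegree = dict(G.in_degree(weight="weight"))    # summed incoming weights
outdegree = dict(G.out_degree(weight="weight"))  # summed outgoing weights
hubs, authorities = nx.hits(G)                   # HITS hub/authority scores

# Katz centrality on G scores incoming influence ("receive"); on the
# reversed graph it scores outgoing influence ("broadcast"). The
# attenuation factor must satisfy alpha < 1/lambda_max to be meaningful.
katz_receive = nx.katz_centrality_numpy(G, alpha=0.01, weight="weight")
katz_broadcast = nx.katz_centrality_numpy(G.reverse(), alpha=0.01,
                                          weight="weight")
```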
Each centrality measure was calculated for every electrode node in each patient's network and normalized prior to comparison across patients. Degree centralities were normalized by the number of possible connections each node could have received or projected, and HITS and Katz centralities were each normalized to unit vectors. For comparisons, electrodes for each patient were grouped by location: within the MTR ipsilateral to seizure onset, within the non-epileptogenic MTR contralateral to the seizure onset, or in non-epileptogenic regions outside of both MTRs. Since nodes that were not stimulated will have insignificant outgoing centrality, only stimulated electrode centrality measures were included.
Statistical analysis
To investigate how epileptogenicity affects the effective connectivity of the MTR with the rest of the brain, we compared Z-scores and weighted density across the factors patient group and connection type (and laterality for bilateral patients), and centrality across the factors patient group and electrode location. The Z-scores of every significant response and the centrality measures of every stimulated node were each pooled within patient group and compared using non-parametric tests due to the non-normality of the data. This analysis also allows for analogous comparisons to similar studies that used non-parametric methods on pooled patient evoked response amplitudes and node centrality (Parker et al., 2018;van Blooijs et al., 2018). Grouped Kruskal-Wallis tests were used to determine the presence of simple effects for each factor, followed by post hoc Dunn's tests for pairwise differences in medians when significant simple effects were observed.
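A sketch of the pooled comparison with SciPy is shown below; the Z-score samples are hypothetical, and the post hoc Dunn's tests are only indicated in a comment.

```python
from scipy import stats

# Hypothetical pooled Z-scores for the four connection types within one
# patient group (illustrative values only).
within = [9.1, 12.4, 8.7, 15.2, 11.0]
out = [6.5, 7.2, 8.1, 6.9]
inward = [7.0, 6.3, 7.7, 8.4]
outside = [6.1, 6.8, 7.3, 6.4, 7.0]

# Non-parametric test for a simple effect of connection type.
h_stat, p_value = stats.kruskal(within, out, inward, outside)
print(f"H = {h_stat:.2f}, p = {p_value:.4f}")
# If p < .05, post hoc Dunn's tests for pairwise median differences
# would follow (e.g., via the scikit-posthocs package).
```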
A similar procedure was used for analyzing density and averaged centrality values. For each metric, the data was fit with a linear mixed effects model with fixed effects of patient group and electrode location for average centrality, and fixed effects of patient group and connection type (and laterality) for the densities, both with patient number as a random effect. The residuals of each model were checked for reasonable normality using Shapiro-Wilk tests. Interaction and main effects were tested using likelihood ratio tests (LRT) comparing full models to reduced models. Because of our a priori interest in how each patient group and location affects each connectivity metric, tests for simple effects using LRTs were conducted regardless of a statistically significant interaction.
If a significant simple effect was observed, post hoc pairwise comparisons were performed to investigate differences between groups. These were calculated with two-tailed t-tests using the pooled estimate of standard error from the model and containment degrees of freedom.
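The density model described above might look roughly as follows with statsmodels; the synthetic data frame, the additive formula, and the ML-based likelihood ratio test are illustrative assumptions, not the authors' exact model code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

# Synthetic stand-in data: one weighted-density value per patient and
# connection type (purely illustrative, not study data).
rng = np.random.default_rng(0)
rows = [{"patient": p,
         "group": ["focal", "multifocal", "nonep"][p % 3],
         "ctype": c,
         "density": 7 + rng.normal()}
        for p in range(12) for c in ["within", "out", "in", "outside"]]
df = pd.DataFrame(rows)

# Fixed effects of patient group and connection type, random intercept
# per patient. ML fits (reml=False) make the LRT between nested models valid.
m_full = smf.mixedlm("density ~ group + ctype", df,
                     groups=df["patient"]).fit(reml=False)
m_red = smf.mixedlm("density ~ group", df,
                    groups=df["patient"]).fit(reml=False)
lr = 2 * (m_full.llf - m_red.llf)
p_value = stats.chi2.sf(lr, df=3)  # 3 extra fixed-effect parameters
```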
Evoked potential distribution
For every bilaterally stimulated patient, the ratio of responses observed within and outside the MTR was not significantly different when stimulating the MTR ipsilateral versus contralateral to seizure onset (Fisher's exact tests, p > .05) (Table S2). This was also seen when pooling over patient group (chi-squared tests, p > .05). Some differences in the proportion of possible responses in each region were, however, observed (Table S3). While these differences may be attributed to heterogeneous spatial sampling, altogether these results largely indicate that the relative distribution of effective connections produced from stimulating MTR is independent of the epileptogenicity of the MTR.
Evoked potential strength
Among each patient group, median Z-score differed significantly across connection type (Kruskal-Wallis tests, p < .001), and connections within the MTR of each group had greater median Z-scores than those of the other connection types (Dunn's tests, p < .001) (Figure 2a, Table S4).
This is also seen in contralateral non-epileptogenic MTR (Dunn's tests, p < .05) (Figure 2b, Table S4). Since the relative strength of the effective connections within every observed MTR persists regardless of epileptogenicity, these responses may be indicative of the high physiological connectivity between mesial temporal structures.
There was also a significant difference in median Z-scores across patient groups for connections involving the MTR ipsilateral to seizure onset (Table S4). Since no significant differences were observed across the contralateral non-epileptogenic MTR, this may suggest that the variance in Z-scores across the MTR ipsilateral to seizure onset was due to the varying epileptogenicity across patient groups.
Evoked potential weighted density
Similar to the Z-score results, there was a significant difference in weighted density across connection type within each patient group (LRT, p < .001), and connections within the MTR had significantly greater weighted densities than all other connection types for each patient group (t-tests, p < .01) (Figure 3a, Table S4). This is also true of the contralateral non-epileptogenic MTR (t-tests, p < .05) (Figure 3b, Table S4), which fits with the Z-score results of a highly dense and strong effective network within MTR that persists across epileptogenicity.
The weighted densities of connections out of focal epileptogenic MTR were significantly greater than those of connections out of both multifocal epileptogenic and non-epileptogenic MTR (Table S4). The densities of the contralateral non-epileptogenic MTR did not significantly differ across groups, again suggesting that the differences across group in the ipsilateral case were due to the epileptogenicity of the MTR.
Pooled network centrality
A significant difference in the median pooled centrality of nodes within versus outside mesial temporal structures was observed in the focal epileptogenic MTR group for every centrality measure, and in the multifocal epileptogenic MTR group for every measure except Katz-receive (Kruskal-Wallis tests, p < .05) (Figure 4a, Table S4). In each case, the median centrality of electrodes within the MTR was significantly greater than those outside (Dunn's tests, p < .05), and the difference was greater for the focal group than for the multifocal group. The median centrality of electrodes within the MTR differed significantly across patient group for outdegree, authority, and hub (Kruskal-Wallis tests, p < .05). The median centrality of electrodes within the focal epileptogenic MTR was significantly greater than that within the multifocal and non-epileptogenic MTR (Table S4).

Figure 3. Weighted density comparisons of effective connectivity subnetworks. The weighted densities (sum of significant connections' Z-scores divided by total possible connections) of effective connectivity subnetworks within, out, in, and outside relative to MTR are calculated for each patient and fit with linear mixed effects models to compare effects of connection type and patient group, followed by post hoc tests for pairwise comparisons.
When the contralateral non-epileptogenic MTR was included for the bilaterally stimulated patients, there was still a significant effect of location on centrality for the focal epileptogenic MTR group in every centrality measure (Kruskal-Wallis tests, p < .05) (Figure 4b, Table S4).
In each measure, the median centrality within the ipsilateral MTR was still greater than the median centrality outside, while only the median authority and Katz-receive centrality within the focal epileptogenic MTR was significantly greater than the contralateral MTR (Dunn's tests, p < .05). Unexpectedly, in Katz-receive and Katz-broadcast centrality, the contralateral non-epileptogenic MTR within the multifocal patient group was greater than the outside non-epileptogenic electrodes. There was a significant difference in median centrality across patient group among the MTR ipsilateral to seizure onset for every centrality except Katz-broadcast (Kruskal-Wallis tests, p < .01), and the electrodes within the focal epileptogenic MTR had greater median centrality than those in both the multifocal and non-epileptogenic MTR ipsilateral to seizure onset (Dunn's tests, p < .01) (Figure 4b, Table S4). Centrality within MTR was significantly different across patient group for outdegree and hub centrality (LRT, p < .05). Accordingly, average centrality within focal epileptogenic MTR was significantly greater than that within multifocal and non-epileptogenic MTR for both outdegree and hub [outdegree t(15) = −3.30, p = .0085 and t(15) = −3.23, p = .0085; hub t(15) = −2.52, p = .0355 and t(15) = −2.52, p = .0355].
While similar trends were seen for other centrality measures, they did not reach statistical significance. Katz-receive and Katz-broadcast centrality within each group were significantly different across location (LRT, p < .05), but post hoc analysis did not result in significant pairwise differences.

Figure 5. Graph centrality measures of nodes averaged for each patient. The centralities of nodes are averaged for each patient by location (inside MTR ipsilateral to seizure onset, inside MTR contralateral to seizure onset, outside MTR in non-epileptogenic tissue) and fit with linear mixed effects models to compare effects of location and patient group, followed by post hoc tests for pairwise comparisons. (a) Analysis using singular MTR ipsilateral to seizure onset zone across all patients (n = 18). (b) Analysis using MTR ipsilateral and contralateral to seizure onset zone in bilaterally stimulated patients (n = 10). *p < .05; **p < .01. MTR, mesial temporal region.
Outcome prediction
To see how these connectivity metrics of the effective network could offer insight into the role the MTR plays in seizure onset, we looked at outcomes of patients in the multifocal and non-epileptogenic MTR groups whose centrality measures most resembled those seen in the focal epileptogenic MTR group. For the patients in the multifocal group, this included P1, P2, P7, and P11. While P7 and P11 have not had surgery yet, P1 had an amygdalohippocampectomy with a one year outcome of Engel Class II and P2 had an amygdalohippocampectomy and temporal lobectomy with a one year outcome of Engel Class I, suggesting that the MTR was more important within the multifocal SOZ and larger epileptic network. The patient in the non-epileptogenic group whose centrality most resembled this pattern is P4, and while the SOZ did not include electrodes within mesial temporal structures, this patient's surgery ablated the amygdala in addition to the initial clinically-labeled SOZ in temporal pole.
DISCUSSION
In summary, our data indicate that there are at least three different seizure networks that can be defined by the strength of inward and outward connectivity with respect to the mesial temporal region: focal, multifocal, and non-epileptogenic networks. Epileptogenicity appears to be associated with stronger and denser inward and outward connectivity. Specifically, an MTR that is involved in focal seizure onset can better synchronize epileptiform discharges and has a stronger outward influence on the network compared to an MTR that is not involved in seizure onset or is included in a broader seizure onset.
We applied graph theoretical analyses to demonstrate that the epileptogenic MTR that is involved in focal seizure onset has more effective connectivity with the rest of the brain when compared to non-epileptogenic MTRs, potentially indicating an elevated propagative influence over the network. Overall, our findings suggest that the focal epileptogenic MTR plays a critical role in ictogenesis and seizure propagation primarily due to the density of its connections with the remainder of the brain, with increased susceptibility to network perturbations and widespread influence over the effective network.
Although the use of SPES to understand large scale epileptogenic networks is limited, previous studies investigating evoked responses in mesial temporal structures have revealed robust effective connectivity between these structures and functionally related regions in temporal neocortex and limbic structures (Catenoix, Magnin, Mauguière, & Ryvlin, 2011;David et al., 2013;Enatsu et al., 2015;Lacruz, García Seoane, Valentin, Selway, & Alarcón, 2007;Lacuey et al., 2014;Mégevand et al., 2017;Novitskaya, Dümpelmann, Vlachos, Reinacher, & Schulze-Bonhage, 2020; Wilson, Isokawa, Babb, & Crandall, 1990). We similarly observed that the evoked potentials within the mesial temporal structures (amygdala, hippocampus, entorhinal cortex, and parahippocampal gyrus) consistently showed greater magnitude and density compared to connections throughout the rest of the brain, regardless of the epileptogenicity of the MTR. This is consistent with our understanding that these structures are part of a larger limbic circuit; indeed, coherent function of MTR is necessary for tasks such as memory formation and olfactory processing (West & Doty, 1995). Matsumoto, Kunieda, and Nair (2017) proposed that the epileptic condition may increase the strength of functional connections without altering the distribution of these connections. Several studies support similar conclusions; greater evoked responses can be produced from stimulating within the seizure onset zone and higher amplitude responses are evoked within seizure onset zone from stimulation outside (Enatsu, Jin, et al., 2012;Iwasaki et al., 2010;Parker et al., 2018). Accordingly, we found that the response amplitude and weighted density of connections within focal and multifocal epileptogenic MTR was generally greater than that within nonepileptogenic MTR. However, we observed that the relative distribution of responses within and outside the MTR did not vary significantly when comparing stimulation of the epileptogenic MTR to the contralateral non-epileptogenic MTR within the same patient. This is congruent with previous studies that have shown similar connection distributions between epileptogenic and non-epileptogenic regions (Lacruz et al., 2007;Wilson et al., 1990). While we also noted that epileptogenicity may alter the proportion of possible responses outside the MTR, future analysis incorporating spatially homogenous sampling is needed to confirm this preliminary finding.
There is now widespread acceptance that focal epilepsy is not limited to a small region of the brain, but is a phenomenon arising from aberrant large-scale connectivity. Spencer (2002) proposed in her seminal paper the following concept: "vulnerability to seizure activity in any one part of the network is influenced by activity everywhere else in the network, and that the network as a whole is responsible for the clinical and electrographic phenomena that we associate with human seizures". Based on the graph theoretical analysis of network centrality, we conclude that the MTR is both a site of most relevant propagation of activity while also acting, to a lesser degree, as a receiver of activity within the network. While we did observe larger magnitude and density of responses within focal epileptogenic MTR in some cases, we also noted that the centrality measures that quantify the ability to receive activity (indegree, authority, Katz-receive) were not as significantly different across different levels of epileptogenicity. However, for each centrality measure that quantified the ability to propagate activity (outdegree, hub, Katz-broadcast), only the focal epileptogenic MTR had significantly greater average centrality than the outside non-epileptogenic tissue. The pooled analysis showed a similar difference for multifocal epileptogenic MTR but at a lesser magnitude. This suggests that epileptogenicity of the MTR is associated with an elevated propagative influence over the effective network that can increase as the seizure onset zone is more localized to mesial temporal structures. The increased role in both receiving and propagating activity makes the epileptogenic MTR an unstable node in the network, because not only can it respond to perturbations in the network due to its greater excitability, it is also more capable of propagating this activity throughout the brain. Our finding is consistent with previous studies that have shown nodes within the seizure onset zone to have both greater indegree and outdegree (Parker et al., 2018;van Blooijs et al., 2018) or as being highly bidirectionally connected, acting as a receiver and activator of evoked potentials (Boido et al., 2014).
Strengths and limitations
This study is unique in comparing epileptogenic MTR not only to non-epileptogenic MTR, but also to the contralateral non-epileptogenic MTR within the same patient. This direct comparison offered additional insight supporting the proposal in Matsumoto et al. (2017) that the relative distributions of effective connections are not affected by epileptogenicity. However, we did not observe many significant differences in the connectivity metrics when directly comparing epileptogenic MTR to the contralateral non-epileptogenic MTR, perhaps due to limited spatial sampling in contralateral non-epileptogenic MTR structures. Yet, we did see that the significant differences in response magnitude, density, and centrality across patient groups in the MTR ipsilateral to seizure onset were almost never observed in the contralateral non-epileptogenic MTR. This indicates that the differences we did observe were due to the varying epileptogenicity of the ipsilateral MTR, rather than another confounding variable across patient groups.
For this study, we defined epileptogenic electrodes as those within the clinically defined SOZ. While additional clinically annotated sites of early seizure propagation or irritative zones may be involved in a greater seizure network, they were not deemed responsible for seizure onset and therefore considered non-epileptogenic. However, some studies have found that electrodes that show early propagation of seizure activity can be more excitable to SPES than the rest of the network (Lega et al., 2015;Parker et al., 2018). This may have contributed to the unexpectedly high centrality of some of the contralateral non-epileptogenic MTR in the multifocal group, as each patient in this group had electrodes within contralateral non-epileptogenic MTR classified as early propagation or irritative sites. Additional unexpected significant differences in centrality of non-epileptogenic regions across patient groups may be similarly explained or due to differences in electrode coverage and stimulation across patients.
One potential drawback of this study is the grouping of amygdala, hippocampus, entorhinal cortex, and parahippocampal gyrus as one collective mesial temporal region of interest. While these structures were chosen in part due to consistent electrode coverage, they were grouped in this way primarily because of their joint involvement in seizure onset across patients. These structures rank among the highest epileptogenic involvement in mTLE cases, with multiple structures involved per patient, indicative of a network involvement rather than isolation to singular structures (Bartolomei, Chauvel, & Wendling, 2008). Additionally, these structures are robustly connected through evoked responses between each other (Catenoix et al., 2011;Enatsu et al., 2015;Wilson et al., 1990) and to cerebral cortex (Mégevand et al., 2017). However, we recognize that separate analysis of each structure's connectivity may provide additional insight to account for patient variability in structural epileptogenicity, differences in physiological connectivity, and asymmetry of bidirectional effective connections (Mégevand et al., 2017;Novitskaya et al., 2020). This level of detail would require more extensive sampling of each structure within and across patients and is outside the scope of this study, but a future study with a larger patient pool could investigate these differences.
Directions and clinical applications
While this study focused on effective network properties from evoked potentials produced by SPES, higher frequency activity evoked by SPES can also be used to characterize the electrophysiological connectivity between stimulated brain regions. These evoked spectral responses can provide measures of functional connectivity (Crowther et al., 2019;Gkogkidis et al., 2017) and can be modulated by repetitive cortical stimulation (Keller et al., 2018;Huang et al., 2019) or states of wakefulness and sleep (Usami et al., 2019). While no predictive model is presented here, the unique effective network properties of focal epileptogenic MTR could be useful for assessing the importance of the involvement of the MTR in ictogenesis and seizure propagation in patients with mTLE. In future studies, we aim to study the role of the observed effective network properties as an evaluative tool to improve surgical outcomes.
CONCLUSION
With epilepsy increasingly viewed as a network disease, a deeper understanding of the underlying electrophysiological interactions of brain regions and the mechanisms responsible for seizure generation and propagation through this network is crucial to improving the management of epilepsy and the success of surgical outcomes. Here, the combination of graph theoretical analyses with SPES offers a novel method of characterizing the causal influence of focal epileptogenic brain regions within larger electrophysiological networks. By comparing the effective network properties of mesial temporal structures with varying degrees of epileptogenicity within and across patients, we demonstrate that the MTR is a strong, reciprocally connected subnetwork, with many bidirectional connections within itself and to the surrounding network. We infer that the epileptogenic MTR is an important site of origination for seizure propagation while also acting, to a lesser degree, as a receiver of activity within the epileptogenic network. This elevated propagative influence over the effective network increases as the SOZ is more localized to mesial temporal structures. Our findings provide concrete evidence of network effects of mTLE and provide insights into how current and future therapeutic approaches might be optimized to address these effects.
On the Implementation of the Fixed Point Iteration Current Injection Method to Solve Four-Wire Unbalanced Power Flow in PowerModelsDistribution.jl
This report serves as a technology description of a Julia-based re-implementation of the fixed-point current injection algorithm, available in PowerModelsDistribution.jl [1]. This report does not describe a novel method for solving unbalanced power flow problems. It merely provides a description of the fixed point iteration variant of the current injection method, inspired by the existing open-source implementation in OpenDSS [2]. The current injection method is commonly conceived as a system of nonlinear equalities solved by Newton's method [3, 4]. However, as Roger Dugan points out in the OpenDSS documentation, the fixed point iteration variant commonly outperforms most methods, while supporting meshed topologies from the ground up. We note that the unbalanced power flow algorithm in turn relies on matrix solvers for sparse systems of equations. In the context of circuits and factorizing nodal admittance matrices, the sparsity-exploiting KLU solver [5] has proven to be both reliable and scalable. OpenDSS uses KLU. This report documents work in progress, and the authors aim to update it when learnings are obtained or more features are added to the implementation in PowerModelsDistribution.jl. The authors invite collaborators to contribute through pull requests on the repository.
Introduction to Unbalanced Power Flow
Unbalanced power flow is the problem of finding a solution to the steady-state physics of ac power networks. Specifically, unbalanced power flow engines embed the multiconductor ac circuit laws, where series impedances and shunt admittances are matrices, not scalars.
Without aiming to do a detailed literature review, unbalanced power flow algorithms commonly discussed in the literature are:

• Backward-forward sweep (BFS) method, which is also a fixed-point iteration algorithm [6].
• Current injection method as a fixed-point iteration algorithm [2]³.
• Current injection method as a system of nonlinear equations solved by Newton's method [3,4]⁴.

Note that current injection type approaches are fundamentally centered around a nodal admittance representation of the circuit. Conversely, the canonical BFS approach uses element-wise impedance-based representations and requires a radial topology⁵. The BFS and CIM fixed-point algorithms have the following similarities:

• both are fixed-point iteration methods, and derivative-free,
• both are flexible in the kind of load models they can accept, i.e. power as a function of voltage does not need to be smooth.

Some of their key differences are:

• BFS does not require any large matrix factorizations/solves,
• fixed-point CIM requires one factorization overall, and one matrix solve in each iteration,
• CIM can handle meshed networks naturally, whereas BFS requires workarounds.

Other algorithms have been proposed:

• Holomorphic embedding power flow for three-phase networks [8].
• Interior point methods applied to a system of nonlinear equations representing the power flow physics [1].

³ OpenDSS setting Algorithm=Normal. Note that the Gauss-Seidel power flow method, often used in teaching power flow solution methods for transmission networks, is also a fixed-point iteration method, however distinct from OpenDSS's approach.
⁴ OpenDSS setting Algorithm=Newton.
⁵ i.e. BFS depends on hacks to overcome this limitation.
Document Structure
The document is structured as follows:

• §2 introduces the notation and describes at an abstract level how components can be modeled.
• §3 describes the fixed-point current injection algorithm, building on the mathematical models of the components, based on a system nodal admittance representation.
• §4 derives the mathematical models for specific components that are supported, e.g. loads, generators, lines, cables, transformers.
• §5 discusses the validation of the algorithm implementation w.r.t. OpenDSS.
Abstract Component Models
Every network component $c$, including both power delivery and power consumption/injection, connected to a set of bus-terminal pairs is modeled as a parallel composition of an admittance $Y_c$ and a current source $I^{\mathrm{nl}}_c$,

$$ I_c = Y_c U_c + I^{\mathrm{nl}}_c. \qquad (1) $$

In this report, we illustrate expressions for a four-terminal/four-conductor network, which is general enough to capture up to four-wire networks with explicit neutral, as well as Kron-reduced four-wire networks. Nevertheless, the abstractions work for any number of terminals. For instance, the NEVTestCase set up in OpenDSS contains up to 17 electromagnetically coupled wires. The current and voltage vector sizes for different elements in the four-wire network are:

• as power consumption/injection devices connect to a specific bus, the vectors $U_c$, $I_c$ have length 4,
• for power delivery elements, which connect two buses, the vectors $U_c$, $I_c$ have length 2 × 4 = 8.
Note that the vectors do not need to be standardized to length 4. Nevertheless, in the description of models we assume this, to simplify notation. In the implementation, we use variable-size primitive vectors and matrices.
Indices and Sets
The following sets and indices are used throughout the report:

$x \in X \subset C$ : transformers (6)
$(i, p) \in T_{bt} \subseteq I \times P$ : bus-terminal topology (7)
$(c, i) \in T_{bus} \subseteq C \times I$ : component-bus topology (8)
$(c, i, p) \in T_{term} \subseteq T_{bus} \times P$ : component-bus-terminal topology (9)
Buses and Terminals
Two types of reference nodes occur in the current version, which have a fixed voltage associated with them:

• reference buses, where the phasor is fixed (all phase terminals of a reference bus are given known values),
• the neutral terminal of any bus when it is perfectly grounded (only the neutral of the bus is given a 0 value).
Power Delivery Elements
A 4-wire power delivery element such as branch $l$ connecting buses $i$ and $j$, shown in Figure 1, can be represented by the component-bus-terminal topology $\{(l, i, p) : p \in P\} \cup \{(l, j, p) : p \in P\}$. The branch has a series admittance matrix $Y^s_l$, a sending-end shunt admittance matrix $Y^{sh}_{lij}$, and a receiving-end shunt admittance matrix $Y^{sh}_{lji}$. We can now construct the primitive branch admittance matrix,

$$ Y^{tot}_l = \begin{bmatrix} Y^s_l + Y^{sh}_{lij} & -Y^s_l \\ -Y^s_l & Y^s_l + Y^{sh}_{lji} \end{bmatrix}. $$

Note that on the basis of $Y^s_l$ being rank-4, the rank of the whole matrix depends on that of the shunt terms. If $Y^{sh}_{lij} = Y^{sh}_{lji} = 0$, the resulting matrix has rank 4, but otherwise rank 8. We compose a combined voltage vector $U_l$ by stacking the sending $U_i$ and receiving end $U_j$ voltage phasors,

$$ U_l = \begin{bmatrix} U_i \\ U_j \end{bmatrix}. $$

Similarly, we compose the branch current vector $I_l$ by stacking the sending $I_{lij}$ and receiving $I_{lji}$ current phasors,

$$ I_l = \begin{bmatrix} I_{lij} \\ I_{lji} \end{bmatrix}. $$

Finally, the bus injection model is stated in matrix form as

$$ I_l = Y^{tot}_l \, U_l. $$

For the circuit components that are linear in current-voltage variables, the compensation current is set to 0,

$$ I^{\mathrm{nl}}_l = 0. $$
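As a quick numerical illustration of this block structure, the following Python (NumPy) sketch builds the primitive branch admittance matrix; the function name and argument layout are assumptions for illustration, not the PowerModelsDistribution.jl API (which is written in Julia).

```python
import numpy as np

def branch_primitive_admittance(y_series, y_shunt_from, y_shunt_to):
    # All arguments are (4, 4) complex matrices; the result is the 8x8
    # pi-model block matrix shown above.
    return np.block([
        [y_series + y_shunt_from, -y_series],
        [-y_series, y_series + y_shunt_to],
    ])

# Example: with zero shunts, the 8x8 matrix has rank 4, as noted above.
Z = np.eye(4) * (0.1 + 0.2j)          # toy series impedance matrix
Ys = np.linalg.inv(Z)
Y_tot = branch_primitive_admittance(Ys, np.zeros((4, 4)), np.zeros((4, 4)))
assert np.linalg.matrix_rank(Y_tot) == 4
```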
Power Consumption/Injection Elements
At a 4-terminal bus, a load or generator (or storage system, ...) connected to bus $i$ can be represented by the component-bus-terminal topology $\{(c, i, p) : p \in P\}$. The component is first approximated by a primitive admittance matrix $Y_c$ (see Fig. 2), with a size proportional to the number of terminals it connects to (4 × 4 here). Power consumption and generation devices, such as the "constant power" load model, often have nonlinear characteristics in the current-voltage space. Therefore, we iteratively adapt the "compensation current" injections $I^{\mathrm{nl}}_c$ to obtain the desired solution and solve equation (1).
Abstracting Network Details
After building a model for each component in the network with the structure of equation (1), we construct the system nodal admittance matrix $Y$ based on the primitive admittance matrices $Y_c$ of the individual components $c$, by starting from a zero matrix and adding the values of the primitive admittances to the appropriate rows and columns in the system matrix corresponding to the topological sets. We partition the system voltage vector $U$ into fixed (reference bus, grounded neutral terminals) and variable voltages, and derive the corresponding partitions of the admittance matrix and of the system current vector $I$,

$$ \begin{bmatrix} I^f \\ I^v \end{bmatrix} = \begin{bmatrix} Y^{ff} & Y^{fv} \\ Y^{vf} & Y^{vv} \end{bmatrix} \begin{bmatrix} U^f \\ U^v \end{bmatrix}. $$

We define the variable current vector $I^v$ as a vector of the nonlinear correction currents $I^{\mathrm{nl}}$, which is a permutation of a stacked vector of all component correction currents $I^{\mathrm{nl}}_c$. Note that the voltage vector is augmented with the internal (auxiliary) voltage variables for the transformers (see Section 4.3).
The variable current vector can now be stated as a function of the variable voltage vector,

$$ I^v(U^v) = Y^{vf} U^f + Y^{vv} U^v. \qquad (25) $$
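To make the stamping procedure concrete, here is a Python sketch; the (Y_c, terminals) data layout and the node-index mapping are illustrative assumptions, not the PowerModelsDistribution.jl data model.

```python
import numpy as np
from scipy import sparse

def assemble_system_matrix(components, node_index, n_nodes):
    # components: iterable of (Y_c, terminals) pairs, where `terminals`
    # lists the (bus, terminal) pairs the component connects to.
    # node_index: dict mapping (bus, terminal) -> system matrix index.
    Y = sparse.lil_matrix((n_nodes, n_nodes), dtype=complex)
    for Y_c, terminals in components:
        idx = [node_index[t] for t in terminals]
        for a, ia in enumerate(idx):
            for b, ib in enumerate(idx):
                Y[ia, ib] += Y_c[a, b]
    return Y.tocsc()  # CSC format suits the later sparse factorization
```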
Guaranteeing Invertibility of $Y^{vv}$
Padding of impedance matrices with zeros to represent missing wires will lead to rank-deficient impedance matrices, so this is not allowed.
For wye-to-delta transformers, it is important to pay attention to the nature of this transformation matrix w.r.t. earth. On the wye side, voltages are naturally defined w.r.t. earth. If the neutral is solidly grounded, the phase voltages are interpretable w.r.t. earth, and if the neutral voltage is allowed to rise, a neutral voltage is interpretable w.r.t. earth. However, when traversing the transformer from the wye to the delta side, the delta side only defines potential differences in its own framework, without referencing earth. If any reference to earth is made downstream of the delta winding, the voltages will be naturally unique. However, if no reference to earth is made, there remains a degree of freedom to resolve, also indicating that the admittance matrix is not invertible. In real systems in the vicinity of earth, a reference to earth is unavoidable due to capacitive effects of conductors w.r.t. earth. Furthermore, line impedance matrices, when derived from Carson's equations, consider earth part of the circuit, so they imply the existence of such a reference, even for three-wire networks where capacitance data has not been calculated.
For these reasons, one can generally add a small shunt capacitance to components that do not have any capacitance defined in the input data.

Figure 3 shows the high-level flowchart of the CIM algorithm with only a single factorization step. In the following, we briefly explain how this flow is initialized, how it iterates, and the stopping criterion.
Initialization k = 0
We initialize the variable current vector at 0, $I^v_0 = 0$. Therefore, we can write (25) as

$$ 0 = Y^{vf} U^f + Y^{vv} U^v_0, $$

where $U^f$ is the vector of known voltages and $U^v_0$ is the initialization (iterate 0) of the vector of variable voltages. We can now obtain the first estimates for the voltage by solving this equation,

$$ U^v_0 = -\left(Y^{vv}\right)^{-1} Y^{vf} U^f. $$

Note that we do not actually invert the admittance matrix; we instead factorize it and perform multiple solves based on that single factorization.
Iteration k > 0
We now solve (25) iteratively. First, we evaluate all functions $I^{\mathrm{nl}}_c(U^f, U^v_{k-1})$ for the nonlinear components independently and then place them correctly in the permutated compensation current vector $I^{\mathrm{nl}}_k$, so that

$$ I^{\mathrm{nl}}_k = Y^{vf} U^f + Y^{vv} U^v_k. $$

We can solve this equation for the next voltage estimate,

$$ U^v_k = \left(Y^{vv}\right)^{-1} \left( I^{\mathrm{nl}}_k - Y^{vf} U^f \right). $$

Note that the matrix solve here is identical to that performed in the initialization; therefore we can re-use the stored factorization.
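The initialization, iteration, and convergence steps combine into a short loop. The Python sketch below uses SciPy's sparse LU factorization (splu) in the role that KLU plays in OpenDSS; the interface of the compensation-current callback is an assumption for illustration, and the actual implementation in PowerModelsDistribution.jl is written in Julia.

```python
import numpy as np
from scipy.sparse.linalg import splu

def solve_fixed_point_cim(Y_vv, Y_vf, U_f, compensation_currents,
                          tol=1e-8, max_iter=100):
    # Y_vv, Y_vf: sparse partitions of the system admittance matrix.
    # U_f: fixed voltages (reference bus, grounded neutral terminals).
    # compensation_currents: callable U_v -> I_nl stacking all nonlinear
    # component current injections (assumed interface).
    lu = splu(Y_vv.tocsc())          # single factorization, reused below
    rhs0 = -(Y_vf @ U_f)
    U_v = lu.solve(rhs0)             # initialization with I_v = 0
    for _ in range(max_iter):
        I_nl = compensation_currents(U_v)
        U_v_next = lu.solve(I_nl + rhs0)
        # Infinity-norm convergence criterion on the voltage update.
        if np.max(np.abs(U_v_next - U_v)) < tol:
            return U_v_next
        U_v = U_v_next
    raise RuntimeError("fixed-point iteration did not converge")
```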
Convergence Criterion
When the voltage changes between consecutive iterations become sufficiently small, we have reached the fixed point. For this purpose we choose the infinity norm criterion,

$$ \left\| U^v_k - U^v_{k-1} \right\|_\infty < \epsilon. $$

If the calculations are done in SI units, it is wise to normalize to account for different voltage levels.
Computation Aspects
We only need to factorize $Y^{vv}$ once, during the initialization step. The factorization is then reused in every iteration. It is possible to further minimize the number of iterations by re-building and re-factorizing the admittance matrix at each iteration; for instance, one would re-evaluate the primitive admittance matrices for all constant-power loads at the new voltage phasor. However, factorizing is computationally much more expensive than solving extra iteration steps. Therefore, updating the admittance matrix is best avoided, at the cost of an increase in the number of iterations.
Concrete Element Types
In this section we provide detailed derivations of the admittance matrices and compensation currents for the different concrete element types.
Lines and Cables
Lines and cables are linear power delivery elements, completely described by the branch bus injection model derived above, with zero compensation current.
Switches
Ideal (lossless) switches should really be removed from the circuit 6. However, for validation purposes we replicate the series and shunt admittance matrices proposed in OpenDSS by treating switches similarly to overhead lines and cables and assuming constant series and shunt matrices.
Idealized and Lossy Transformers
We can decompose lossy n-winding transformers into a circuit composed of idealized single-phase transformers, series impedances, and shunt admittances, using the methodology proposed in [9]. We re-use the previously derived representations for lines and shunts, and add a component to represent the idealized single-phase transformer. A single-phase idealized transformer is shown in Figure 4, and we model it with an admittance matrix Y_tf. Table 1 provides the models for three transformer categories, depending on the grounding configuration at either end. The model derivation proceeds as follows (a code sketch follows the derivation below): • With r as the transformation ratio, we define an admittance matrix Y_tf^t of an idealized transformer for each type. To make the matrix invertible, we augment the diagonal with a small positive scalar ε.
• The vector U_t stacks the two from-side terminal voltages, the two to-side terminal voltages, and an auxiliary variable for the current, I_aux,t. • The terminal currents are stacked, and the last entry is set to zero.
• We solve the Ohm's law expressions, derive the expressions for the terminal currents, and obtain the model summary in the limit ε → 0.
Note that conservation of current is implied by these models. For instance, for the transformer with no grounding, the terminal currents satisfy I_tij,a = I_tji,a/r.
For grounding at the sending end, the currents satisfy I_tij,a + I_tji,a/r = 0 and I_tji,a + I_tji,b = 0, and analogous relations hold for grounding at both the sending and receiving ends. We also confirm that this transformer is lossless, since the complex powers entering the two sides sum to zero.
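To make the construction concrete, here is a minimal Julia sketch of one possible idealized-transformer matrix. The terminal ordering, sign conventions, and the placement of the voltage constraint in the last row are assumptions for illustration; they do not reproduce the exact matrices of Table 1 or of [9].

```julia
using LinearAlgebra

# Idealized single-phase transformer with ratio r. The state vector is
# x = [U1, U2, U3, U4, I_aux] (from-side pair, to-side pair, auxiliary
# current). Rows 1-4 express the terminal currents through I_aux; the
# last row enforces the voltage constraint (U1 - U2) = r * (U3 - U4).
function ideal_transformer_admittance(r::Real; ε::Float64=1e-12)
    Y = [0.0  0.0  0.0  0.0   1.0;
         0.0  0.0  0.0  0.0  -1.0;
         0.0  0.0  0.0  0.0  -r;
         0.0  0.0  0.0  0.0   r;
         1.0 -1.0  -r    r    0.0]
    return Y + ε * I   # augment the diagonal so the matrix is invertible
end
```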
Loads and Generators - Wye Connected
Wye-connected loads or generators are given three complex power setpoints, one between each phase and the neutral. A reference voltage level is needed for all voltage-dependent models except the constant-power model. Using these parameters, we can define an equivalent admittance vector for each phase (no mutuals). The Kron-reduced model of the wye-connected load/generator has an admittance matrix of diagonal form, whereas the model with explicit neutral has an admittance matrix that additionally couples the phases to the neutral. Table 2 lists the compensation current vectors for the Kron-reduced and explicit-neutral variants of the different load types (constant impedance, constant power, constant current, and exponential). For each load type, we need to derive the variable power and current vectors to calculate the compensation current vector. In models with explicit neutral, we have to make sure the currents balance.
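A minimal Julia sketch of the constant-impedance case follows; the function name and the exact layout of the explicit-neutral matrix are assumptions consistent with the current-balance requirement above.

```julia
using LinearAlgebra

# Equivalent admittance of a wye-connected, constant-impedance load:
# y_p = conj(S_p) / Uref^2 per phase, with no mutual terms. The 4x4
# explicit-neutral variant balances the phase currents in the neutral row.
function wye_load_admittance(S::Vector{ComplexF64}, Uref::Float64;
                             explicit_neutral::Bool=true)
    y = conj.(S) ./ Uref^2
    explicit_neutral || return Matrix(Diagonal(y))    # Kron-reduced 3x3
    return [Matrix(Diagonal(y))  -y;
            -transpose(y)         sum(y)]
end
```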
Loads and Generators - Delta Connected
For delta-connected loads and generators, the power setpoints are interpreted as the product of the phase-to-phase voltages and the conjugate of the delta currents. We define a reference value for the phase-to-phase voltage magnitude; for instance, in a network with a 230 V nominal phase voltage, this is a length-3 vector filled with the value 230√3 V. We then derive the vector of equivalent shunt admittances, where the superscript •2 indicates the element-wise square and division is taken element-wise. The line voltage vector is a linear transformation of the phase voltage vector, with transformation matrix M, and such constant-admittance delta loads satisfy (47). Similarly, the phase current vector I_c is a linear transformation of the delta line current vector I_Δ^d, but with transformation matrix M^T. Substituting (48) into (49) and using (47) yields the appropriate admittance matrix for delta-connected loads and generators. Table 3 lists the compensation current vectors for the different load types; again, the variable current vector must be derived to compute the compensation current vector.
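The sketch below illustrates this construction for the constant-impedance case; the ordering of the delta branches inside M is an assumption.

```julia
using LinearAlgebra

# Phase-to-line transformation: U_Δ = M * U and I = Mᵀ * I_Δ, so a
# constant-admittance delta load has Y = Mᵀ * diag(y_Δ) * M.
const M = [ 1.0 -1.0  0.0;    # U_ab = U_a - U_b
            0.0  1.0 -1.0;    # U_bc = U_b - U_c
           -1.0  0.0  1.0]    # U_ca = U_c - U_a

function delta_load_admittance(S::Vector{ComplexF64}, Ull_ref::Float64)
    yΔ = conj.(S) ./ Ull_ref^2          # equivalent delta branch admittances
    return transpose(M) * Diagonal(yΔ) * M
end
```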
Differences w.r.t. OpenDSS
This report provides a summary of the power flow solver implemented in PowerModelsDistribution.jl (PMD), inspired by the existing open-source solver included in OpenDSS. However, there are still some differences between the two implementations: • The PMD implementation is valid for up to four-wire networks, but not more, whereas OpenDSS can solve cases with more wires. Supporting n wires should be relatively straightforward, as most of the code is agnostic to the number of wires; the key challenge is the validation of such an extension.
• PMD computes power flow results in per-unit values, whereas OpenDSS does not 7. The per-unit system interacts with the addition of small diagonal values; e.g., OpenDSS adds the equivalent of 10 kvar capacitive at 345 kV 8, which is easy to do in SI units in the matrix-manipulation code, but requires additional context if it is performed in per unit.
• The load-model relaxation feature in OpenDSS (changing the load model to constant impedance outside of the vminpu-vmaxpu range) has not been implemented in CIM. The lack of this relaxation makes convergence more challenging.
• For maximum compatibility, CIM uses Julia's default matrix factorization routines, whereas OpenDSS uses the KLU library. KLU and other specialized matrix solvers could be accessed from Julia through LinearSolve.jl 9.
• The transformer decomposition described above [9] and implemented in CIM is slightly different from how OpenDSS models transformers (which uses additional internal nodes).
• OpenDSS adds small values (ppm) to the diagonal of the admittance matrix for various elements; CIM does not replicate all of those cases.
Validation
The CIM power flow results are validated against OpenDSS results (calculated using OpenDSS-Direct.jl 10, version 0.8.1). Table 4 lists the maximum voltage error in per unit between CIM (tolerance 1E-8) and OpenDSS (tolerance 1E-10), computed as the largest value of |U_CIM_{i,p} − U_ODSS_{i,p}| over all buses i and terminals p, where U_CIM_{i,p} and U_ODSS_{i,p} are the bus terminal voltages calculated by CIM and OpenDSS-Direct.jl, respectively. The first batch of test cases consists of unit tests of the CIM package, and the second batch consists of larger networks. We aim to update this table when changes are made or when more cases are validated.
Note: the implementation still has room for improvement, as some cases still result in singularity errors, for instance the Egrid -SantaFe/urban-suburban/uhs0 1247 uhs0 1247 -udt4077 network.
Future work
The implementation can still be improved, and we invite collaborators to work on these issues:
• implement the load-model relaxation as used in OpenDSS, allowing faster convergence on more test cases;
• post-processing of congestions (tagging occurrences of under/over voltage/current, deriving reliability metrics);
• testing the limits of scalability of the implementation, e.g. across a variety of linear solvers;
• continued validation efforts on more distribution network test cases;
• alternative models for switches, e.g. Kron-reducing buses as part of the algorithm;
• an implementation without per-unit conversion and without transformer decomposition;
• volt-var/volt-watt control for inverters, including support for arbitrary control laws and multiple voltage inputs 11;
• multi-time-step support that re-uses the previous solution and the factorized nodal admittance matrix where possible;
• uncertainty extensions, e.g. polynomial chaos or Monte-Carlo simulation.
As a final note, there is a need for more flexible distribution network data models that can capture unbalanced power flow, OPF, and state estimation data sets with well-defined semantics, and for libraries that check for issues and inconsistencies in such data sets. For instance, in OpenDSS one can mix Kron-reduced and explicit-neutral representations in a single case study, which makes data checks difficult. Furthermore, in the authors' opinion, load-model relaxation and Kron's reduction should be established as settings of the power flow algorithm, not as features of a data set describing a network.
Gender and language use in scientific grant writing
Women in science, technology, engineering, and math are not equally represented across tenure-track career stages, and this extends to grant funding, where women applicants often have lower success rates compared with men. While gender bias in reviewers has been documented, it is currently unknown whether written language in grant applications varies predictably with gender to elicit bias against women. Here we analyse the text of ~2000 public research summaries from the 2016 Natural Sciences and Engineering Research Council (NSERC) individual Discovery Grant (DG) program. We explore the relationship between language variables, inferred gender and career stage, and funding levels. We also analyse aggregated data from the 2012–2018 NSERC DG competitions to determine whether gender impacted the probability of receiving a grant for early-career researchers. We document a marginally significant gender difference in funding levels for successful grants, with women receiving $1756 less than men, and a large and significant difference in rejection rates among early-career applicants (women: 40.4% rejection rate; men: 33.0% rejection rate). Language variables had little ability to predict gender or funding level using predictive modelling. Our results indicate that NSERC funding levels and success rates differ between men and women, but we find no evidence that gendered language use affected funding outcomes.
Introduction
Women in science face additional barriers to success that are rooted in historical biases (Wellenreuther and Otto 2016), which remain widespread despite over a decade of policies aimed at increasing female participation in science, technology, engineering, and math (STEM) fields (Larivière et al. 2013). While higher levels of gender parity have been achieved at earlier STEM career stages (e.g., graduate school, post-doctoral fellowships) (National Science Foundation and National Center for Science and Engineering Statistics 2018), there are still fewer women at senior levels (Larivière et al. 2013). This pattern has been described as the "glass obstacle course", a metaphor for the unequal gendered processes at work in women's careers in STEM fields (De Welde and Laursen 2011).
Gender diversity contributes to a diversity of perspectives, which can benefit scientific discovery by generating novel research questions and methods and facilitating wider application of research findings (Schiebinger et al. 2011–2018). The untapped potential of fully trained and credentialed female scientists limits advances in basic and applied research.

Written gendered language differences (e.g., written language characteristics associated with a particular sex or social gender) have been documented in nonscientific (Newman et al. 2008) and scientific contexts (Tse and Hyland 2008). For example, in reflective writing (e.g., narrative essays), female medical students used more words related to positive emotions than male students, and male medical students wrote longer documents compared with female students (Lin et al. 2016). It is currently unknown if differences in gendered language use exist in scientific grant writing, and no direct investigation of language use and gender for senior-level funding applications in STEM has been conducted. If a clear signature of gendered language use is detected and is related to funding differences between male and female researchers, steps can be taken to empower researchers with this knowledge and (or) to instruct reviewers to be careful about implicit bias being triggered by word choice. Conversely, if no differences are found, then bias is unlikely to be triggered by linguistic associations with gender.
Here we investigate gender, career stage, and language use in publicly available summaries of individual Discovery Grant (DG) research proposals that were successfully funded in the 2016 awards competition by the Natural Sciences and Engineering Research Council (NSERC). NSERC funds research programs, not specific research projects, and DGs are the main source of general research funds for Canadian academics in the natural sciences and engineering. Principal investigators can only hold one DG at any point in time, and this program is important for sustaining research programs for Canadian scientists. Funding decisions and amounts are based on scores for: (i) excellence of the researcher, (ii) merit of the proposal, and (iii) training of highly qualified personnel; proposed budgets do not generally affect funding amounts, unless unjustified or underestimated (NSERC 2018). We use aggregated data supplied by NSERC from the 2012–2018 DG cycles to investigate the impact of gender on grant success and award levels for early-career researchers (ECRs, defined by NSERC as within three years of their first academic appointment). We then use all of the DG public summaries from 2016, as well as information about award values, to investigate the impact of language on grant funding. Specifically, we use these data to address the following four questions:
1. Are gender and career stage associated with NSERC award value?
2. Do gender differences exist in language use within NSERC public summaries?
3. How does NSERC award value relate to language use, gender, and career stage?
4. What factors, including language use, predict career stage?
Materials and methods
The data used in this study are publicly available or are obtainable from NSERC in summary form, so no research ethics board approval was required (Article 2.2 of CIHR, NSERC, and SSHRC 2014).
Data and analysis of NSERC funding success and award levels
We requested funding data from NSERC, who provided aggregated data summarizing the total numbers of applicants and funded grants (Table 1). Because of an observed gender difference in funding success for ECRs in 2016, we asked for further data on ECR success rates from the 2012–2018 DG grant cycles, broken down by gender and by selection committee (Table 2).
Data collection for linguistic data
Using R (version 3.3.3; R Core Team 2017) within the RStudio environment (RStudio Team 2016) and custom scripts, we collected all researcher data and NSERC public summaries (n = 2094) for the 2016 DG Competition from the NSERC database (nserc-crsng.gc.ca/ase-oro/index_eng.asp, accessed 24 August 2017). We analyzed NSERC grants designated as "RGPIN" (research grant program to individuals) for the 12 main disciplines (i.e., "selection committee" as listed in Table 2). [Table 1 note: Includes Discovery and Subatomic Physics Individual and Team Grants, which were excluded from our analyses. "Female"/"Male"/"Not indicated" and "Early-career" (within three years of securing an academic job) were self-reported to NSERC during the grant application stage, with exact numbers obtained directly from NSERC. NSERC, Natural Sciences and Engineering Research Council.]
We focussed our analysis on English public summaries (n = 1959), because the sample size for French public summaries was too small for an independent analysis (n = 62). Public summaries are limited in length by an online text box, and English summaries averaged nearly 400 words in length (384.23 ± 1.6 words). In total, we analyzed 752 734 words across 1959 public summaries. Only the summaries and not the full grant proposals are publicly accessible; therefore, our linguistic analysis was limited to this portion of the grant application. Nevertheless, the public summaries form an important portion of the grant application, appearing on the first page of the proposal received by both the reviewers and the grant panel and serving as an abstract of the grant.
For each public summary, the gender of the author was inferred using the name-association R package "gender" (Mullen 2015), which determines gender based on historical data sets of name use by gender, reporting the probability that a name is associated with a male (score of 1) or female (score of 0) individual. In cases where researcher gender was not strongly inferred (name not in the "gender" database or associated with gender probabilities between 0.2 and 0.8), we checked researcher websites and institutional news articles for identifying information (e.g., personal pronouns). We acknowledge that this method is subject to error and may not match each researcher's self-identified gender (including nonbinary genders), except as captured by their current name choice, and report our results in an aggregate form only. To determine the career stage of each author, we requested information on year-of-first-grant for the successful 2016 DG cohort from the granting agency, but the request was declined citing logistical constraints. We thus inferred the career stage of each individual researcher by searching all years of data available in the NSERC online database (1991–2016; accessed 4 March 2018) for grants given to a researcher by the name and institution used in 2016. We calculated the number of years since researchers received their first listed NSERC DG as a discrete measure of career stage (range: 0–25 years). Our estimate of career stage was bounded by 25 years, since the earliest date in the NSERC database was 1991, which may underestimate career stage for senior researchers. Researcher names and institutions were used jointly to limit misassignments of grants to different individuals with the same name, but we recognize that career stage is underestimated in cases where researchers change their name or institution, both within Canada and internationally. Our results using the individual data from the 2016 DG cycle (including all linguistic analyses) use inferred gender and inferred career stage (years since first NSERC DG at the same institution).
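The inference step can be summarized by the following sketch (the paper used the R package "gender"; this illustration is in Julia, and the dictionary and thresholds are assumptions).

```julia
# Infer a probabilistic gender label from historical name-usage data.
# `name_probs` is a hypothetical lookup mapping first names to the
# probability that the name is associated with a male individual.
function infer_gender(first_name::AbstractString,
                      name_probs::Dict{String,Float64};
                      lo::Float64=0.2, hi::Float64=0.8)
    p = get(name_probs, lowercase(first_name), missing)
    ismissing(p) && return :unknown    # name not in the database
    p >= hi && return :male
    p <= lo && return :female
    return :ambiguous                  # requires a manual check
end
```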
Generation of language variables
We used Linguistic Inquiry and Word Count (LIWC) software (Pennebaker et al. 2015) to analyze the public summaries and generate language variables. LIWC software is widely used in the social sciences and has been parameterized across many different genres of writing, including scientific journal articles (Pennebaker et al. 2007), with the current version capturing, on average, over 86% of the words used in written text and speech (Pennebaker et al. 2015). Studies using LIWC have successfully detected gender differences in language in emails, narrative essays, and text excerpts from psychological studies (Newman et al. 2008; Cheng et al. 2011; Lin et al. 2016). LIWC comprises a large dictionary of words and compares inputted written text to its dictionary to generate scores for n = 92 language variables, including word count, words per sentence, 86 traditional variables (e.g., content and style words), and four summary variables that are described below (Pennebaker et al. 2015).
Traditional LIWC variables include content and style words, and scores are expressed as a percentage of the total words used within the text provided. Content words generally include nouns, regular verbs, adjectives, and adverbs, whereas style words include pronouns, prepositions, articles, conjunctions, and auxiliary verbs. A broad distinction between the two word groups is that content words reflect what is being said, whereas style words reflect how people are communicating (Tausczik and Pennebaker 2010). The four summary variables are Analytic thinking, Clout, Authenticity, and Emotional tone. Analytic thinking refers to the degree of formal, logical, or hierarchical thinking patterns in text. Clout scores writing that is authoritative, confident, and exhibits leadership. Authenticity refers to writing that is personal and honest. Finally, Emotional tone is scored such that higher numbers are more positive and lower numbers are more negative. These four summary variables are research-based composites that have been converted to 100-point scales, where 0 = very low along the dimension and 100 = very high (Pennebaker et al. 2015). Before data analysis, language variables (n = 92) were tested and removed if they exhibited near-zero variance (Kuhn 2017), leaving n = 74 language variables used in the reported analyses.
Statistical analysis of award data
To investigate the magnitude of the gendered funding gap in the 2016 DG cohort, we used a linear mixed-effects model relating award value to inferred gender and career stage. Gender and career stage were treated as fixed effects, whereas selection committee (i.e., discipline) was treated as a random effect to control for discipline-specific differences in average award value. Throughout, we used the Kenward-Roger adjustment for the degrees of freedom in the mixed-effects models. To determine if funded versus not-funded outcomes depended on gender, we performed Pearson's χ2 tests on the total data and for each selection committee separately, based on the aggregated data from 2012 to 2016 (Fig. 1). We also performed a Mantel-Haenszel χ2 test, which accounts for variation among selection committees and tests the null hypothesis of a common odds ratio of one (female and male researchers equally likely to be funded). Odds ratios were calculated using the R package "samplesizeCMH" in R version 3.4.3.
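For reference, the Mantel-Haenszel common odds-ratio estimate pooled across strata can be written in a few lines; the sketch below is illustrative (in Julia rather than the R package used in the paper), and the ordering of the 2×2 cells is an assumption.

```julia
# Mantel-Haenszel common odds ratio across K strata (here, selection
# committees). Each stratum is a 2x2 table (a, b, c, d), e.g.
# a = funded women, b = unfunded women, c = funded men, d = unfunded men.
function mantel_haenszel_or(tables::Vector{NTuple{4,Int}})
    num = sum(a * d / (a + b + c + d) for (a, b, c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b, c, d) in tables)
    return num / den
end
```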
Statistical analysis of linguistic data
We used a principal component analysis (PCA) based on covariances to visualize the linguistic data, and used a permutational multivariate analysis of variance using distance matrices (PERMANOVA) from the R package "vegan" (Oksanen et al. 2017) to examine whether gender explains differences in the multivariate LIWC variables. To determine how many axes were meaningful, we compared the observed eigenvalues to a broken-stick random expectation; axes in which the observed eigenvalue was greater than the randomly generated expected eigenvalue were considered meaningful. The broken-stick model has been shown to yield more consistent results compared to other stopping methods (Jackson 1993). Our comparison identified the first seven principal component (PC) axes as more informative than expected, explaining 40% of the variance in the data.
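The broken-stick stopping rule is simple to state: the k-th axis is retained when its observed proportion of variance exceeds the broken-stick expectation b_k = (1/p) Σ_{i=k}^{p} 1/i. A minimal sketch (in Julia; the paper's analysis was done in R):

```julia
# Return the indices of PC axes whose observed variance proportion
# exceeds the broken-stick expectation.
function broken_stick_axes(eigenvalues::Vector{Float64})
    p = length(eigenvalues)
    props = eigenvalues ./ sum(eigenvalues)
    expected = [sum(1 / i for i in k:p) / p for k in 1:p]
    return findall(props .> expected)
end
```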
We used language variables with machine-learning techniques and random-forest predictors (a model-averaging approach) to train and test classification and regression models using the R package "caret" (Kuhn 2017). The data were randomly subdivided into a training data set (75%) and a testing data set (25%). To address our imbalanced gender data and avoid classification models that always predict the most common class (men), we up-sampled women in our training data to a sample size equivalent to that of men (i.e., adding women by randomly sampling them from the training data set with replacement). The trained model was then applied to the testing data set, with its original unbalanced gender composition. We used one classification model with gender as a binary response variable and two regression models with career stage and award value as continuous response variables. We conducted mixed-effects models on the variables that explained the most variance in our random-forest models, accounting for selection committee and gender (as appropriate) and controlling for multiple comparisons using a Bonferroni correction.
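The up-sampling step can be expressed compactly; the following Julia sketch (all names are illustrative) resamples minority classes with replacement until the classes are balanced, mirroring the procedure described above.

```julia
# Balance a training set by up-sampling minority classes with replacement.
# `X` is a feature matrix (rows = observations); `labels` holds the class
# of each row.
function upsample_minority(X::AbstractMatrix, labels::AbstractVector)
    nmax = maximum(count(==(c), labels) for c in unique(labels))
    idx = Int[]
    for c in unique(labels)
        members = findall(==(c), labels)
        append!(idx, members)
        extra = nmax - length(members)
        extra > 0 && append!(idx, rand(members, extra))  # sample w/ replacement
    end
    return X[idx, :], labels[idx]
end
```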
In addition to the presented results, we conducted analyses on data sets with reduced language variables to assess the robustness of our results, either using only the first seven PC axes or restricting the language variables to those exhibiting a standard deviation >1 (see Supplementary Material 1). We found no major differences in model performance compared with the broad language data set analysed here.
Results
1) Are gender and career stage associated with NSERC award value?
We used the aggregated data from NSERC for the 2016 DG cycle to determine differences in success rate by gender (Table 1). Overall, the success rates for applicants who self-identified as male and female were 67.5% and 64.6%, respectively, amounting to a higher success rate for men relative to women by a factor of 1.045 (Table 1). The average grant size was $33 155 (NSERC 2016), with men receiving a factor of 1.050 more funding relative to women (Table 1).
Analysing the individual data from the 2016 DG cycle, we inferred that 21.4% of awardees were women and 78.6% men, consistent with the aggregated data from 2016 (Table 1; 21.1% women and 78.9% men among those who self-reported gender) and from previous years (table 4.1 in NSERC 2017). The proportion of female awardees within each selection committee ranged from ∼10% in the physical sciences, engineering, and mathematics to ∼30% in life and Earth sciences (Fig. S1). We inferred that 36% of researchers received their first DG award in the 2016 competition, which is likely an overestimate given name and institutional changes. Indeed, this estimate is substantially higher than the 15.0% of awardees reported to be ECRs (Table 1), although a similar fraction of awardees were "Established Researchers Not Holding a Grant" (table 2 in NSERC 2016), who would also contribute to the number of awardees receiving a DG for the first time. Nevertheless, we caution that our measure of career stage, as inferred here, only reflects estimated time since first receiving an NSERC DG and is biased downwards by name and institutional changes.
Analysing the individual data for 2016 DG awardees, the average annual grant was $34 375 (± 623 SE) for women and $36 264 (± 384 SE) for men. Only career stage significantly affected differences in the amount of funding awarded (F[1,1946.2] = 66.50, p < 0.0001), with a marginally significant effect of gender (F[1,1945.3] = 2.89, p = 0.089) and no significant interaction between the two variables (F[1,1944.6] = 0.02, p = 0.902) (Table 3). Keeping in mind the caveat of our measure of career stage, the main effect of career stage amounted to an average $386 (± 83.97 SE) increase in award value for every one-year increase in years since the first DG award. Although only marginally significant, the gender difference in award amount, accounting for career stage and selection committee, was $1756 (± 1032 SE).
Obtaining funding as an assistant professor is an important step in establishing an academic scientific career. Among the inferred first-time recipients, a strong male bias was also found in the 2016 cohort: 77% of the first-time recipients were men and 23% women. Inferred first-time recipients who were men also received a higher average award of $31 965 (± 469 SE), compared to $30 257 (± 587 SE) for women. The gender bias in funding level for researchers inferred to have received their first NSERC DG in 2016 was again only marginally significant (F[1,699.3] = 3.65, p = 0.057) (Table 3). Specifically, the gender difference in award amount among inferred first-time recipients, controlling for selection committee, was $1620 (± 848 SE), a roughly 5% higher award amount for male applicants relative to female applicants.
An even larger discrepancy exists when considering gender differences in the proportion of ECRs who were funded versus unfunded. Figure 12 in NSERC's (2016) annual summary suggested a higher rejection rate for female ECRs relative to male ECRs. To investigate further, we requested data on summary outcomes for early-career applicants by gender and by selection committee from the 2012–2018 DG grant cycles (Table 2). These data were received in aggregated form and included ECR status and gender as self-reported to NSERC. Applications from female ECRs were rejected 40.4% of the time (352/872), compared to only 33.0% of the time for male ECRs (656/1990), a significant result with females being rejected 1.225 times more often (p = 0.00016, Pearson's χ2 test), which remains significant when accounting for selection committee (p = 0.0033, Mantel-Haenszel χ2 test). Figure 1 indicates that the gender disparity is particularly high in two of the selection committees (1501: Genes, Cells, and Molecules; 1502: Biological Systems and Functions).
2) Do gender differences exist in language use within NSERC public summaries?
We first performed a PCA on the linguistic data to focus on the main axes of variation among the multiple language variables measured by LIWC (n = 74). The variables with the largest loadings for PC1 were total pronouns, function words, dictionary words, the time-orientation category "present focus", and regular verbs, all having negative loadings. The main loading variables for PC2 were Clout, first person plural, affiliation, and social words having positive loadings, and affect words having a negative loading. Because of the large sample size of our data set, we observed a significant effect of gender on the first two PC axes (F[1,1957] = 4.86, p = 0.001), with women tending to score higher on PC2 (Fig. 2). Nevertheless, gender explained little of the variability in language variables (R2 = 0.002).
To investigate whether variation in language use could accurately predict author gender, we used a random-forest classification model using language variables to predict the gender of award recipients, including selection committee as a covariate. Predictive model accuracy was lower than the no-information rate (NIR) (accuracy = 0.77, NIR = 0.79), and the random-forest classification had a poor ability to correctly classify women (sensitivity = 0.06), despite up-sampling women to equal the sample size of men during model training, but a good ability to correctly classify men (specificity = 0.97). Precision, a measure of classifier exactness, was 0.30, indicating a moderate false-positive rate. Cohen's kappa, a measure of classification accuracy normalized by the imbalance of the gender classes in the data, was 0.03, indicating a weak predictive ability. We confirmed our main results on predictive ability using another classification method (linear discriminant analysis also performed poorly: Cohen's kappa = 0.11). Among the linguistic variables, "conjunctions" was the most important variable in our random-forest model with the highest predictive ability, explaining ∼15% of the variation ascribed to gender. Considering the variable on its own, women had a significantly higher mean conjunction score than men in a mixed-effects model controlling for selection committee (women = 6.42, men = 6.00; F[1,1955.2] = 19.72, p < 0.0001, which is less than the Bonferroni-corrected α = 0.00068 having considered 74 language variables). Conjunctions (e.g., "and", "also", "but", "though", "whereas") are style words that reflect how people are communicating. Funding amount did not vary with conjunction score, controlling for gender and selection committee in a mixed-effects model (F[1,1949.8] = 0.267, p = 0.605).
3) How does NSERC award value relate to language use, gender, and career stage?
We were also interested in testing whether language variables, gender, and career stage were predictors of award value. Although language use did not predictably vary by gender, we investigated whether certain writing styles or word use accurately predicted funding level, when accounting for career stage. To account for variation in award value among disciplines, we first scaled award value by the average award within each selection committee. We then used a random-forest regression model to assess the effects of language variables, career stage, and gender on scaled award value. A random-forest regression model using the testing data set had the same low R2 of 0.03 as the training data set, indicating a poor predictive ability, with a marginally higher root-mean-square error (RMSE; testing: 14 495; training: 12 970) and mean absolute error (MAE; testing: 9920; training: 9374). Career stage was the most important predictor, explaining ∼9% of the variation in scaled award value. Of the linguistic variables, "total pronouns" was the most important, explaining ∼6% of the variation in scaled award value. Considering this linguistic variable on its own, "total pronouns" was not a significant predictor of award value, controlling for selection committee and gender, once adjusting for multiple comparisons (F[1,1946.2] = 8.70, p = 0.003, which is much greater than the Bonferroni-corrected α = 0.00068 having considered 74 language variables). Pronouns include personal pronouns (e.g., "I", "we", "us", "you", "she", "him", "her", "they") and impersonal pronouns (e.g., "it", "its", "those").

4) What factors, including language use, predict career stage?
Finally, we tested whether language variables could be used to predict career stage, for example, whether writing style, word use, or tone predictably changed with continued grant-writing experience, treating gender and selection committee as covariates. A random-forest regression model using the testing data set had a slightly higher, but still low, R2 (testing: 0.07; training: 0.05), with similar RMSE (testing: 0.96; training: 0.98) and MAE (testing: 0.83; training: 0.84). The "Genes, Cells, and Molecules" selection committee emerged as the most important predictor variable, explaining ∼6% of the variation in career stage of the 2016 awardees. According to our measure of career stage, more awardees had their first recorded DG award in 2016 in this selection committee (47%) compared with the total across all selection committees (36%). Among the linguistic variables, "tentative" was the most important predictor of career stage, explaining ∼5% of the variation. Considering the language variable "tentative" on its own, tentative scores increased with career stage, accounting for selection committee and gender as random effects, but this was only marginally significant after correcting for multiple comparisons (F[1,1921.1] = 11.09, p < 0.0009, which is slightly greater than the Bonferroni-corrected α = 0.00068 having considered 74 language variables). Note that if we further remove linguistic variables that exhibit little variance using a higher threshold (standard deviation ≤1), the rise in tentative score with career stage remains significant even after Bonferroni correction (see Supplementary Material 1). The tentative score measures the use of words such as "maybe", "perhaps", or "guess".
Discussion
Representation of women continues to lag in many scientific disciplines, with women particularly underrepresented among senior academic scientists (Council of Canadian Academies 2012). A report by the Council of Canadian Academies (2012) concluded that "... time alone will probably not be enough to balance the proportion of women and men at the highest levels of academia" (p. 53). The challenges that stymie career progress for women in STEM are multifaceted (Shen 2013) and include biases that diminish evaluations of women's scientific work. We sought to determine whether grant funding differed by gender in Canada and whether language choice in scientific writing was correlated with gender, hypothesizing that men and women may differ in the tone of their word use (e.g., using words that are less authoritative in tone). If so, then biases could be triggered by reading scientific texts, even if the gender of the writer is not known. If such gendered word use could be detected, then scientists and reviewers might gain from the knowledge of how texts differ by gender and how they might be read. These questions are timely, as reflected by NSERC's current Equity, Diversity, and Inclusivity initiatives to address issues of equal representation in STEM and increase the social relevance and impact of research (Holmes and NSERC 2018). Below we summarize our main results in light of the questions posed in the introduction.
Gender and career stage are associated with NSERC award value
Our results show the effect of gender on award value in 2016 was marginally significant once we accounted for career stage and selection committee, with women at the same career stage awarded ∼5% smaller research grants than men ($1756 less, Table 3, based on inferred gender). Career stage had a significant effect on funding amount (see Results), with researchers awarded an average of $386 more for every additional year since they first held an award (see also NSERC 2016). Previous studies have also reported gender differences in STEM grant success rates (Bornmann et al. 2007) and funding amounts (Head et al. 2013).
We also found the success rate of ECRs was significantly higher for men than women (Fig. 1), using NSERC's definition of ECRs as within three years of their first academic appointment and based on the aggregated data supplied by NSERC (personal communication, 2018). Indeed, ECRs were significantly more likely to have their grant rejected if they were a woman (40.4% rejection rate) than if they were a man (33.0%) over the 2012–2018 award cycles (Table 2; Fig. 1). Given evidence that grant rejection at this early stage can have a substantial negative impact on future participation in funding competitions (Bol et al. 2018), the detected inequality in success rates poses a potentially important obstacle in women's academic careers.
Determining the extent to which bias plays a role in the evaluation of research excellence at this early stage, and the impact of grant rejection on reapplication rates for women, should be a major research priority. We thus recommend that NSERC investigate the causes behind the large discrepancy in funding success between early-career men and women and institute policies that correct for any biases that may exist in the assessment of grants. Initiatives that prioritize efforts at early-career stages, especially for those in underrepresented groups, including women, are particularly valuable, assisting the next generation of academics in getting their start.
Gender differences in language use were not detected within NSERC public summaries

Contrary to our expectation that scientific writing might be gendered, we found that public summaries of awarded grants did not differ substantially in language use between male and female academics, as measured using the LIWC program. While a small and significant difference could be detected between male and female authors (Fig. 2), there was little to no ability to predict gender based on language use, according to either a random-forest classifier or a linear discriminant analysis. It has been previously demonstrated that as writing becomes more technical, gendered language differences diminish (Francis et al. 2001; Yavari and Kashani 2013). Our results similarly indicate little difference between male and female awardees in linguistic word use in formal scientific writing, at least as captured within NSERC public summaries (Fig. 2). This suggests that word choice in successfully funded grant applications is unlikely to trigger gender biases.
The most distinguishing variable was "conjunction", which explained about 15% of the variation ascribed to gender. One possible explanation is a difference in narrative style between men and women. Conjunctions function to join multiple thoughts together and are important for creating a coherent narrative in writing (Graesser et al. 2004). Scientists are increasingly incorporating elements of narrative style in their writing, which has been associated with articles published in higher-impact journals (Hillier et al. 2016), a factor that may be related to increased funding success according to NSERC's evaluation criteria (NSERC 2018). We found a significant difference in conjunction scores between men and women, with women scoring higher on average, demonstrating an aspect of language where gender differences exist.
NSERC award value or career stage were poorly predicted by language variables
We then examined whether award value could be predicted by differences in language use. For example, if certain styles of writing are associated with publications in higher-impact journals (Hillier et al. 2016), those styles may also be favoured by grant proposal reviewers. However, our random-forest analysis indicated that language use had little power to predict award value, with career stage being the most important predictor variable. This finding is consistent with the results of our linear mixed-effects model of the 2016 DG data, where we found a significant and positive relationship between time since first NSERC DG and award value, which may reflect increasing experience gained over the course of a career.
The language variable that explained the most variation in scaled award value in our model was "total pronouns", including all personal and impersonal pronouns, which are associated with the active voice. As mentioned, narrative writing styles using the active voice have been associated with publications in higher-impact journals (Hillier et al. 2016), which could affect grant evaluation and subsequent award value. While award level rose with total pronoun score, the relationship was nonsignificant following Bonferroni correction.
Finally, we sought to investigate whether language use differed among researchers at different career stages. Career stage was poorly predicted by language variables. Instead, one of the covariates, the "Genes, Cells, and Molecules" selection committee, was the most important predictor variable, with more awardees inferred to be receiving a grant for the first time in 2016 (47% versus 36%, averaged across all selection committees). The language variable that explained the most variation in model performance was "tentative", although the result was only marginally significant after Bonferroni correction. Tentative language tended to increase with career stage, which may reflect more cautious language use with respect to the proposed research and (or) increased attention to gaps in the state of the field.
Strengths and limitations of our study
Although our predictive modelling framework found that linguistic variables were poor predictors of gender or grant value, we have no reason to believe our results are an artifact of the model training and tuning process. We up-sampled women in our classification training model to avoid biases in the model training process and performed multiple rounds of cross-validation without resampling to reduce variability and optimize model performance. We also have no reason to believe the word count of the analyzed public summaries (n = 1959) was too low (average of 384 words) to detect gender differences in writing, because machine-learning techniques have been able to detect gender differences in writing using emails with a mean word count of 116 and a training sample size of n = 1000 (Cheng et al. 2011).
Nevertheless, gendered language differences may still exist within STEM fields in other sections of written grant proposals, in funded versus unfunded grant proposals, in informal writing (e.g., email), at earlier career stages (e.g., graduate school or post-doctoral fellowships), or when evaluating aspects other than word use, such as sentence structure and content. Thus, language use may still affect funding success beyond the scope of our analyses.
In particular, the public summaries that we analysed may have a more formulaic style, reducing gender differences in language use. A linguistic analysis of full grant proposals or of the section "Most Important Contributions", which describes past research accomplishments, might reveal more differences than are apparent in the public summaries (e.g., in personal pronoun use, emotional tone, clout), although neither is publicly available. Additionally, as unfunded grants are not included in NSERC's public database, we could not analyze whether word use differed by gender and funding outcome (funded or unfunded) and could not determine the impact of a grant rejection on the proportion of men and women who re-apply. Future studies incorporating different types of writing, different academic career stages, and different success outcomes (e.g., funded vs. unfunded) would provide a more complete picture of the interaction between gender, career stage, and language use on research funding.
Conclusions
The current underrepresentation of women in senior scientific positions will not be solved without proactive policies (Holman et al. 2018; Grogan 2019). Pursuing the potential factors driving biases (e.g., explicit, implicit, structural) that diminish evaluations of women's scientific work is necessary to achieve equity. We identified a significant gender difference in success rates among ECRs using aggregated data provided directly by NSERC, with women remaining unfunded by a factor of 1.225 more than men (40.4% vs. 33.0% rejection rate, respectively). This early-career gender disparity is a potentially important obstacle impeding the research progress of female scientists and should be investigated.
We also examined whether language choice in scientific writing varied in predictable ways according to gender, hypothesizing that male and female academics may differ in aspects of their word use. Our predictive modelling framework found little evidence of distinct writing styles, tones, or word use separating male and female academics in the 2016 funded DG cohort. Our classification model did identify a minor difference in conjunction use as the language variable that best distinguished men's and women's writing, with women having a slightly higher conjunction score on average (6.42 vs. 6.00).
Our study is, however, correlative, and we cannot determine the causes of the detected gender differences in award success and award amount. For example, we cannot distinguish implicit or explicit biases during the review process from differences in measures of scientific productivity. Further research that identifies and draws direct links between funding barriers and funding outcomes will strengthen equity, diversity, and inclusivity initiatives, thereby reducing obstacles facing the progress of underrepresented groups as they proceed through tenure-track career stages.
Fig. 1. The percentage of grants that remained unfunded was significantly lower for early-career male applicants (see Table 1 for selection committees and count data). Asterisks indicate a significant difference in the proportion unfunded between male and female applicants (χ2 test: *, p < 0.05; ***, p < 0.001; ****, p < 0.0001). The Natural Sciences and Engineering Research Council's (NSERC) definition of early-career researcher has changed over this time period but has consistently required applicants to be within the first few years of their appointments. Some applicants may have applied and been evaluated in multiple years (these cases could not be distinguished in the summary data provided by NSERC). Gender was self-identified to NSERC; 19.7% of the applicants chose not to specify their gender (data not analyzed).
Fig. 2. Principal component (PC) ordination plot of the first two axes with ellipses around 95% confidence levels. PC1 explains 11.2% of the variation in language variables, while PC2 explains 6.9%. The next five PC axes explained between 5.5% and 3.5% of the variation in language variables.
Table 1. Number of applications and awards in the NSERC (2016) Discovery Grant cycle.
Table 2. NSERC selection committee codes and aggregated count data on early-career researcher applicants for Discovery Grant competitions in 2012–2018.
Table 3. Summary statistics from linear mixed-effects models investigating the effects of gender and career stage on award value. Note: Estimates for gender are reported as the funding level for male awardees minus that for female awardees. Estimates for career stage are reported as the increase in funding level with each additional year since an applicant's first award (see Methods for caveats).
HDAC1 and HDAC2 integrate checkpoint kinase phosphorylation and cell fate through the phosphatase-2A subunit PR130
Checkpoint kinases sense replicative stress to prevent DNA damage. Here we show that the histone deacetylases HDAC1/HDAC2 sustain the phosphorylation of the checkpoint kinases ATM, CHK1 and CHK2, the activity of the cell cycle gatekeeper kinases WEE1 and CDK1, and the induction of the tumour suppressor p53 in response to stalled DNA replication. Consequently, HDAC inhibition upon replicative stress promotes mitotic catastrophe. Mechanistically, HDAC1 and HDAC2 suppress the expression of PPP2R3A/PR130, a regulatory subunit of the trimeric serine/threonine phosphatase 2A (PP2A). Genetic elimination of PR130 reveals that PR130 promotes dephosphorylation of ATM by PP2A. Moreover, the ablation of PR130 slows G1/S phase transition and increases the levels of phosphorylated CHK1, replication protein A foci and DNA damage upon replicative stress. Accordingly, stressed PR130 null cells are very susceptible to HDAC inhibition, which abrogates the S phase checkpoint, induces apoptosis and reduces the homologous recombination protein RAD51. Thus, PR130 controls cell fate decisions upon replicative stress.
Reviewer #2 (Remarks to the Author)
In an intriguing paper, Goder et al. report that expression of the PP2A phosphatase subunit PR130 is regulated by the histone deacetylases HDAC1/HDAC2. Moreover, PR130 controls the phosphorylation status of ATM, CHK1 and CHK2, but not ATR, in response to replicative stress. As a result, the HDAC1 inhibitor MS-275 mediates dephosphorylation of ATM/CHK1 and, when combined with replicative stress such as that induced by RRM2 inhibition with hydroxyurea, can result in mitotic catastrophe followed by apoptosis.
Overall comments: The authors uncover for the first time that the PR130 subunit of PP2A is required for the response to replicative stress-induced DNA damage. The paper is characterized by a degree of novelty. The authors convincingly demonstrate, using RNA silencing, genetic ablation and pharmacological blockade, that the HDAC1/PR130 circuit adds another layer of regulatory control to the cellular response to replicative stress. By regulating the phosphorylation of the ATR target CHK1 and the ATM/CHK2 signaling at later times after the initial induction of replicative stress with hydroxyurea (i.e., 6 h versus 24 h), the histone deacetylase ensures a backup mechanism for maintaining genome integrity in response to replicative stress.
Reviewer #3 (Remarks to the Author)
Goder et al. investigated the role of histone deacetylases (HDACs) 1-3 in DNA replication checkpoint regulation, mostly in the epithelial colon cancer cell line HCT116. Long-term replication stress concomitant with HDAC inhibition decreased phosphorylation of the ATM and CHK1/2 kinases at key regulatory sites, and this decreased checkpoint kinase phosphorylation correlated with checkpoint slippage and cell death. The decrease in phosphorylation pointed to the involvement of a phosphatase, and indeed, by a transcriptome profiling approach, the authors identified a regulatory subunit of protein phosphatase 2A (PP2A), PR130, as a potential candidate responsible for the checkpoint kinase dephosphorylation. Consistent with their hypothesis, deletion of PR130 or siRNA-driven knockdown of PR130 abrogated the decrease in ATM phosphorylation upon replication stress and HDAC inhibition. CHK1, on the other hand, still got dephosphorylated to the same extent as in wild-type PR130 cells, suggesting the involvement of another phosphatase. However, the application of replication stress alone caused a significant increase of CHK1 phosphorylation in cells lacking PR130, still suggesting a role for PR130 in this process. Consistent with the enhanced CHK1 phosphorylation in the ΔPR130 cells, these cells arrested earlier than wild-type cells in G1/S phase. Upon replication stress these cells showed an increased number of RPA foci compared to wild-type cells, and additional HDAC inhibition led to a decrease in RPA foci (indicating ssDNA) in both cell types. γH2AX foci indicating double-strand breaks were also increased in ΔPR130 cells upon replication stress, but additional HDAC inhibition did not further increase their level. Interestingly, more ΔPR130 than wild-type cells underwent apoptosis (from the graph in Fig. 7a it is difficult to read/deduce the % of cells in subG1) when exposed to replication stress and HDAC inhibition. The authors argued that the increased cell death is due to the attenuated CHK1 phosphorylation (ATM phosphorylation was unchanged) and the attenuated RAD51 levels. In principle these are interesting findings, in particular for the field of cancer therapy. A major weakness of this study is the lack of data on the mechanism underlying the observed phenomenon (PR130 regulating ATM and CHK1 phosphorylation). Moreover, induction of a 24 h replication stress concomitant with a 24 h inhibition of HDACs 1-3, which are major epigenetic regulators of probably thousands of genes, is a very artificial condition. What is the physiological relevance of results made under such experimental conditions? What can we learn from these data about the "normal" replication checkpoint regulation by HDACs and PP2A? For example, the authors conclude from their data that PR130-PP2A "promotes dephosphorylation" of ATM and CHK1. The authors should test directly if the PR130-PP2A holoenzyme is able to dephosphorylate ATM (at S1981, but also at S367 and S1893) and CHK1 (S317 and S345) and compare it to other di- or trimeric PP2A holoenzymes. Moreover, in Fig. 6F an interaction is shown between ATM and PR130. Several questions arise from this result and have to be answered: Is the fraction of ATM that is found in complex with PR130 dephosphorylated at the aforementioned sites? Is ATM also co-precipitating the other holoenzyme components (catalytic C and structural A subunits)? If yes, is the complex catalytically active when associated with ATM?
Is ATM only associated with PR130 holoenzymes upon replication stress and HDAC inhibition, or is it already associated even in unstressed cells? If already bound, what is the signal that activates it? The same analysis should be performed with CHK1 and PP2A. Minor points: Not only a subset but all Western blot data need to be quantified (from at least 3 independent experiments); see the comments to Fig. 5b and 5e below. The authors should provide percentages for the cell cycle distribution graphs in Fig. 2a and 7a, should refer in the text to the exact numbers, and should phrase carefully. For example, Fig. 7c shows a decrease of RPA foci upon HDAC inhibition under replication stress; in ΔPR130 cells there is a decrease from ~27 RPA foci/cell to ~22, and the authors wrote in line 230: "The MS-275-induced loss of RPA foci …". Line 178: the authors wrote that PP2A inhibition by cantharidin rescued the inhibition of checkpoint kinase phosphorylation. However, in Fig. 5b there is still a substantial decrease in pATM and (even more) in pCHK1 phosphorylation levels visible despite the inhibition of PP2A. Therefore, as asked for above, all Western blot data need to be quantified, and secondly, the authors need to phrase their conclusions more carefully. Line 192 + Fig. 5e: once again, the authors claim based on a single Western blot experiment that knockdown of HDAC1+2 would lead to increased expression of PR130. However, this is not evident from the single experiment shown in Fig. 5e. This experiment needs to be repeated and the signals need to be quantified.
We would like to thank all reviewers for the constructive and thoughtful comments on our work and for recognizing the novelty and impact of our results. We addressed all issues experimentally and we strongly believe that our new data significantly improve the quality of our article. Below we briefly sum up the key novel findings of our revised manuscript. We rewrote the manuscript to make it easier to read and we provide point-by-point responses to the reviewers' comments and a list summarizing new and revised figures.
The most important new data that we present are: • The histone deacetylases HDAC1 and HDAC2 bind to the PR130 promoter in vivo.
• HDAC inhibition with MS-275 specifically diminishes the phosphorylation of ATM during replicative stress.
• MS-275 triggers the interaction between PR130 and phosphorylated ATM.
• PR130 is acetylated and MS-275 increases its acetylation in vivo.
• MS-275 increases the mRNA and protein expression of PR130, but other PP2A B subunits are not affected at the mRNA or protein levels.
• Immunoprecipitates of PR130 from cells treated with MS-275 contain the catalytic and structural PP2A subunits and such complexes are active against the phosphorylated ATM peptide sequence in vitro.
• There is unscheduled origin firing and cells undergo a slippage from S phase into mitotic catastrophe when hydroxyurea and MS-275 are applied together. These processes can be explained by a reduction of the cell cycle regulatory kinase WEE1 and its negative impact on the cyclin-dependent kinase CDK1. In addition, MS-275 attenuates the phosphorylation and activity of the tumor suppressor p53, which halts cell cycle progression upon replicative stress.
• We can now distinguish between two distinct novel molecular mechanisms that regulate the dephosphorylation of ATM and CHK1. We reveal that PR130 directly controls the dephosphorylation of ATM in conjunction with the PP2A phosphatase, and we identify a PR130-regulated, cell cycle-dependent control of CHK1 phosphorylation. In comparison with PR130 wild-type cells, PR130-negative cells progress much more slowly from G1 phase into S phase upon replicative stress.
Nonetheless, these lower numbers of cycling cells show a higher phosphorylation of CHK1, reduced WEE1/pCDK1 signaling, and increased replicative stress with higher amounts of RPA foci, phosphorylated H2AX, and p53. When we applied increasing doses of hydroxyurea, we could verify that an earlier arrest in S phase leads to a higher phosphorylation of CHK1. HDAC inhibition with MS-275 interferes with the pronounced arrest of PR130-null cells and causes dephosphorylation of CHK1 and apoptosis. Thus, the HDAC1/HDAC2-dependent regulation of PR130 is a newly identified regulator of checkpoint kinase phosphorylation as well as a novel upstream regulator of cell cycle progression, replicative stress, DNA damage, and apoptosis.
New and revised Figures:
1. Figure 2f: Regulation of WEE1 and CDK1 by hydroxyurea and MS-275 (NEW)
2. Figure 2g: Regulation of Cyclin B by hydroxyurea and MS-275 (NEW)
3. Figure 2h: Regulation of p53 and its phosphorylation by hydroxyurea and MS-275 (NEW)
4. Figure 2i: Regulation of p53 target genes by hydroxyurea and MS-275 (NEW)
5. Figure 5g: Data from ChIP analyses verifying the interaction of HDAC1 and HDAC2 with the PR130 promoter (NEW)
6. Figure 6d: Interaction between phosphorylated ATM and PR130 (extended figure with NEW data)
7. Figure 6e: Acetylation of PR130 (NEW)
8. Figure 6f: Dephosphorylation of a phosphorylated ATM peptide sequence by PR130 (NEW)
9. Figure 7d: Regulation of WEE1, CDK1, and pHDAC2 by hydroxyurea and MS-275 in HCT116 ΔgRNA and HCT116 ΔPR130 cells (NEW)
10. Figure 7e: Dose-dependent control of cell cycle progression by hydroxyurea (NEW)
11. Figure 7f: Dose-dependent control of CHK1 phosphorylation by hydroxyurea (NEW)
12. Supplementary Figure S4b: Expression of HDAC3 in HCT116 ΔgRNA and HCT116 ΔPR130 cells
Supplementary Figure S4c: Expression of B56β and PR48 proteins in the presence of hydroxyurea and MS-275 (NEW)
16. Supplementary Figure S4d: Interaction of PR130 with PP2A-A and -C (NEW)
17. Supplementary Figure S4e

Response: We thank the reviewer for the very positive and encouraging assessment of our work, the recognition of the novelty of our data, as well as for the critical evaluation and suggestions.
We addressed all issues raised as follows.
I would like to point out the following:

In addition, we determined the levels of HDAC3 and found that an elimination of PR130 also does not affect HDAC3 (Supplementary Fig. S4b).
Prompted by the referee's comment, we additionally assessed the phosphorylation of HDAC1 and HDAC2. While we could not detect pHDAC1, we observed that MS-275 led to an accumulation of pHDAC2. However, PR130 does not affect this increase in pHDAC2 (Fig. 7d).
These data are consistent with an equal induction of histone hyperacetylation in cells with or without PR130 in response to MS-275 (Supplementary Fig. S4b). Overall, we conclude that PR130 does not affect the expression and activities of HDAC1 and HDAC2.

Our data show that in response to replicative stress, PR130-null cells progress more slowly from G1 to S phase than corresponding PR130-positive cells. However, the lower numbers of cycling PR130-negative cells have even more replicative stress. They carry higher levels of RPA foci, pH2AX, pCHK1, and activated p53, together with a lower activity of WEE1 against CDK1 (Figures 7a-d and 8a-c). We therefore conclude that PR130 is a novel regulator of this early section of S phase, which is particularly prone to replicative stress.
MS-275 reduces both pCHK1 and WEE1. Accordingly, cell cycle control is lost, PR130-null cells that are arrested in early S phase disappear, and the numbers of cells in G2/M phase and apoptosis increase (Figure 8e-f).
We additionally show that increasing doses of hydroxyurea applied to HCT116 cells lead to an earlier arrest in S phase and to a dose-dependent increase in pCHK1 that does not require a corresponding increase in WEE1 activity (Figure 7e-f). These data suggest that an earlier arrest in S phase, evoked by a higher dose of hydroxyurea or a loss of PR130, leads to a more pronounced phosphorylation of CHK1.
Last but not least, our finding that the inhibition of PP2A with okadaic acid and cantharidin can rescue the hydroxyurea-induced phosphorylation of ATM, but not of CHK1, from dephosphorylation in the presence of MS-275 illustrates PP2A-dependent and -independent pathways (Figure 5b and Supplementary Fig. S3a). These data are also coherent with our observation that it is specifically the ATR kinase that catalyzes the phosphorylation of CHK1 in response to hydroxyurea (Fig. 4d, f) and that a rescue of ATM phosphorylation is therefore not able to increase the phosphorylation of CHK1 (Fig. 5b).
Regarding the other PP2A B subunits, we analyzed our microarray data for their expression.
We found that MS-275 increased the mRNA levels of PR130/PPP2R3A and of B56β/PPP2R5B (this factor was mentioned by the reviewer in her/his last comment), but no induction of other B subunits by MS-275 (Supplementary Table 1). Therefore, we analyzed the protein expression of this factor. As a control, we analyzed the levels of PR48/PPP2R3B, which was not increased in the array. We found no significant alteration of their expression in our CRISPR-Cas9 cells with and without PR130 (Supplementary Fig. S4c).

The unscheduled cell cycle progression can be explained by a reduction of WEE1 and its negative effect on the cell cycle-promoting CDK1 (Figures 2f, 7d and Supplementary Fig. S4e). In addition, we demonstrate that MS-275 attenuates the phosphorylation and activity of the tumor suppressor p53, which halts cell cycle progression for DNA repair upon replicative stress (Figures 2h and 6b). A subsequent modulation of two p53 target genes, the cyclin-dependent kinase inhibitor p21 and the DNA repair-relevant phosphatase WIP1, can also explain unscheduled cell cycle progression and DNA damage in cells treated with hydroxyurea and MS-275 (Figures 2i and 3e-f).

Indeed, this needs clarification. We measured and normalized the band intensities and found no significant attenuation of hydroxyurea-induced pATM after a 6 h incubation period with MS-275 (Fig. 1c).

Why, in Figure 1e, is there a decrease in pCHK1 in the control as the time after UV prolongs (while the opposite is seen with pCHK2, pATM, and pATR)?

These processes are typically seen in UV-treated cells (reviewed in Chen and Sanchez, DNA Repair 2004). UV was given as a pulse, and pCHK1 stalls cell cycle progression. The activation of the other checkpoint kinases indicates DNA repair processes by nucleotide excision repair and homologous recombination.
3. The authors describe that MS-275 led to a significant accumulation of cells in G2 phase (Fig. 2a-b), but it looks like there is an accumulation in G1 phase (70%) and not in G2 (25%).
The referee is right; this is the case for MS-275 and we corrected this in our revised manuscript. In combination with hydroxyurea, MS-275 promotes entry into G2/M phase and mitotic catastrophe (Figure 2a-e). A combined knockdown of HDAC1 and HDAC2 led to an increased expression of PR130. The additional elimination of HDAC3 had no effect on PR130 (Fig. 5e and Supplementary Fig. S3b-c). A combined reduction of HDAC1 and HDAC2 impaired the HU-induced phosphorylation of ATM and CHK1 (Fig. 5f and Supplementary Fig. S3d).
The authors demonstrate the universality of the mechanism by using three different cell lines (the colon cancer cell lines HCT116 and RKO as well as murine embryonic fibroblasts) and three different replicative stress-inducing agents: hydroxyurea (HU), ultraviolet light (UV), and 5-fluorouracil (5-FU).
We thank the reviewer for the very positive and encouraging assessment of our work, for recognizing its novelty and impact, as well as for the critical and constructive evaluation and suggestions. We addressed all issues raised as follows.
Major points:

1. There are no ChIP experiments to support recruitment of HDAC1/2 to the PR130 promoter and inhibition of this recruitment by MS-275.
We agree that this needs to be shown experimentally. We carried out ChIP experiments to analyze whether HDAC1 and HDAC2 are found at the PR130 promoter; acetylated histone H3 was used as a control to test efficient HDAC inhibition. Indeed, HDAC1 and HDAC2 can be physically detected at this promoter (Figure 5g). The reviewer also asked for an inhibition of HDAC recruitment by MS-275. We see a reduced localization of inhibited HDAC1 at the PR130 promoter, and the binding of HDAC2 is less affected by MS-275. We assume that this is due to an association of HDAC1 and HDAC2 with different transcription factors, as both cannot bind DNA directly. Moreover, we report that MS-275 increases the phosphorylation of HDAC2 (Fig. 7d), which has been reported to promote its association with chromatin.

Prof. Penny Jeggo's group has shown that ATR phosphorylates ATM in cancer cells and that this mechanism is independent of double-strand DNA breaks and the MRN complex (Stiff et al., EMBO J. 2006). This mechanism differs from ATM activation by direct dsDNA damage.
Others similarly found that ATM activation differs between replicative stress and laser-generated direct double-strand breaks (Duquette et al., PLOS Genetics 2012).
Nonetheless, we agree with the reviewer that it is interesting to determine whether MS-275 affects ATM phosphorylation in cells with dsDNA breaks that are induced directly by gamma-irradiation. Therefore, we irradiated HCT116 cells and analyzed pATM by Western blot. We noted that a pre-incubation with MS-275 that efficiently induced PR130 did not affect pATM when it is induced directly by gamma-irradiation (Supplementary Fig. S4e). We assume that this is due to an autophosphorylation of ATM and a different structural composition of the ATM complex within the MRN complex that is formed upon direct DNA damage. These data reveal that the HDAC1/HDAC2-dependent control of PR130 has a specific impact on replicative stress signaling.
When we analyzed the effects of MS-275 on the gamma-irradiation-induced pATM/CHK2/WEE1-mediated G2/M checkpoint, we found that MS-275 reduced the levels of WEE1 significantly. This reduction of WEE1 by MS-275 is associated with reduced pCHK1 in gamma-irradiated cells, and pCHK2 is controlled in a similar fashion (Supplementary Fig. S4e).
Hence, while HDAC inhibitors specifically evoke a loss of ATM phosphorylation during replicative stress, they cause a loss of CHK1 phosphorylation in response to replicative stress and direct dsDNA damage. The loss of the pATM/CHK2/WEE1 checkpoint and the associated loss of pCHK1 are coherent with our novel model in which the stress-dependent phosphorylation of CHK1 depends on cell cycle progression (please see response to Point 4).
As HDAC inhibitors regulate not only gene expression but also protein acetylation, the authors have to demonstrate by co-immunoprecipitation experiments whether there is a direct interaction between PR130 and CHK1 or CHK2 in response to HU.
We focused these analyses on CHK1, as we see no protective effect of CHK2 on the survival of hydroxyurea-treated HCT116 cells (Fig. 4i). We immunoprecipitated pCHK1 (n = 3) and total CHK1 (n = 2), and we could not detect any interaction of them with PR130 (data not shown). Therefore, we had to reconsider our conclusions and change our model.
Please see our answer to the next point, in which we explain our novel data and the new conclusions regarding a cell cycle-dependent, previously unrecognized control of CHK1 in response to hydroxyurea. These data and conclusions are entirely in agreement with the lack of interaction between PR130 and pCHK1.
The paper is confusing to read at times; in particular, the role of pCHK1 in the time-delayed response is not convincing! The authors need to add more to the discussion explaining why the benzamide MS-275, which specifically inhibits HDAC1, -2, and -3, specifically inhibits CHK1 phosphorylation without affecting its upstream kinase ATR.

ATM, a target of ATR during replicative stress, remains phosphorylated in PR130-null cells treated with hydroxyurea and MS-275 (Fig. 6b). This finding indicates that ATR activity is still intact in PR130-null cells that are incubated with MS-275. Accordingly, we see that ATR is not targeted by PR130 and MS-275 (Figs. 1a and 7a).
Furthermore, we provide new details on the control of CHK1 phosphorylation by PR130. In our original manuscript, we speculated that the dephosphorylation of CHK1 in response to MS-275 relies on PP2A. Now we present a refined and more advanced model in which we are able to distinguish two distinct novel molecular mechanisms: a direct dephosphorylation of pATM by PR130-PP2A and a cell cycle-dependent control of CHK1 phosphorylation that is modulated by PR130.
Our data show that in response to replicative stress, PR130-null cells progress more slowly from G1 to S phase than corresponding PR130-positive cells. However, the lower numbers of PR130-negative cells have even more replicative stress. They carry higher levels of RPA foci, pH2AX, pCHK1, and activated p53, together with a lower activity of WEE1 against CDK1 (Figures 7a-d and 8a-c). We therefore conclude that PR130 is a novel regulator of the duration of this early, particularly replicative stress-prone section of S phase.
MS-275 reduces both pCHK1 and WEE1. Accordingly, cell cycle control is lost, PR130-null cells that are arrested in early S phase disappear, and the numbers of cells in G2/M phase and apoptosis increase (Figure 8e-f). We additionally show that increasing doses of hydroxyurea applied to HCT116 cells lead to an earlier arrest in S phase and to a dose-dependent increase in pCHK1 that does not require a corresponding increase in WEE1 activity (Figure 7e-f). These data suggest that an earlier arrest in S phase, evoked by a higher dose of hydroxyurea or a loss of PR130, leads to a more pronounced phosphorylation of CHK1. Also, our finding that the inhibition of PP2A with okadaic acid and cantharidin can rescue the hydroxyurea-induced phosphorylation of ATM, but not pCHK1, from dephosphorylation in the presence of MS-275 illustrates PP2A-dependent and -independent pathways (Figure 5b and Supplementary Fig. S3a).
The sustained activation of ATR in the presence of hydroxyurea plus MS-275 is verified by a phosphorylation of its target ATM in hydroxyurea plus MS-275 treated cells when its dephosphorylation is prevented by an elimination of PR130 (Fig. 6b-c). This ongoing activation of ATR can be explained by the ongoing replication in cells incubated with hydroxyurea and MS-275 (Figs. 2d-g and 3c-f). The cells continue to replicate DNA, and this unscheduled origin firing, with the resulting delay in replication fork progression, promotes the phosphorylation of ATR (and allows the phosphorylation of ATM by ATR if pATM is not dephosphorylated by the PR130-PP2A complex; Fig. 6). These findings agree with our observation that ATR catalyzes the phosphorylation of CHK1 in response to hydroxyurea (Fig. 4d, f) and that a rescue of ATM phosphorylation is therefore not able to increase the phosphorylation of CHK1 (Fig. 5b).
We thank the reviewer for making this important point and for sharing expertise. The reviewer's suggestion to analyze WEE1 has clearly advanced our work. The strong reduction of WEE1 by MS-275 (Fig. 2f and Supplementary Fig. S4e) provides an explanation why the S phase arrest cannot be maintained in cells treated with hydroxyurea and MS-275 (Fig. 2a-g). Furthermore, we offer data that we collected with the neutral comet assay, which detects dsDNA breaks. We demonstrate that ATM becomes phosphorylated in hydroxyurea-treated cells before the occurrence of such DNA lesions (Supplementary Fig. S2b). These data are consistent with the phosphorylation of ATM by ATR independent of dsDNA breaks (Stiff et al., EMBO J. 2006). In addition, a pre-incubation with MS-275 that suffices to induce PR130 can also suppress short-term hydroxyurea-induced checkpoint kinase signaling (Fig. 5a).

Figs. 4b and 4d clearly show that HU-induced pCHK1 phosphorylation is independent of ATM. Furthermore, ATM inhibition increases HU-induced γH2AX, indicating DSBs.
It has recently been shown that in response to hydroxyurea, ATR prevents the conversion of stress-induced single-strand DNA into dsDNA breaks, which are characterized by an increase of pH2AX (Toledo et al., Cell 2012). Our data confirm these published results (Fig. 4f). The mentioned article did not clarify whether the ATR targets ATM and CHK1 are relevant to prevent DNA damage. Therefore, we thank the referee for pointing out that our new data suggest that ATM has a previously unknown protective effect on replication forks that are halted in the presence of hydroxyurea (Fig. 4b).
In addition, we demonstrate such a function for CHK1 (Fig. 4h) and rule it out for CHK2 (Fig. 4i).
We were also puzzled by the observation that an inhibition of ATM augments the phosphorylation of H2AX, while a deletion of CHK2 does not affect pH2AX initially and then attenuates the levels of pH2AX (Fig. 4b, i). Due to these difficulties, we tried to alternatively address the question of the reviewer, and we are happy to offer another set of experimental data. We treated HCT116 cells for 6 and 24 h with hydroxyurea and performed a neutral comet assay to measure dsDNA breaks. After a 6 h exposure to hydroxyurea, there are no comet tails in HCT116 cells. We only find them when we expose the cells for 24 h and with a positive control (Supplementary Fig. S2b). Nevertheless, we see activation of ATM after 6 h, and these findings are in perfect agreement with the phosphorylation of ATM by ATR in response to hydroxyurea (Stiff et al., EMBO J. 2006). Furthermore, we would like to mention that the increase in pATM after ATR inhibition in hydroxyurea-treated cells (Fig. 4f) is consistent with the well-established ATM activation in cells with dsDNA breaks.
Furthermore, the reviewer pointed out that RNAi against ATR and its pharmacological inhibition illustrate an ATR-dependent phosphorylation of CHK1. These data are consistent with the literature (e.g., reviewed in Iliakis et al., Oncogene 2003). Concerning Fig. 4d and 4f, we are grateful to the reviewer for mentioning that we verify CHK1 as a bona fide downstream target of ATR in hydroxyurea-treated cells. This finding entirely agrees with the notion that rescued pATM cannot promote the phosphorylation of CHK1 (Fig. 5b).
We discuss the above-mentioned new data in the revised manuscript and phrase our conclusions more carefully. MS-275 appears to decrease CHK1 phosphorylation in both PR130-negative and PR130-positive cells (Fig. 6c). We are now able to fully separate these two events, as stated in our response to Point 4, and we show and discuss these new data in our revised manuscript (Figs. 5 and 6b-c). ATM supports cell survival upon replicative stress (Fig. 4a-c). To corroborate these data and to follow the reviewer's suggestion, we immunoprecipitated PR130 complexes from cells treated with MS-275 and tested their activity against a phosphorylated peptide around S1981 of ATM. We present evidence that the PR130 complex is active against pATM (Fig. 6f). Moreover, this complex contains both the catalytic and the structural subunits of the PP2A complex (Supplementary Fig. S4d).
We would also be interested to see if hydroxyurea and MS-275 regulate the phosphorylation of ATM at S367 and S1893. Phosphorylation of these sites is induced by direct DNA damage.

We believe that our conditions of replicative stress do not alter these phosphorylation sites, which are induced by direct DNA damage. While there was no reason to analyze this further, we cite the more recent work by Kozlov and colleagues in light of the complexity of ATM phosphorylation sites.
Concerning pCHK1, Reviewer 3 believes that the higher phosphorylation of CHK1 in hydroxyurea-treated PR130-null cells relies on a direct interaction between PR130 and CHK1. We also believed this, but we found no interaction between CHK1 and PR130. Our new data suggest that an earlier arrest of cells at the G1/S transition causes this effect. This finding is novel and shows that CHK1 is not only an upstream regulator of cell cycle progression but is also subject to an alternative, PR130-mediated cell cycle regulation.
Now we present a refined and more advanced model. We are able to distinguish two distinct novel molecular mechanisms. We demonstrate a direct dephosphorylation of pATM by PR130-PP2A and a cell cycle-dependent control of CHK1 phosphorylation that is modulated by PR130. Our data show that in response to replicative stress, PR130-null cells progress more slowly from G1 to S phase than corresponding PR130-positive cells. However, the lower numbers of cycling PR130-negative cells have even more replicative stress. They carry higher levels of RPA foci, pH2AX, pCHK1, and activated p53, together with a lower activity of WEE1 against CDK1 (Figures 7a-d and 8a-c). We therefore conclude that PR130 is a novel regulator of this early section of S phase, which is particularly prone to replicative stress.
MS-275 reduces both pCHK1 and WEE1. Accordingly, cell cycle control is lost, PR130-null cells that are arrested in early S phase disappear, and the numbers of cells in G2/M phase and apoptosis increase (Figure 8e-f).
We additionally show that increasing doses of hydroxyurea applied to HCT116 cells lead to an earlier arrest in S phase and to a dose-dependent increase in pCHK1 that does not require a corresponding increase in WEE1 activity (Figure 7e-f). These data suggest that an earlier arrest in S phase, evoked by a higher dose of hydroxyurea or a loss of PR130, leads to a more pronounced phosphorylation of CHK1.
Last but not least, our finding that the inhibition of PP2A with okadaic acid and cantharidin can rescue the hydroxyurea-induced phosphorylation of ATM, but not of CHK1, from dephosphorylation in the presence of MS-275 illustrates PP2A-dependent and -independent pathways (Figure 5b and Supplementary Fig. S3a). These data are also coherent with our observation that it is specifically the ATR kinase that catalyzes the phosphorylation of CHK1 in response to hydroxyurea (Fig. 4d, f) and that a rescue of ATM phosphorylation is therefore not able to increase the phosphorylation of CHK1 (Fig. 5b).
Regarding the other 17 B subunits, we analyzed our microarray data for their expression. We found that MS-275 increased the mRNA levels of PR130/PPP2R3A and of B56β/PPP2R5B (this factor was mentioned by the reviewer in her/his last comment), but no induction of other B subunits (Supplementary Table 1). Therefore, we tested the expression of this subunit at the protein level. As a control, we also analyzed the levels of PR48/PPP2R3B, which was not increased in the array. We found that B56β and PR48 were not induced at the protein level by MS-275 or by hydroxyurea plus MS-275 (Supplementary Fig. S4c). Thus, we conclude that the increase in PR130 is a restricted and specific effect of MS-275.
Moreover, in Fig. 6F an interaction is shown between ATM and PR130. Several questions arise from this result and have to be answered:
Is the fraction of ATM that is found in complex with PR130 dephosphorylated at the aforementioned sites?
We have shown in our initial submission that ATM phosphorylated at S1981 can interact with PR130 (Fig. 6d; 6F in the initial submission), and we can now demonstrate that HDAC activity prevents the interaction between PR130 and pATM (Fig. 6d). We had to carry out these experiments in the presence of okadaic acid, as otherwise there would be hardly any pATM, which is the target of the PR130-PP2A complex. If hydroxyurea is given alone with okadaic acid, we cannot detect a complex between pATM and PR130 (Fig. 6d).
This finding appears reasonable, because hydroxyurea can strongly promote the phosphorylation of ATM, and a pre-existing binding of PR130-PP2A would prevent this posttranslational modification. We can further detect that PR130 is acetylated in cells and that MS-275 increases both its expression and, even more pronouncedly, its acetylation (Fig. 6e).
Our data showing that the PR130 complex from HCT116 cells is able to dephosphorylate the ATM phosphorylation site in a short peptide sequence demonstrate that such immunoprecipitates have dephosphorylating activity against pATM (Fig. 6f).
These data suggest that an increased expression and acetylation of PR130 in cells treated with the HDACi MS-275 promote the interaction of pATM with PR130-PP2A and the ensuing dephosphorylation of pATM by the catalytic subunit of the PP2A holoenzyme.
Is ATM also co-precipitating the other holoenzyme components (catalytic C and structural A subunit)? If yes, is the complex catalytically active when associated with ATM?
The question of whether the ATM-PP2A complex is catalytically active cannot be resolved due to a technical problem.
Is ATM only associated with PR130-holoenzymes upon replication stress and HDAC inhibition or is it already associated even in unstressed cells? If already bound what is the signal to get it activated?
We did not detect an interaction between PR130 and pATM in resting and hydroxyurea-treated cells. We now show that the interaction between PR130 and pATM is prevented by HDAC activity (Fig. 6d). Please see our response to Point 2 for a more detailed reply to this relevant question. In brief, our data suggest that an increased expression and acetylation of PR130 promote its interaction with pATM.

The same analysis should be performed with CHK1 and PP2A.
According to our new data, CHK1 is not a direct target of PR130-PP2A. We explain this finding above in response to Point 1.
Minor points:
Not only a subset but all Western blot data need to be quantified (from at least 3 independent experiments); see comments below on Figs. 5b and 5e.
We used the Odyssey System for most of our Western blots. This system allows the detection of antibody-based signals in a linear range, avoids problems with exponential signals and overexposure, and allows a direct quantification of the signals.

The authors should provide % for the cell cycle distribution graphs in Fig. 2a and 7a and should refer in the text to the exact numbers and phrase carefully. For example, Fig. 7c shows a decrease of RPA foci upon HDAC inhibition under replication stress; in ΔPR130 cells there is a decrease from ~27 RPA foci/cell to ~22.

We provide this information in the text and we corrected the mistake as requested.
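To make the kind of replicate quantification requested in this exchange concrete, the sketch below aggregates densitometric signals from three hypothetical independent experiments. The intensity values, condition names, and the use of NumPy/SciPy are our illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Illustrative sketch of replicate Western blot quantification; all band
# intensities below are made-up placeholder values, not measured data.
import numpy as np
from scipy import stats

# Rows: independent experiments (n = 3); columns: conditions (HU, HU+MS-275).
target  = np.array([[1200.0,  450.0], [1100.0, 500.0], [1300.0,  420.0]])
loading = np.array([[ 980.0, 1010.0], [1020.0, 990.0], [1000.0, 1005.0]])

normalized = target / loading                # correct each lane for loading
relative = normalized / normalized[:, :1]    # express relative to the HU lane

mean = relative.mean(axis=0)
sd = relative.std(axis=0, ddof=1)
# Paired t-test across experiments: does HU+MS-275 differ from HU?
t_stat, p_value = stats.ttest_rel(relative[:, 0], relative[:, 1])
print(f"HU+MS-275: {mean[1]:.2f} +/- {sd[1]:.2f} (paired t-test p = {p_value:.3f})")
```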
Line 178: the authors wrote that PP2A inhibition by cantharidin rescued the inhibition of checkpoint kinase phosphorylation. However, in Fig. 5b there is still a substantial decrease in pATM and (even more) in pCHK1 phosphorylation levels visible despite the inhibition of PP2A. Therefore, as asked for above, all Western blot data need to be quantified and, secondly, the authors need to phrase their conclusions more carefully.
In our original manuscript, we speculated that the dephosphorylation of CHK1 in response to MS-275 relies on PP2A. Now we present a refined and more advanced model that we have briefly explained as response to the first issue raised.
Line 192 + Fig. 5e: once again, the authors claim, based on a single Western blot experiment, that knockdown of HDAC1+2 would lead to increased expression of PR130. However, this is not evident from the single experiment shown in Fig. 5e. This experiment needs to be repeated and the signals need to be quantified.

In the revised manuscript, the authors argue that PR130 (PPP2R3A) directly or indirectly regulates the checkpoint kinases ATM and CHK1, respectively, and that HDAC1 and 2 sustain checkpoint kinase phosphorylation by suppressing the expression of PR130. The novel data reveal that HDAC1 and 2 bind to the PPP2R3A promoter and that HDAC1/2 inhibition by MS-275 correlates with increased histone H3 acetylation at the PPP2R3A promoter. HDAC1/2 inhibition leads to increased expression (Fig. 5d; PR130 levels need to be quantified) and acetylation (new Fig. 6e) of PR130 and to an interaction between PR130 and phosphorylated ATM (pATM) (revised Fig. 6d). PP2A holoenzymes consisting of the PR130 subunit and the PP2A scaffold A and catalytic C subunits could be isolated from HDAC1/2-inhibited cells (Fig. S4d), and these holoenzymes dephosphorylated in vitro a phosphorylated peptide surrounding S1981 (new Fig. 6f). From this the authors concluded that HDAC1/2 inhibition triggers the interaction between pATM and PR130-PP2A holoenzymes and directly promotes the dephosphorylation of ATM. This major conclusion is not sufficiently supported by the additional data.

The PR130 knockdown and knockout experiments in HU+MS-275-treated HCT116 cells indicate that PR130 is necessary for the decreased pATM levels, but is it also sufficient? E.g., will ectopic expression of PR130 alone in HU-treated HCT116 cells lower pATM levels?

HDAC1/2 inhibition in HCT116 cells caused a cell cycle arrest in G1 phase, and this correlated with reduced WEE1 and pCDK1 levels (new Fig. 2f) but also with several-fold increased levels of the CDK inhibitor p21 (new Fig. 2i). Neither in the results section nor in the discussion did the authors comment on the increased p21 levels and the possible cell cycle consequences of the elevated p21 levels. The CDK inhibitor p21 is a known target of HDAC1 and 2 (Mol Cell Biol. 2010 Mar;30(5):1171-81), and primary mouse fibroblasts lacking HDAC1 and 2 show a block in G1 phase that is associated with elevated p21 and p57(Kip2) levels (Genes Dev. 2010 Mar 1;24(5):455-69). Interestingly, induction of replicative stress in the MS-275-treated cells caused a reduction of p21 levels (new Fig. 2i). The p21 levels should also be analyzed in the HCT116 cells lacking PR130 under the same conditions. Does PR130 presence/absence affect p21 regulation?

In HU-treated cells lacking PR130, higher levels of phosphorylated CHK1 are present than in PR130 wt cells. However, upon additional MS-275 treatment, pCHK1 levels decreased to a similar extent in both cell types, indicating that this dephosphorylation happens in a PR130-independent manner (Fig. 7a). Based on the "failure" of PP2A inhibitors to rescue pCHK1 in HDAC-inhibited cells (Fig. 5b), the authors concluded that PP2A complexes do not directly regulate CHK1 phosphorylation. However, a closer look at Fig. 5b reveals that upon HU or HU+MS-275 treatment, CHK1 phosphorylation increases 3-4 fold upon PP2A inhibition with 20 µM cantharidin. At 40 µM cantharidin, pCHK1 levels stay high in HU-treated cells, while in HU+MS-275-treated cells they drop to roughly half of the levels at 20 µM, which, however, is still twice as much as in cells without PP2A inhibitor.
In my opinion, these data do not exclude the possibility that pCHK1 is a substrate of PP2A. The authors argue that CHK1 cannot be a direct target of PR130 because they could not detect a stable interaction between CHK1 and PR130 (data not shown). However, many PP2A-substrate interactions occur only transiently during catalysis and thus escape detection by standard methods. As I had suggested in my previous review, the authors should test in vitro if PR130 holoenzymes are able to dephosphorylate CHK1 (S317 and S345) and compare it to other di-/trimeric PP2A holoenzymes. Others have shown that PP2A is able to dephosphorylate CHK1 directly in vivo and in vitro (Mol Cell Biol. 2006 Oct;26(20):7529-38), but these authors did not determine the specific PP2A holoenzyme responsible for CHK1 dephosphorylation. The evidence in the revised manuscript does not allow excluding the possibility of a direct PP2A-CHK1 substrate interaction.
The authors hypothesize, based on lower levels of WEE1 in HU-treated cells lacking PR130 (WEE1 levels in clone #16 are even increased) and the increased levels of pHDAC2 in these cells treated with MS-275, that an earlier S phase arrest could explain the higher pCHK1 levels in cells without PR130. However, this is not evident from the WEE1 and pHDAC2 immunoblot signals in Fig. 7d. pCDK1 levels were quantified and seem to be decreased in HU-treated cells lacking PR130. Have the pCDK1 signals been normalized to the loading control levels (HSP90)? This is not indicated in the figure legend. In any case, without proper quantification such conclusions cannot be drawn. The WEE1 and pHDAC2 signals need to be quantified and normalized to the HSP90 control levels, and dependent on the outcome the text has to be revised.

Minor points: Fig. 5d, the increasing PR130 levels need to be quantified; the same applies to the PR130 levels in HCT116 ΔgRNA cells in Fig. 6b, left panel. Page 8, line 165: "compared to cells treated with alone" should read "compared to cells treated with HU alone". Throughout the discussion, the authors refer too many times to specific figures, which makes the discussion read like a results section.
Point-by-point responses to all issues raised:
The reviewer stated that we provided additional experimental evidence for our model, but asked for new experiments and explanations. We provide further data and discuss our data regarding the literature and our novel results.
1. In her/his comments #1-2, the reviewer asked whether ATM exists in a basal complex with PP2A (Fig. 6d). These experimental suggestions are based on the work by Goodarzi and colleagues (EMBO J. 2004 Nov 10;23(22):4451-61), and we agree that the reviewer raised an interesting point.
Following the reviewer's request, we ordered an antibody against exactly the epitope that was used in the publication by Goodarzi et al. (EMBO J. 2004 Nov 10;23(22):4451-61). These colleagues obtained the antibody from Oncogene Research, which was later bought by Millipore, from which we received the antibody. We followed their immunoprecipitation protocol precisely but were unable to detect a basal interaction. Due to this negative experimental outcome, we systematically changed the parameters of the IP protocol, ranging from different antibody concentrations to a fixation step with paraformaldehyde (PFA), to obtain the postulated complex. Below, we show a selection of experimental outcomes. Although we precipitated and enriched ATM with the antibody, we could never detect any PP2A-A or PP2A-C bound to immunoprecipitated ATM in lysates from HCT116 colon cancer cells and K562 leukemia cells:

• Immunoprecipitation (IP) using 4 µl (0.4 µg) of ATM antibody (IgG, pre-immune IgG)
• Immunoprecipitation using 8 µl (0.8 µg) of ATM antibody
• Immunoprecipitation using 40 µl (4 µg) of ATM antibody
• Immunoprecipitation using 8 µl (0.8 µg) of ATM antibody and PFA fixation

Concerning the PFA experiment: ATM detection was performed with the same eluate on a second Western blot membrane, using only 15 µl of IP eluate compared to 60 µl used for the lower blot; the amounts of input were equal in both settings.
These findings do not match the Goodarzi publication in the EMBO Journal. This inconsistency could be due to the use of different cell lines. We included K562 leukemia cells, because leukemic cells were used in the Goodarzi paper. Also in K562 cells, there is no basal complex between PP2A and ATM. The inputs are clear positive controls in all experiments we provide.
In our revised manuscript we refer to the data shown above as "data not shown", but if the reviewer wishes, we will incorporate them as a supplementary figure.
We would like to stress that our data shown above are entirely consistent with the transient nature of PP2A-substrate interactions, a dephosphorylation reaction taking milliseconds, and a high turnover of substrate phosphorylation by PP2A. Thus, the interaction between ATM and PP2A could be too transient to be detectable. As the reviewer said (see below), "many PP2A-substrate interactions occur only transiently during catalysis and thus escape detection by standard methods". We entirely agree with this remark. The hit-and-run mechanism exerted by PP2A is in perfect agreement with rapid dephosphorylation of PP2A substrates. Furthermore, we would like to point out that our genetic and pharmacological experiments unequivocally demonstrate that elimination of PR130 and inhibition of PP2A restore the phosphorylation of ATM in the presence of HDACi and PR130 (Figs. 5b, 6b-d, and Supplementary Fig. 3a). We have clarified these issues in our revised manuscript.
Regarding additional experiments with the above-mentioned anti-ATM antibody, we noted that this antibody yields a non-specific band that is very close to PR130 in immunoprecipitations; please note the successful purification and the good enrichment of ATM in this IP. Hence, this antibody is useless for an analysis of a PR130/PP2A complex. Immunoblotting for PP2A-A and PP2A-C gave no positive signals (similar data were collected with another antibody against PP2A-C; not shown). Thus, there cannot be an enzymatic activity associated with such immunoprecipitations.
Immunoprecipitation of ATM from lysates of HCT116 cells that were treated with 2 µM MS-275 or left untreated (Ctrl) for 24 h. The IP was performed using 8 µl (0.8 µg) of ATM antibody (IgG, pre-immune IgG; input = 2.5% of IP). This experiment particularly illustrates that it is mandatory to run an IP with pre-immune serum to avoid an erroneous identification of unspecific bands as PP2A-A/-C. We obtained similar results with a second anti-ATM antibody from Abcam (ab32420).
Another anti-ATM antibody, from Santa Cruz Biotechnology, failed completely in both Western blot and immunoprecipitation (data not shown).
Maybe we should briefly note that evidence against a role of PP2A in the phosphorylation of ATM in unstressed cells has been collected in two reports using Xenopus extracts (Petersen et al., Mol. Cell Biol. 2006 and You et al., Nature Cell Biology 2007). Furthermore, several groups have immunoprecipitated ATM from cell extracts and could induce its autophosphorylation without the need to inhibit PP2A (e.g., Kozlov et al., J. Biol. Chemistry 2003; Lee and Paull, Science 2005).
The reviewer asked whether PR130-PP2A gets recruited to ATM in response to replicative stress. However, according to our data, the recruitment of PR130, which binds both PP2A-A/-C and accumulates upon HDAC inhibition, leads to a dephosphorylation of pATM. We show this in several experiments in which we used HDACi, knocked down HDAC1/HDAC2, and eliminated PR130 (Figs. 1, 5c-h, and 6b-f, and Supplementary Fig. 1).
We provide the following figure to depict our new data and we hope this makes it easier to understand our reasoning.
An increase in PR130 upon HDAC inhibition or a knockdown of HDAC1/HDAC2 leads to a dephosphorylation of ATM in response to hydroxyurea. These data reveal a novel mechanistic link between the epigenetic modifiers HDAC1 and HDAC2, PR130, and checkpoint kinase signaling controlling key cell fate decisions.

A systematic elimination of all B-type subunits would take several years, and the elimination of some B-type subunits impairs cell growth, which poses an obstacle (see, e.g., Sablina and colleagues, Cancer Res. 2010).
Regarding dimeric complexes: it is not possible to isolate PP2A-A and PP2A-C without their attachment to B-type subunits. There is no covalent interaction between PP2A-A/-C, and harsh immunoprecipitation conditions will also break up their interaction. Thus, this experiment cannot be carried out technically. Establishing a fully recombinant biochemical assay for PP2A-A/-C and the at least 17 B subunits would again take several years. It might even be impossible, as, to be functional, this trimeric complex requires posttranslational modifications from cells, which are not even characterized in detail. Furthermore, the stoichiometry of PP2A-A/-C complexes that is optimal for checkpoint kinase dephosphorylation is not known.
Clearly, such questions are rather suited for a specialized biochemical journal and far beyond the scope of our work.
The PR130 knockdown and knockout experiments in HU+MS-275-treated HCT116 cells indicate that PR130 is necessary for the decreased pATM levels, but is it also sufficient? E.g., will ectopic expression of PR130 alone in HU-treated HCT116 cells lower pATM levels?
We did the experiment as requested (n=3, all measurements done). Its outcome perfectly confirms our model, in which PR130 levels regulate the dephosphorylation of ATM by PP2A.
Thus, there is mass action of PR130 on pATM. Our observation that the phosphorylation of CHK1 was unaffected additionally supports our hypothesis that PR130 targets pATM but not pCHK1 (compare to Figs. 6b and 7a). We show a typical experimental outcome, which we include in the revised manuscript (Fig. 5h).
Numbers indicate densitometric analysis of Western blot signals normalized to their respective loading controls (mSIN3A, Vinculin) and relative to hydroxyurea (HU) treated cells.
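A minimal sketch of the normalization described in this legend, assuming hypothetical lane intensities: each lane signal is divided by its loading control and then expressed relative to the HU lane, yielding the kind of numbers printed under a blot.

```python
# Hypothetical lane intensities; mSIN3A serves as the loading control here.
lanes = {
    "Ctrl":      {"PR130": 350.0, "mSIN3A": 1000.0},
    "HU":        {"PR130": 500.0, "mSIN3A":  980.0},
    "HU+MS-275": {"PR130": 900.0, "mSIN3A": 1010.0},
}

# Step 1: normalize each lane to its loading control.
normalized = {name: v["PR130"] / v["mSIN3A"] for name, v in lanes.items()}
# Step 2: express every lane relative to the hydroxyurea (HU) lane.
relative = {name: value / normalized["HU"] for name, value in normalized.items()}

for name, value in relative.items():
    print(f"{name}: {value:.2f}")  # the numbers that would appear under each lane
```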
HDAC1/2 inhibition in HCT116 cells caused a cell cycle arrest in G1 phase, and this correlated with reduced WEE1 and pCDK1 levels (new Fig. 2f) but also with several-fold increased levels of the CDK inhibitor p21 (new Fig. 2i).

MS-275 attenuates p53 phosphorylation, expression, and activity, and this also affects the p53 target gene p21 in HCT116 cells (Fig. 2h-i). This observation, together with reduced WEE1/pCDK1 and remaining Cyclin B1 (Fig. 2f-g), agrees entirely with the entry of cells into mitotic catastrophe (Fig. 2a-e).
Interestingly, PR130 affects the induction of p21 and the phosphorylation-dependent activation of its upstream regulator p53 (Figs. 6b and 7e). We also noted that PR130-null cells have more p21 than cells with PR130 (Fig. 7e). This finding provides a nice explanation for the early G1/S phase arrest of PR130-null cells when they are exposed to hydroxyurea (Fig. 7c). MS-275 also induces p21, but the hydroxyurea/MS-275 combination decreases p21 levels (Fig. 7c), and this is again coherent with the loss of the hydroxyurea-induced cell cycle arrest. The reduction of pCDK1 in cells lacking PR130 is a further explanation for the loss of the hydroxyurea-induced, early G1/S phase arrest when HDAC1/HDAC2 are inhibited by MS-275 (Fig. 7c-d).
Therefore, we asked whether the earlier G1 arrest in cells treated with increasing doses of hydroxyurea (Fig. 7f) is also linked to p21. Indeed, a proportional increase in p21 is associated with an earlier G1 arrest in HCT116 cells (Fig. 7g). This figure also illustrates that a dose-dependent increase in the levels of p21, but not of pCDK1, correlates with an earlier G1 phase arrest and higher levels of pCHK1 in response to hydroxyurea (Fig. 7f-g).
We have added the citations that the reviewer mentioned and discuss the new data on p21.

Based on the "failure" of PP2A inhibitors to rescue pCHK1 in HDAC-inhibited cells (Fig. 5b), the authors concluded that PP2A complexes do not directly regulate CHK1 phosphorylation.

We entirely agree that PP2A can target pCHK1. We corrected the manuscript as requested, and we cite the mentioned reference. In our experimental setting, MS-275 reduces the phosphorylation of ATM dependent on the activity of PP2A. The PP2A inhibitors cantharidin and okadaic acid restore the phosphorylation of ATM. Cantharidin also has an effect on the MS-275-mediated reduction of CHK1 phosphorylation by hydroxyurea, but okadaic acid shows only a very mild effect in this setting (Fig. 5b and Supplementary Fig. 3a). This decision is also consistent with the use of okadaic acid to block PP2A in Fig. 6d.
In HU-treated cells lacking PR130, higher levels of phosphorylated CHK1 are present than in PR130 wt cells. However, upon additional MS-275 treatment, pCHK1 levels decreased to a similar extent in both cell types, indicating that this dephosphorylation happens in a PR130-independent manner (Fig. 7a).

In our presented work, we focus on PR130, as we see that this B-type subunit accumulates upon an inhibition of HDAC1/HDAC2. Most importantly, elimination and overexpression of PR130 show that PR130 levels are crucial for the phosphorylation of ATM, but not of CHK1, in vivo (Figs. 5h, 6b-c, and 7a-b). Hence, even if a PR130-PP2A complex dephosphorylated pCHK1 in vitro, this outcome would not reflect the cellular reactions. Such a finding would rather be an in vitro artifact that does not add value to our work.
Our new data show that PR130 integrates the control of CHK1 phosphorylation by cell cycle regulatory molecules (the p53/p21 and WEE1/pCDK1 signaling nodes; Fig. 7a).

As said above for dimeric complexes: it is not possible to isolate PP2A-A and PP2A-C from cells without having them attached to B-type subunits. Moreover, unpredictable issues of posttranslational modifications and stoichiometry may prevent this analysis or cause artifacts.

We agree that showing the measurements adds value to Figure 7d. As we stated in the figure legend, we made the measurements (signals normalized to HSP90).
We did not imply that the phosphorylation of HDAC2 is linked to cell cycle arrest or the phosphorylation of CHK1. HDAC2 phosphorylation is not consistently affected by the PR130 knockout (this question was originally asked by Reviewer 2; there is only a non-significant trend towards increased phosphorylation of HDAC2 by MS-275). Concerning WEE1, we find that its activity rather than its level depends on PR130. WEE1 phosphorylates CDK1 to slow cell cycle progression (Mahajan and Mahajan, Trends Genet. 2013). The inhibition of WEE1/pCDK1 by MS-275 in hydroxyurea-treated cells, together with the above-mentioned regulation of p21, provides an explanation for the loss of cell cycle control (Figs. 2 and 7c-g).

Minor points: Fig. 5d, the increasing PR130 levels need to be quantified; the same applies to the PR130 levels in HCT116 ΔgRNA cells in Fig. 6b, left panel. Page 8, line 165: "compared to cells treated with alone" should read "compared to cells treated with HU alone". Throughout the discussion, the authors refer too many times to specific figures, which makes the discussion read like a results section.
We did the quantifications (Supplementary Fig. 4b), including statistics, and corrected the minor mistakes. The references to the figures were intended to improve the readability of our discussion.
Health research policy: a case study of policy change in Aboriginal and Torres Strait Islander health research

Background: There is considerable potential for health research to contribute to improved health services, programs, and outcomes; the policies of health research funding agencies are critical to achieving health gains from research. The need for research to better address health disparities in Indigenous people has been widely recognised. This paper: (i) describes the policy changes made by the National Health and Medical Research Council (NHMRC) from 1997 to 2002 to improve funding of Aboriginal health research; (ii) examines catalysts for the policy changes; (iii) describes the extent to which policy changes were followed by new models of research; and (iv) outlines issues for Indigenous health policy in the future.

Methods: This study had two parts: (i) semi-structured interviews were conducted over a four-month period with seven individuals who played a leading role in the policy changes at NHMRC during the period 1997-2002, to describe policy changes and to examine the catalysts for the changes; (ii) a case study was undertaken to evaluate projects by recipients of NHMRC People Support awards and NHMRC Capacity Building Grants in Population Health Research, to examine the types of research being undertaken five years after the policy changes were implemented. The proposals of these researchers were assessed in terms of whether they reported intending to: evaluate interventions; engage Indigenous community members and organisations; and build research capacity among Indigenous people.

Results: Seven policy changes over a period of five years were identified, including those to: establish an ethical approach to working with Indigenous people; increase the influence of Indigenous people within NHMRC; encourage priority research directed at improving Indigenous health; and recognise Aboriginal and Torres Strait Islander health research as a priority area, including a commitment to an expenditure target of 5% of annual funds. Seven catalysts for this change were identified. These included: a perceived lack of effective response to the health needs of Indigenous people; a changed perception of the role of NHMRC in encouraging research to maximise health gains; and leadership within the organisation. The case study analysis demonstrated that 45% of all People Support recipients intended to engage Indigenous community members and organisations in consultation, 26% included an evaluation of an intervention, and two (6.5%) were granted to individuals from an Indigenous background. Six of seven Population Health Capacity Building Grants that were awarded to study Indigenous health between 2004 and 2006 included an intervention component; these grants supported 34 researchers from Indigenous backgrounds.

Conclusion: NHMRC made significant policy changes from 1997 to 2002 to better support Indigenous health as a result of external pressure and internal commitment. The policy changes have made some progress in supporting better research models, particularly in improving engagement with Indigenous communities. However, there remains a need for further reform to optimise research outcomes for Indigenous people from research.
Background There is considerable potential for health research to contribute to improved health services, programs, and outcomes; the policies of health research funding agencies are critical to achieving health gains from research. The need for research to better address health disparities in Indigenous people has been widely recognised. This paper: (i) describes the policy changes made by the National Health and Medical Research Council (NHMRC) from 1997 to 2002 to improve funding of Aboriginal health research (ii) examines catalysts for the policy changes (iii) describes the extent to which policy changes were followed by new models of research and (iv) outlines issues for Indigenous health policy in the future. Methods This study had two parts: (i) semi-structured interviews were conducted over a four -month period with seven individuals who played a leading role in the policy changes at NHMRC during the period 1997–2002, to describe policy changes and to examine the catalysts for the changes; (ii) a case study was undertaken to evaluate projects by recipients of NHMRC People Support awards and NHMRC Capacity Building Grants in Population Health Research to examine the types of research being undertaken five years after the policy changes were implemented. The proposals of these researchers were assessed in terms of whether they reported intending to: evaluate interventions; engage Indigenous community members and organisations; and build research capacity among Indigenous people. Results Seven policy changes over a period of five years were identified, including those to: establish an ethical approach to working with Indigenous people; increase the influence of Indigenous people within NHMRC; encourage priority research directed at improving Indigenous health; and recognise Aboriginal and Torres Strait Islander health research as a priority area including a commitment to an expenditure target of 5% of annual funds. Seven catalysts for this change were identified. These included: a perceived lack of effective response to the health needs of Indigenous people; a changed perception of the role of NHMRC in encouraging research to maximise health gains; and leadership within the organisation. The case study analysis demonstrated that 45% of all People Support recipients intend to engage Indigenous community members and organisations in consultation, 26% included an evaluation of an intervention and two (6.5%) were granted to an individual from an Indigenous background. Six of seven Population Health Capacity Building Grants that were awarded to study Indigenous health between 2004 and 2006 included an intervention component; these grants supported 34 researchers from Indigenous backgrounds. Conclusion NHMRC made significant policy changes from 1997 to 2002 to better support Indigenous health as a result of external pressure and internal commitment. The policy changes have made some progress in supporting better research models particularly in improving engagement with Indigenous communities. However, there remains a need for further reform to optimise research outcomes for Indigenous people from research.
Background
There is great potential for health research to contribute to better health services, programs, and outcomes. The policies of health research funding agencies can substantially influence the kind of research conducted; there is therefore considerable interest in how the policies of research funding agencies are established, their responsiveness to government and community pressures, and their impact on research practice. For example, the recent review of health research funding in the UK (the Cooksey Report) emphasised the need for an overarching health research strategy and found that "the UK is at risk of failing to reap the full economic, health and social benefits that the UK's public investment in health research should generate" [1].
In Australia, the National Health and Medical Research Council (NHMRC) is the major funder of health research. A review of health and medical research in Australia commissioned by the government in 1998 (the Health and Medical Research Strategic Review, or Wills Committee Review) recommended changes in policy to better focus the research effort on outcomes such as health and wealth creation [2]. Subsequent changes to NHMRC policy contributed to a substantial increase in the number of patents arising from funded research and therefore potential wealth creation. However, less progress has been made in encouraging research to inform health policy and practice and produce health gains [3].
Of all the sub-groups in the Australian population who require a strengthened research effort to produce improved health outcomes, the need is clearest for Indigenous Australians. Research directed at improving the health of Indigenous people is recognised as a major priority in Australia. The life expectancy for Indigenous Australian men is 19 years less than for non-Indigenous men and 18 years less for Indigenous women than their non-Indigenous counterparts [4]. There has been little change in the mortality differential in recent years, in contrast to the progress made in comparable countries such as New Zealand and Canada, where Indigenous health has improved relative to that of the rest of the population [5].
Historically, the research effort in Indigenous health in Australia has been less than optimal [6,7]. Aboriginal populations in Australia have been the subject of research since the 19th century; however, this research was primarily anthropological and focused on accumulating information before Aboriginal people were 'lost to science' rather than on how best to address Indigenous health problems [8]. Aboriginal Australians report feeling that they have been exploited by disrespectful experimentation (subjected to invasive examinations and procedures, objectified, scrutinised, and inaccurately represented) without this research conferring any health benefits to Aboriginal populations [7,9,10]. In the 1970s a new dialogue began, led by Indigenous people and focused on issues of control, from community consultation and consent to intellectual ownership and application of research findings [11][12][13][14]. In the 1980s, these debates culminated in the articulation of ethical guidelines for research in Indigenous populations. Two themes emerged from these debates: first, for Aboriginal communities to have ultimate ownership and control of research, a concerted effort to train Indigenous researchers would be required; and second, to enable "useful research" to be conducted, Indigenous communities need to help identify and define research questions [6,11,12,14].
Informed by these discussions, there has been a growing consensus over the past ten years both within Australia and internationally that research is more likely to have a long-term impact in improving the health of Indigenous people if it evaluates the impact of health programs rather than simply describing health problems, involves Indigenous researchers in all stages of the research, and builds capacity among Indigenous researchers [15,16]. Research funding policies designed to stimulate research of this kind could greatly assist in creating useful evidence to improve Indigenous health.
It is clear that research funding policies over the past twenty years could have been better targeted to improve Indigenous health. Too often, research has simply described health problems without seeking to find solutions; for example, a review of published research in Indigenous health from 2001-2003 found that a mere 13% of Australian peer-reviewed papers evaluated the impact of interventions, with the remainder primarily describing health problems or their causes [17]. Further, it is widely recognised that in Australia, research has frequently failed to engage Indigenous people as equal partners in research or offered an opportunity for involvement in all stages of data collection, including planning, implementing, analysing and disseminating [6,7].
Beginning in 1997, NHMRC responded to the challenge of improving Aboriginal health with a number of substantial policy changes designed to increase funding for Indigenous research and to better target the research effort. NHMRC's response is of considerable interest in understanding the factors that can contribute to policy change in research funding. Like many research funding agencies internationally, NHMRC had historically almost exclusively funded investigator-initiated research with little capacity to strategically target funding to specific areas; its response to the challenge posed by the need to improve Indigenous health was therefore unique in its history.
This paper (i) describes the policy changes NHMRC has made to improve funding of Aboriginal health research, (ii) examines the catalysts for the policy changes, (iii) describes the extent to which the policy changes were followed by new models of research, and (iv) outlines issues for Indigenous health policy in the future. The paper focuses on an analysis of funding for scholarships, fellowships and other awards to pay the salaries of individual researchers (referred to by NHMRC as People Support). We decided to examine People Support because it provided an opportunity to explore the impact of the policy changes on the development of workforce capacity in Aboriginal health research and to examine support for researchers from Indigenous backgrounds. The impact of policy change on the amount of funding through People Support for Indigenous health research is described elsewhere [18].
Key informant interviews
In order to describe the policy changes and to examine the catalysts for the changes, semi-structured interviews were conducted over a four-month period with seven individuals who played a leading role in the policy changes at NHMRC during the period 1997-2002. These seven individuals were involved in the policy changes that were led by the Aboriginal and Torres Strait Islander Research Agenda Working Group (RAWG); in the course of its business, RAWG interacted with other NHMRC principal committees. Accordingly, interviews were undertaken with the former Chair of the Research Committee, as well as the Chair and other leading Indigenous and non-Indigenous members from RAWG during this period.
The interviews were conducted by one of the authors (SLB) and addressed the policy changes that had occurred; the key factors driving policy change (key evidence, individuals, and circumstances); climate and timing for policy changes; barriers to changing policy; as well as approaches and strategies used to encourage policy change. The interviews were digitally recorded and transcribed. Both the transcribed interviews and notes taken during the interviews were used to conduct the analysis.
The interviews were content-analysed according to major thematic areas and trends in current policy-making literature.
Case Study Analysis
We conducted a case study evaluating the projects undertaken by NHMRC People Support recipients of the following funding vehicles: Scholarships (for postgraduate study, most usually leading to a PhD), Training Awards (for postdoctoral researchers), Career Development Awards (for researchers two to twelve years after the award of a PhD), and Career Awards (for senior researchers). We also examined recipients of NHMRC Capacity Building Grants in Population Health Research; this new funding model was introduced in 2002 as a short-term initiative. The grants support junior researchers to work for five years within a mentoring environment with senior researchers.
A keyword search of the NHMRC Research Management Information System (RMIS) was used to identify Indigenous health researchers who received NHMRC People Support awards, using the following terms in either the title, lay summary, keywords, or fields of research: Aborigines or Aboriginal; Torres Strait Islander; Indigenous; Koori. For Capacity Building Grants in Population Health Research, a standardised question on the application form (section 1.3) was used to identify "research involving Aboriginal and Torres Strait Islander Peoples"; applications that ticked 'yes' were included. The authors of this paper were granted access to the original application forms for review and double-coded all applications for 14 items, including information about thematic areas of the Road Map, project design, and research practices. Operational definitions for coding the data were developed and reviewed by all the authors. Each case was coded independently by two people against a detailed operational definition; in case of disagreement, the application was jointly reviewed until the coders could agree on how the item should be coded.
The grants were assessed according to whether they: • Evaluated interventions: that is, the applications included evaluations or trials of interventions, services or programs designed to improve the health of Aboriginal people.
• Engaged Indigenous community members and organisations: that is, the application (a) described an advisory group with Indigenous membership in project design; and (b) completed a special section of their application to NHMRC outlining their commitment to engage Indigenous community members and organisations in research partnership. Completion of this section is considered mandatory by NHMRC for all health and medical research with Indigenous Australians.
• Built research capacity among Indigenous people: that is, the application (a) proposed to employ or train an Indigenous person as part of the research team; or (b) was a People Support Award to an individual who self-identified as Indigenous.
Classification of Indigenous status
NHMRC People Support applicants were classified as Indigenous if they self-identified as an Aboriginal or Torres Strait Islander person on the application form.
(i) Policy changes to improve funding of Aboriginal health research
Respondents nominated seven changes to NHMRC policy over a five-year period and spanning two triennia of the operation of RAWG; the evolution and sequence of these policy decisions is illustrated in Figure 1.
In the early part of this period, three changes occurred: • Adoption of the Darwin Criteria: Most participants nominated the adoption of the Darwin Criteria in 1997-an initiative of RAWG in its first triennium-as a key policy change. The Darwin Criteria were developed as a set of principles to guide research with Indigenous communities. They were intended to be used to assess project grant applications in terms of their level of engagement and capacity building with Indigenous communities, the significance and benefit of research proposals to Indigenous health and the transferability of the methods to other settings. NHMRC adopted the Darwin Criteria as part of the assessment process to gauge applicants' approach to working ethically and in partnerships with Indigenous communities.
• Establishment of the Indigenous Health Review Panel:
This panel was established in 1997 as part of the assessment process for project proposals in Indigenous health; it was an initiative of RAWG in its first triennium. The Indigenous Health Review Panel utilised the Darwin Criteria and the expertise of Indigenous panel members to provide advice on cultural appropriateness of applications and methods and to comment on approaches to community consultation. The Indigenous Health Review panel was also able to comment on the scientific quality of the applications and to stipulate conditions upon which funding is contingent. The reports of the Indigenous Health Review Panel were used by Grant Review Panels in making further assessment of project grant applications and final funding recommendations.
At the 144th Council session of NHMRC held in October 2002, major policy issues addressed by RAWG in its second triennium of activity (2000-2002) were considered. An options paper developed by RAWG was tabled outlining policy options for Aboriginal health research, including consideration of the RAWG Road Map, mechanisms to increase representation of Indigenous people in the NHMRC and consideration of efforts to increase the level of specific research funding to Indigenous health [19]. Interviewees reported that policy documents prepared by RAWG were received with "a high degree of vocal support" on Council. The following policy decisions were endorsed at that time:
• Endorsement of the NHMRC Road Map: A Strategic Framework for Improving Aboriginal and Torres Strait Islander Health Through Research:
In its second triennium, RAWG conducted a national consultation process to identify priorities for research in Indigenous health. The RAWG Road Map for research was developed through this consultation process, involving a series of four national workshops and written submissions to set out a strategic approach to Indigenous health research for NHMRC. Indigenous community members, researchers and policy-makers contributed to the consultation process. All participants acknowledged the comprehensive consultation process used to develop the Road Map, and identified the Road Map as a compelling policy document to guide future NHMRC investments.
• Acknowledgement of Aboriginal and Torres Strait Islander health research as a priority area for development:
Several participants highlighted the importance of the acknowledgement by the Council of Indigenous research as a priority in improving expenditure and Indigenous representation within the agency. One respondent reported that this acknowledgement enabled Council to allow funds to be earmarked for Indigenous health research.
• Commitment to target of 5% annual expenditure: At the October 2002 Council meeting, it was agreed that NHMRC would work towards a target of expending at least 5% of its annual budget on Indigenous health research. All participants acknowledged this funding commitment as a landmark decision by NHMRC.
• Increase Aboriginal and Torres Strait Islander representation across all NHMRC Principal Committees and Council:
In accordance with the principle to engage Indigenous community members in all stages of research, all participants emphasised the importance of Indigenous participation in decision-making within NHMRC and noted increased Indigenous participation throughout the 1997-2002 period.
(ii) Catalysts for policy change
Participants identified both external and internal factors as being influential in bringing about the policy change. Four external factors were identified by all participants.

[Figure 1: Policy timeline for Indigenous health research.]

Two other issues of importance were raised by participants: • Legislation: Several participants commented on the role of the NHMRC Act and differing interpretations over time.
In early discussions, there was a view in the organisation that the legislation precluded the allocation of funding for specific purposes, such as 5% for Indigenous research. This view was revised in internal discussions during the 2000-2002 triennium.
• Implementation: All participants emphasised the importance of revisiting existing policies to evaluate the implementation. Respondents noted that the Road Map had not included measurable indices for implementation and that it would therefore be difficult to assess the extent to which NHMRC had implemented the recommendations. Further, several participants also noted that a strategy for building capacity in Indigenous health [21] had been agreed by NHMRC at its Council meeting in December 2005 but had not been implemented. However, probably of most importance, the policy changes resulted in an explicit acknowledgement of Indigenous health research as a priority area for development and a commitment to a 5% expenditure target of annual funds [19].
(iii) Impact of policy change on models of research
Taken together, this represents a substantial set of policy changes to address an urgent health need through research-the magnitude of the policy response is unique in the NHMRC's history. Historically, the vast majority of research funded by NHMRC has been investigator-initiated research selected for funding primarily on the basis of scientific excellence; as noted by the Wills Review in 1998, relatively little funding prior to 1998 was directed towards strategic research focused on government or community priorities. The lack of response to Indigenous health prior to this time was therefore part of a general philosophy about research funding and indeed hampered the NHMRC's capacity to act strategically in other areas of health need as well [22]. The funding philosophy at the time is illustrated by the view inside NHMRC that its legislation may preclude allocation of funds to a specific area (e.g. 5% to Indigenous health). There was also a view during the period of reform addressed in this paper that a designated allocation of funds to Aboriginal health would necessarily involve a decline in scientific standards.
It is therefore of some interest to understand the factors that led to these policy changes. Kingdon's multiple streams model of policy-making outlines three streams that contribute to whether a policy change is adopted: the problem stream (a given situation has to be identified and explicitly formulated as a problem or issue); the policy stream (an explicit formulation of policy alternatives and proposals must be available); and the political stream (a political event or climate that affects the balance of costs and benefits) [23]. Based on interviews with key participants, three broad sets of factors corresponding to Kingdon's streams were identified. First, there was a clear identification of the problem-a lack of effective response to the urgent health needs of Aboriginal and Torres Strait Islander communities both from governments and from Indigenous communities. The data about disparities in health status were compelling enough for government to recognise that action was required.
Second, a clear action was identified for NHMRC-namely to increase the proportion of funding provided for Aboriginal health research to 5%. The identification of a simple technically feasible response by the House of Representatives, followed by a letter from the Minister, was of fundamental importance to the adoption of the policy change [20,24,25]. The RAWG Road Map also made specific recommendations that could be readily adopted by NHMRC [16]. Third, this research has highlighted the need for new initiatives to build capacity among researchers from Indigenous backgrounds. Relatively few of the People Support Awards were to researchers from Indigenous backgrounds and this has been confirmed by other analyses over a longer time frame [18]. What strategies should the NHMRC implement to increase the numbers of Aboriginal and Torres Strait Islander students in research? Evidence suggests that funding models centred around collaborative research environments with blended teams of skilled and early career professionals are more likely to work successfully on meaningful, long-term research projects and train highly skilled researchers in the process. The NHMRC Capacity Building Grants in Population Health appear to be an effective means of attracting multidisciplinary research teams to work together on highly beneficial, applied research. Some of these grants supported additional early career researchers in groups with already established strengths in Indigenous health research while others were used to establish capacity in groups interested in building new Indigenous health research teams. The reasons for the relative success of Capacity Building Grants in Population Health in supporting researchers from Indigenous backgrounds are not clear; however, it seems likely that these grants may be perceived as offering better opportunities for researchers from Indigenous backgrounds. The grants are longer (five years), are from larger teams with established infrastructure (including members of Indigenous communities and organisations), and provide greater financial support to team investigators than might be received through a Scholarship. They also offer the opportunity to work collaboratively with other early career researchers from Indigenous backgrounds.
The NHMRC might consider drawing on international models for building capacity among researchers from Indigenous backgrounds. For example, the Canadian Institutes for Health Research demonstrated a commitment to "building research capacity and infrastructure in Aboriginal health research" by establishing the Institute for Aboriginal Peoples Health to administer eight Aboriginal Capacity and Developmental Research Environments (ACADRE) centres. Each of these centres provides an array of scholarship and training opportunities to undergraduate and graduate students (the majority of whom self-identify as Aboriginal), as well as to community members and organisations interested in conducting health research [15,26]. Another example is provided by the New Zealand Centre of Research Excellence: in 2002, a five-year target was set to graduate a total of 500 Maori PhD scholars across all academic disciplines. In addition to active recruiting and extensive student support services, all students were provided with a mentor to guide their academic development and provide social support throughout their PhD program. The initiative has been highly successful, and New Zealand is on track to establish a critical mass of Maori scholars in disciplines including health, history, social sciences, and education [27]. As part of the current review of the Road Map, NHMRC might consider the inclusion of similar strategies.
In conclusion, it is evident that with sufficient external pressure and internal commitment, it is possible to make substantial changes to health research funding policy. The NHMRC made significant changes to its policy in 2002 to better support Indigenous health. By 2007, funding for Indigenous health research-at least through its People Support Awards-appears to be moving towards a better model of practice. However, there remains a considerable way to go before Australia could be said to have in place strategies that optimised the research effort in improving the health of Indigenous people. Government, the community and researchers should continue to advocate for improved funding and for the development of new models reflecting international best practice.
IMPLEMENTATION OF ENSEMBLE METHOD ON DNA DATA USING VARIOUS CROSS VALIDATION TECHNIQUES
Due to the growing size of datasets, which contain hundreds or thousands of features, feature selection has drawn the interest of many scholars in recent years. Usually, not all columns show important values. As a result, machine learning models may perform poorly, since noise or unnecessary columns may confound the algorithms. To address this issue, various feature selection methods have been developed to evaluate large-dimensional datasets and identify subsets of pertinent features. The data, however, frequently bias feature selection algorithms. As a result, ensemble approaches have emerged as a substitute that incorporates the benefits of single feature selection algorithms and makes up for their drawbacks. In order to handle feature selection on datasets with large dimensionality, this research aims to grasp the key ideas and links in the process of aggregating feature selection methods. The suggested idea is tested by creating a cross-validation implementation that combines a number of Python packages with functionality to support the feature selection techniques. The performance of the implementation was demonstrated by identifying pertinent features in human, chimpanzee, and dog DNA datasets.
INTRODUCTION
In recent years, datasets with many attributes have become more common in several fields. Microarray classification serves as the best illustration: numerous datasets containing this type of data have been produced as a result of improvements in DNA microarrays. The majority of these datasets show a very low ratio of instances to features. However, most of the genes in these datasets do not carry information that is helpful to a machine learning process. In order to efficiently classify microarray data, a pre-processing stage is therefore required. This article explains how to do so by choosing a representative subset of genes from the original set of genes (Mera-Gaona, López, Vargas-Canas, and Neumann, 2021) [16]. The individual success of the ensemble's base learners and the independence of the base learners' results, due to low error and great diversity, are the two major factors that determine how well an ensemble performs. By utilising base learners of the same or different types, diverse base learners can be built. When using the same type of base learners, diversity is produced by giving each base learner in the ensemble a different training set. Different training data sets can be created using a variety of techniques, including bagging, boosting, random subspaces, random forests, and rotation forests. In order to create a superior composite global model with more precise and trustworthy estimates or conclusions than can be produced by a single model, an ensemble methodology combines a group of models, each of which addresses the same original problem. The fact that different classifier types have distinct inductive biases is one of the key reasons why ensemble methods are so successful (Gopika and Azhagusundari, 2014) [9]. Finding ways to enhance feature selection on datasets with high dimensionality and few examples is the major goal of this work. Additionally, cross-validation is used in the demonstration of ensemble methods to combine the benefits of several feature selection algorithms, avoid their biases, and make up for their shortcomings (Mera-Gaona et al., 2021) [16].
ENSEMBLE METHODS
Ensemble classification is founded on the idea that several experts can provide more accurate judgments than a single expert. A single composite model with higher accuracy is produced through ensemble modelling, which combines a collection of classifiers. Research shows that predictions from a composite model provide better outcomes than predictions from a single model. Ensemble technique research has gained popularity over the past few decades. According to a number of experimental tests carried out by machine learning experts, combining the outputs of many classifiers minimises the generalisation error. The ensemble approaches are described in this section (Pandey and Taruna, 2014) [10,11].

(1) Bagging
The bagging technique is used to reduce variance; the bagging ensemble method's goal is to divide the dataset into several subsets for training that are randomly chosen with replacement (Singh and Pal, 2020) [10]. The bootstrap sampling approach provides the basis for bagging. A distinct set of bootstrap samples is produced for each iteration of the procedure in order to build a unique classifier. During the sampling phase of the bootstrap approach, data items are chosen at random with replacement, meaning that some instances may be repeated while others may be omitted from the original dataset. Combining all of the classifiers built in the previous phase is the next stage in the bagging process. To arrive at a final prediction, bagging combines the output of the classifiers through a voting process [11,12].
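As a rough sketch of the bagging procedure just described, the following uses scikit-learn's BaggingClassifier; the synthetic dataset and all parameter values are illustrative assumptions, not taken from this paper's experiments.

```python
# Minimal bagging sketch (assumed setup; scikit-learn >= 1.2 for 'estimator=').
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Each base tree is trained on a bootstrap sample (drawn with replacement);
# the final prediction combines the trees by majority vote.
bag = BaggingClassifier(estimator=DecisionTreeClassifier(),
                        n_estimators=50, bootstrap=True, random_state=0)
bag.fit(X_tr, y_tr)
print("bagging accuracy:", bag.score(X_te, y_te))
```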
(2) Boosting
Another crucial ensemble method is the boosting classifier. It is used to develop a collection of classifiers. By fitting classifiers to data and then assessing mistakes, classifiers are serially trained in the boosting approach (Singh and Pal, 2020) [10]. Boosting improves the performance of a weak classifier to a strong level. It creates sequential learning classifiers by reweighting the data instances. All instances are initially given equal weights. Each time a learning phase is completed, a new hypothesis is learned, and the examples are reweighted such that instances that were properly identified during that phase receive a lower weight, so that the system can focus on instances that were not. Instances that were incorrectly categorised are selected so they can be correctly categorised in the following learning stage. This procedure continues until the final classifier is built. To arrive at the final forecast, the output of each classifier is finally merged using majority voting. The boosting method has been generalised in AdaBoost (Breiman) [12].
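A minimal boosting sketch along the same lines, assuming scikit-learn's AdaBoostClassifier with its default decision-stump base learner; the data are again synthetic placeholders.

```python
# AdaBoost sketch: weak learners are trained sequentially and misclassified
# instances are reweighted before each new round.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
ada = AdaBoostClassifier(n_estimators=100, random_state=0)
ada.fit(X, y)
print("training accuracy:", ada.score(X, y))
```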
(3) Random Subspaces
The approach comes in two different forms. In the first form, each base learner is taught using a distinct feature subspace of the initial training data set. In the second form, only decision trees may be utilised as the base learner (Gopika and Azhagusundari, 2014) [9].

(4) Random Forest

Breiman proposed Random Forest, which can be formulated as bagging plus the second kind of random subspaces (Breiman) [12]. The bagging and random subspace methods are combined to induce the trees. Each model is a random tree, and it differs from bagging in that each tree is created from a bootstrap sample of the training set of size N and each node is split using a further random step: instead of examining all potential splits, a limited subset of features is randomly picked, and the optimum split is determined from this subset. Across all trees, the majority vote determines the final classification [11].

(5) Rotation Forest

Rotation Forest is a newer ensemble approach built on Principal Component Analysis (PCA) and decision trees. To create a training set for the base classifier using a K-axis rotation of the feature subsets, the attribute set F is randomly divided into K subgroups, and PCA is then performed separately on each subset. By keeping all of the principal components, Rotation Forest retains all of the information. The base classifier for Rotation Forest is the decision tree (Pandey and Taruna, 2014) [11].
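The random forest idea above (bagging of trees plus a random feature subset considered at each split) can be sketched as follows; max_features controls the size of the random subset, and all values shown are assumptions.

```python
# Random forest sketch: bootstrap samples per tree + random feature subset
# per split; the majority vote across trees gives the final class.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
rf = RandomForestClassifier(n_estimators=200, max_features="sqrt",
                            random_state=0)
rf.fit(X, y)
print("training accuracy:", rf.score(X, y))
```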
CROSS VALIDATION TECHNIQUES
A statistical technique called cross-validation determines how well a trained model will perform on unobserved data. By training the model on a subset of the input data and testing it on a different subset, the model's effectiveness is confirmed. Cross-validation helps build a generalised model. Because modelling is an iterative process, cross-validation is helpful for both performance estimation and model selection. Cross-validation involves three steps: (i) split the dataset into a training section and a testing section; (ii) train the model on the training dataset; and (iii) use the testing set to gauge the model's effectiveness, checking for problems if the model does not perform well on the testing set. If a model predicts accurately for a variety of input data and performs well on unknown data, it is stable and consistent; cross-validation thus aids in evaluating the stability of machine learning models. The dataset has to be divided into three separate sections for training and testing the model:
• Training Data: Using the training data, the model is trained to discover the dataset's hidden characteristics and patterns. The model continually assesses the data to better understand its behaviour and then adjusts itself to achieve its goal. Basically, it is employed to fit the models.
• Validation Data: This is used to confirm that the model's training results are accurate. It aids in tuning the model's hyper-parameters and settings appropriately. The prediction error for model selection is estimated using the validation data, which helps prevent overfitting.
• Test Data: Following training, the test data confirm that the trained model is capable of making accurate predictions. They are used to evaluate the generalisation error of the final model chosen (Hulu and Sihombing, 2020; Jung, 2015) [1,7,8,13,14].
This paper discusses eight alternative cross-validation approaches, each with the advantages and disadvantages stated below.

(1) Leave p-out cross-validation

An exhaustive cross-validation strategy called leave-p-out cross-validation uses p observations as validation data while utilising the remaining data to train the model. This is repeated over all possible ways of partitioning the original sample into a validation set of p observations and a training set. Leave-pair-out cross-validation, a variation of leave-p-out with p = 2, has been suggested for estimating the area under the ROC curve of a binary classifier in a virtually unbiased manner (Kumar, 2020) [14].
(2) Leave one out cross-validation
A thorough cross-validation method is leave-one-out cross-validation. It falls within the leave-p-out cross-validation category, with p = 1. The first row of a dataset of n rows is chosen for validation, and the remaining n-1 rows are utilised to train the model. For the next iteration, the second row is chosen for validation and the remainder is used to train the model. The procedure is repeated in the same way for up to n operations or phases. Cross-validation techniques that learn and test in every conceivable way, i.e. leave-p-out and leave-one-out, are known as exhaustive cross-validation techniques. They share advantages, such as being straightforward, understandable, and simple to use, and disadvantages, such as the model possibly exhibiting a small bias and requiring a lot of computing time [13,14].
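A brief sketch of the exhaustive splitters described above, assuming scikit-learn's LeaveOneOut and LeavePOut and the Iris dataset as a stand-in; note how quickly the number of leave-p-out splits grows with the dataset size.

```python
# Exhaustive CV sketch: leave-one-out and leave-p-out (p = 2).
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, LeavePOut, cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print("LOO mean accuracy:", loo_scores.mean())

# LeavePOut(2) enumerates every possible pair as the validation set, which
# is computationally expensive for all but small datasets.
lpo = LeavePOut(p=2)
print("number of LpO splits:", lpo.get_n_splits(X))
```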
(3) Holdout cross-validation
The dataset is randomly divided into training and validation data in holdout cross-validation. In general, a larger share of the data is allocated to training than to testing. The model is created using the training data, and the validation data are used to assess the model's effectiveness; the model becomes better as more data are used to train it. The holdout cross-validation approach sets training data aside from a sizable amount of data. Its advantages are that it is straightforward, understandable, and simple to use; its disadvantages are that it is not suitable for an unbalanced dataset and that a lot of data is not used to train the model (Raschka, 2020) [5].
(4) Repeated random sub-sampling validation
The dataset is randomly divided into training and validation sets in repeated random sub-sampling validation, commonly known as Monte Carlo cross-validation. Unlike k-fold cross-validation, this method separates the dataset into random splits rather than groups or folds. The number of iterations is determined by the analysis; it is not a set quantity. The outcomes are then averaged over the splits. The advantage of this validation is that the fraction of the train and validation splits does not depend on the number of iterations or divisions; the disadvantages are that some samples may never be used for either training or validation and that it is not appropriate for an imbalanced dataset [5,14].
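The holdout and Monte Carlo schemes from the last two subsections can be sketched together; ShuffleSplit is one common implementation of repeated random sub-sampling, and the 70/30 split ratio is an assumption.

```python
# Holdout split and repeated random sub-sampling (Monte Carlo) sketch.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (ShuffleSplit, cross_val_score,
                                     train_test_split)

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

# Simple holdout: one random 70/30 split.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
print("holdout accuracy:", clf.fit(X_tr, y_tr).score(X_te, y_te))

# Monte Carlo CV: 10 independent random splits; results are averaged.
mc = ShuffleSplit(n_splits=10, test_size=0.3, random_state=0)
print("Monte Carlo mean accuracy:", cross_val_score(clf, X, y, cv=mc).mean())
```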
(5) k-fold cross-validation
The original dataset is evenly divided into k subparts or folds. For each iteration, one of the k folds is chosen as the validation data, while the remaining (k-1) folds are used as the training data. The procedure is repeated k times, until each fold has been used once as validation data with the rest as training data. The model's final accuracy is calculated as the mean accuracy over the k validation folds. Its advantages are that the model exhibits little bias, the time complexity is low, and the complete dataset is used for both training and validation (Raschka, 2020).

(6) Stratified k-fold cross-validation
All the cross-validation methods mentioned above might not be effective with an unbalanced dataset. The unbalanced dataset issue is addressed by stratified k-fold cross-validation. The dataset is divided into k groups or folds such that the validation data contain an equal proportion of instances of each target class label. This ensures that one particular class is not over-represented in the validation or training data, which matters especially when the dataset is unbalanced. The average of the scores for each fold is used to obtain the final score. Its main benefit is that it performs well for an unbalanced dataset (Refaeilzadeh, 2008) [3,14].
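A short sketch contrasting plain and stratified k-fold, assuming scikit-learn splitters; with a balanced dataset like Iris the scores are similar, but on imbalanced data the stratified variant keeps class proportions equal across folds.

```python
# k-fold vs stratified k-fold sketch: the stratified variant preserves the
# class proportions of the target label in every fold.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print("k-fold mean accuracy:", cross_val_score(clf, X, y, cv=kf).mean())
print("stratified mean accuracy:", cross_val_score(clf, X, y, cv=skf).mean())
```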
(7) Time Series cross-validation
When dealing with problems involving time series, the order of the data is crucial. For time-related datasets, randomly splitting the data into train and validation sets or k-folds might not produce the best results. The forward chaining method, also known as rolling cross-validation, is used to divide the time-series dataset into train and validation groups: for a given iteration, the subsequent instances of the train data can be used as validation data [13].

(8) Nested cross-validation

With k-fold and stratified k-fold cross-validation, we obtain a subpar estimate of the error on training and test data when hyper-parameter tuning is done separately. Nested cross-validation is necessary when cross-validation is used to tune the hyper-parameters and to estimate the generalisation error at the same time. Both the k-fold and stratified k-fold variants can be used in nested cross-validation [14].
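The two schemes above can be sketched as follows, assuming scikit-learn's TimeSeriesSplit and a GridSearchCV-inside-cross_val_score construction for nested cross-validation; the data, parameter grid, and split counts are illustrative assumptions.

```python
# Time-series and nested CV sketch. TimeSeriesSplit keeps validation data
# strictly after the training data (forward chaining); nested CV wraps
# hyper-parameter tuning (inner loop) inside an outer error-estimation loop.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import (GridSearchCV, StratifiedKFold,
                                     TimeSeriesSplit, cross_val_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = (X[:, 0] + rng.normal(scale=0.5, size=120) > 0).astype(int)

# Forward chaining: each split trains on the past, validates on the future.
for tr, va in TimeSeriesSplit(n_splits=4).split(X):
    print("train up to index", tr[-1], "-> validate", va[0], "to", va[-1])

# Nested CV: inner loop tunes C, outer loop estimates generalisation error.
inner = GridSearchCV(LogisticRegression(max_iter=1000),
                     {"C": [0.1, 1.0, 10.0]},
                     cv=StratifiedKFold(n_splits=3))
outer_scores = cross_val_score(inner, X, y, cv=StratifiedKFold(n_splits=5))
print("nested CV mean accuracy:", outer_scores.mean())
```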
BASIC PROCESS OF MACHINE LEARNING
The field of machine learning blends traditional statistical methods with computer science techniques. The basic process (Fig. 1) comprises the following steps:
1. Data Collection: gather all the information needed from the many systems that might contribute to the problem.
2. Data Pre-processing: raw data must be cleaned and transformed prior to processing and analysis. This crucial phase frequently entails reformatting data, making adjustments to data, and fusing data sets to enhance the data. Data cleaning, the initial phase of data mining, removes incomplete or inconsistent records, since data sets frequently contain missing and inconsistent data and low data quality has a significant negative influence on the information extraction process. Data integration reliably aggregates data that come from many sources.
3. Feature Engineering: this covers all modifications made to the data, from cleaning it up to ingesting it into the machine learning model. The features used in the model are chosen and prepared in this stage, making sure they are in the format required by the model.
4. Model Selection: choose the best model for the problem and then make any necessary adjustments.
CONCLUSION AND FUTURE WORK
Feature selection and feature extraction from DNA sequence data were successfully completed with this method. Here, we employed k-mer counting, one-hot encoding, and ordinal encoding in Python libraries for selecting DNA sequence features. We have demonstrated the results produced with these libraries in the form of matrices, vectors, and graphs. In future work, the retrieved k-mers will also be used in the classification process.
Fig. 1: Basic process of machine learning (source: datavalley).
Fig. 3: DNA sequence with class labels. Fig. 4: Class distribution graph.

Feature Selection: Although the DNA sequence is represented by characters, machine learning algorithms need numerical values or feature matrices. To convert these characters into values, we use three general approaches: ordinal encoding, one-hot encoding, and k-mer counting. With ordinal encoding, each nitrogen base is encoded as an ordinal value; "A, T, G, and C", for instance, becomes [0.25, 0.5, 0.75, 1.0], and any additional base, such as "Z", may be assigned 0.

Fig. 5: Python code for ordinal encoding. Fig. 7: Python code for the one-hot encoder. Fig. 9: Python code for k-mer counting.
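Since the original code figures (Figs. 5, 7, and 9) are not reproduced in this extraction, the following is a hedged sketch of the three encodings, assuming scikit-learn >= 1.2; the sequences and k-mer size are placeholders, not the paper's exact choices.

```python
# Sketch of the three DNA feature encodings named above.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.preprocessing import OneHotEncoder

seq = "ATGCATGZ"

# Ordinal encoding: map each base to an ordinal value; unknown bases -> 0.
ordinal_map = {"A": 0.25, "T": 0.5, "G": 0.75, "C": 1.0}
ordinal = np.array([ordinal_map.get(b, 0.0) for b in seq])
print("ordinal:", ordinal)

# One-hot encoding: each base becomes a binary indicator vector.
onehot = OneHotEncoder(sparse_output=False, handle_unknown="ignore")
print("one-hot:\n", onehot.fit_transform(np.array(list(seq)).reshape(-1, 1)))

# k-mer counting: overlapping substrings of length k treated as "words".
def kmers(s, k=4):
    return " ".join(s[i:i + k] for i in range(len(s) - k + 1))

cv = CountVectorizer()
bag = cv.fit_transform([kmers("ATGCATGCAT"), kmers("GGGCATGCAT")])
print("k-mer vocabulary:", cv.get_feature_names_out())
print("count matrix:\n", bag.toarray())
```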
Mesalazine Inhibits Amyloid Formation and Destabilizes Pre-formed Amyloid Fibrils in Human Insulin
Amyloid formation due to protein aggregation is associated with several amyloid diseases (amyloidosis). The use of small organic ligands as inhibitors of protein aggregation is an attractive strategy for the treatment of these diseases. In the present study, we evaluated the in vitro inhibitory and destabilizing effects of Mesalazine on human insulin fibrillation. To induce fibrillation, human insulin was incubated in 50 mM glycine buffer (pH 2.0) at 50 °C. The effect of Mesalazine on insulin amyloid aggregation was studied using spectroscopic, imaging, and computational approaches. Based on the results, the Mesalazine in a concentration-dependent manner (different ratios (1:0.1, 1:0.5, 1:1, and 1:5) of the insulin to Mesalazine) prevented the formation of amyloid fibrils and destabilized pre-formed fibrils. In addition, our molecular docking study confirmed the binding of Mesalazine to insulin through hydrogen bonds and hydrophobic interactions. Our findings suggest that Mesalazine may have therapeutic potential in the prevention of insulin amyloidosis and localized amyloidosis.
Introduction
Amyloid-related disorders such as Alzheimer's disease, Parkinson's disease, type 2 diabetes, and localized amyloidosis involve the accumulation of amyloid aggregates of specific proteins [1,2]. The number of patients with protein misfolding diseases (PMDs) is increasing rapidly [3]. Several factors, such as the lack of effective drug options, increased life expectancy, and population growth, are the reasons for the increasing attention to amyloid diseases worldwide [4].
Insulin, a small globular protein consisting of 51 amino acids, is widely used as a model protein for studying protein aggregation inhibitors. Human insulin, as a blood sugar-reducing hormone, is widely used as an anti-diabetic medication [5]. On the other hand, different studies have demonstrated that this protein causes localized amyloidosis after repeated subcutaneous injections [6], a process that can severely limit its therapeutic effect against type 2 diabetes [7]. In addition, according to a recent study, insulin aggregation has been identified at insulin injection sites and in diabetic patients with Parkinson's disease (PD); such cases significantly reduce hormone activity and are often responsible for necrotic deposits in diabetic individuals [8]. Insulin fibrils are also associated with Parkinson's disease, as patients' sera show an autoimmune response to insulin oligomers and fibrils [9]. In the treatment of amyloidogenic disorders, inhibition of amyloid formation and disruption of pre-formed fibrils have been proposed as two important strategies [10]. Small-molecule inhibitors are potential candidates for diseases caused by amyloid fibrillation [3, 11-13].
Mesalazine, or 5-aminosalicylic acid (5-ASA), is a non-steroidal anti-inflammatory drug (NSAID). This synthetic drug plays a role in the treatment of inflammatory bowel diseases and has anti-inflammatory, antioxidant, antifungal, antibacterial, anti-cancer, anti-diverticulosis, anti-amyloid, anti-ulcer and gastroprotective properties [14]. Mesalazine can be very important therapeutically in the development of new molecules that prevent the accumulation of insulin and other localized amyloidoses. In a previous study, we investigated the effects of Mesalazine on the formation and elimination of amyloid fibrils of lysozyme in vitro [15]. Given the potent anti-amyloidogenic properties of this compound on lysozyme, we decided to investigate the in vitro inhibitory and destabilizing effects of Mesalazine on human insulin fibrillation. Under the experimental conditions (acidic pH and high temperature), insulin monomers convert to partially folded intermediates; the mentioned factors are therefore among the important triggers of amyloid fibril formation [16,17].
Formation of Insulin Amyloid Fibrils and Anti-aggregating and Dis-aggregating Activities of Mesalazine
The protein solution was prepared at a concentration of 4 mg/ml in 50 mM glycine buffer (pH 2.0) containing 0.02% NaN3. To induce amyloid fibril formation, 200 µl of native human insulin (HI) solution was incubated at 50 °C without stirring. To evaluate the protein fibrillation process, insulin samples were incubated at 50 °C and pH 2.0 in the absence and presence of different ratios of the protein to Mesalazine (1:0.1, 1:0.5, 1:1, and 1:5). Then, to investigate the protein disaggregation process, the pre-formed insulin fibrils were incubated at 37 °C for 3 days without stirring in the absence and presence of different ratios of the protein to Mesalazine (1:0.1, 1:0.5, 1:1, and 1:5).
Congo Red (CR)
In this test, a CR dye stock solution (2 mM) was prepared by dissolving CR in 5 mM potassium phosphate buffer (pH 7.4) containing 150 mM NaCl and 0.02% NaN3, then passed through a 0.22 µm filter and kept in the dark at 4 °C. To measure the CR absorbance spectra of human insulin (incubated at 50 °C) in the absence and presence of Mesalazine, 195 µl of CR solution (final concentration 12.5 µM) was mixed with 5 µl of each sample and incubated at room temperature for 30 min in the dark [18,19]. CR spectra were recorded between 400 and 700 nm using an Epoch microplate reader (BioTek, USA).
Thioflavin T Fluorescence (ThT)
In this assay, a ThT dye stock solution (2 mM) was prepared by dissolving ThT in 10 mM sodium phosphate buffer (pH 7) containing 150 mM NaCl and 0.02% NaN3, then passed through a 0.22 µm filter and kept in the dark at 4 °C. To measure the fluorescence intensity, 15 µl of each sample was added to 585 µl of ThT solution (dye final concentration of 20 µM) and incubated for 2 min at room temperature [18,19]. ThT fluorescence intensities were recorded using a Cary-Eclipse fluorescence spectrophotometer (Agilent, USA) with excitation at 440 nm and emission at 488 nm. Excitation and emission slit widths were set at 5 and 10 nm, respectively. To obtain the kinetic curve and the lag phase time, the ThT fluorescence intensity of the incubated samples was measured over time at 485 nm according to the above method, and the data were then fitted to the following sigmoidal equation [20]:

F = F_max / (1 + exp[-(t - t_m)/τ])

where F is the fluorescence intensity at time t, F_max is the fluorescence intensity at the end of the experiment, and t_m is the time required to reach 50% of the maximum fluorescence. Nonlinear regression was used to calculate the fibril growth time constant (τ); 1/τ is the fibril growth rate constant, and t_m - 2τ gives the early lag time.
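As an illustration of how the sigmoidal equation above can be fitted by nonlinear regression, the following sketch uses scipy's curve_fit on synthetic data; the time scale and noise level are assumptions, and the study's actual fitting may have used different software.

```python
# Fit F(t) = F_max / (1 + exp(-(t - t_m)/tau)) to ThT intensity data
# (synthetic data here; real measurements would replace t and F).
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, f_max, t_m, tau):
    return f_max / (1.0 + np.exp(-(t - t_m) / tau))

t = np.linspace(0, 48, 25)  # hours (assumed time scale)
F = sigmoid(t, 100.0, 20.0, 3.0) \
    + np.random.default_rng(1).normal(0, 2, t.size)

(f_max, t_m, tau), _ = curve_fit(sigmoid, t, F, p0=[F.max(), t.mean(), 3.0])
print(f"F_max = {f_max:.1f}, t_m = {t_m:.1f} h, "
      f"growth rate 1/tau = {1/tau:.3f} 1/h")
print(f"lag time = t_m - 2*tau = {t_m - 2*tau:.1f} h")
```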
8-anilinonaphthalene-1-sulfonic Acid (ANS)
The preparation and storage of the concentrated ANS stock solution were the same as for ThT. To measure the ANS fluorescence intensity, 15 µl of each sample was added to 585 µl of ANS solution (final concentration of 20 µM), mixed, and incubated for 1 min at room temperature [18,19]. ANS fluorescence intensities were recorded using a Cary-Eclipse fluorescence spectrophotometer (Agilent, USA) with excitation at 380 nm and emission between 420 and 600 nm. Excitation and emission slit widths were set at 5 and 10 nm, respectively.
Atomic Force Microscopy (AFM)
To study the effect of Mesalazine on the morphology of insulin fibrils, a volume of 5 μL of each sample was diluted 50 times with deionized water and air-dried on a mica plate. After drying, the samples were imaged using a Full Plus AFM in non-contact (NC-AFM) imaging mode at a scan frequency of 0.5 Hz (Ara Pajoohesh, Iran).
Molecular Docking
AutoDock Tools v1.5.6 was used to investigate the interaction between insulin and Mesalazine. The crystal structure of human insulin was retrieved from the Protein Data Bank (PDB ID: 1GUJ) (http://rcsb.org/), and the three-dimensional structure of Mesalazine (CID: 4075) was obtained from the PubChem database. Discovery Studio and VMD v1.9.3 were used to prepare the two-dimensional (2D) and three-dimensional (3D) schematic representations of the docking model and to show the different orientations between the ligand and the protein.
Statistical Analysis
Data analysis was performed in GraphPad Prism software version 9.2.0 using a one-way ANOVA test. Data obtained from 3 repetitions of the experiments are displayed as mean ± standard deviation, and P < 0.05 was considered a significant difference.
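For orientation, a one-way ANOVA of this kind can be outlined with scipy as below; the group values are hypothetical placeholders, and the study's actual analysis was performed in GraphPad Prism.

```python
# One-way ANOVA sketch across treatment groups (placeholder data).
from scipy import stats

fibrils      = [98, 102, 100]  # e.g. ThT intensity, insulin alone (assumed)
ratio_1_to_1 = [60, 63, 58]    # insulin:Mesalazine 1:1 (hypothetical values)
ratio_1_to_5 = [30, 28, 33]    # insulin:Mesalazine 1:5 (hypothetical values)

f_stat, p_value = stats.f_oneway(fibrils, ratio_1_to_1, ratio_1_to_5)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 -> significant
```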
CR Binding Assay
Detection of amyloid fibrils using Congo red is associated with increased Congo red absorption and a shift to wavelengths above 490 nm (red shift) [11,21]. Congo red absorption in the presence of amyloid fibrils increased and shifted from 490 to 510 nm (Fig. 1). Figure 1 shows the effects of Mesalazine on the fibrillation (Fig. 1a) and disaggregation (Fig. 1b) of insulin. Mesalazine at different concentrations reduced the absorption of Congo red and shifted the maximum absorption peak to a shorter wavelength (blue shift). The largest decrease was observed at the 1:5 ratio (insulin:Mesalazine). These changes confirm the ability of Mesalazine to inhibit and destabilize amyloid fibrillation [22,23].
ThT Fluorescence Analyses
One of the methods used to identify amyloid fibrils and study fibrillation kinetics is binding of the fluorescent dye ThT, which increases its fluorescence emission [5]. Identifying the key stages of insulin fibril formation can provide important information for preventing fibril formation [24]. Changes in the ThT fluorescence intensity of insulin solutions in the presence and absence of different concentrations of Mesalazine over different time intervals are shown in Fig. 2a. Co-incubation of different concentrations of Mesalazine with insulin showed a decrease in ThT intensity (Fig. 2b), which was most evident at the 1:5 ratio (insulin:Mesalazine). Addition of Mesalazine at different ratios to pre-formed fibrils decreased the fluorescence intensity, as shown in Fig. 2c.
ANS Fluorescence Analyses
ANS is a hydrophobic fluorescence probe whose intensity increases upon binding to hydrophobic surfaces of proteins [25,26]. It is used to study surface hydrophobicity changes in amyloid fibrils, in which hydrophobic areas are more exposed at the surface. As shown in Fig. 3, the fluorescence intensity of the fibril samples increased significantly, and the emission maximum shifted to shorter wavelengths. These changes indicate the formation of amyloid fibrils due to the exposure of hydrophobic surfaces and the interaction of ANS with these regions. In the presence of different concentrations of Mesalazine (Fig. 3a, b), the fluorescence intensity decreased, especially at the 1:5 ratio (insulin:Mesalazine).

Figure 4a and d show the elongated, unbranched morphology of insulin amyloid fibrils [9] formed after incubation at 50 °C and pH 2.0. AFM images of insulin samples incubated in the presence of Mesalazine did not show any visible fibrillar structures (Fig. 4b and e) in comparison to the fibril samples. In addition, the pre-formed insulin fibrils were significantly reduced in the presence of Mesalazine (Fig. 4c and f).
In Silico Study
Molecular docking is used as a key virtual screening tool in drug discovery [27]. Based on the results, Mesalazine binds as a ligand to the hydrophobic core of the protein (Fig. 5a, b). In this binding mode, the ligand forms hydrogen bonds with Tyr19 and Thr27 (3.16 and 3.20 Å, respectively). In addition, the amino acids Ile2, Tyr19, Phe25, Tyr26, and Thr27 form hydrophobic contacts with the ligand (Fig. 5c).
Discussion
One of the main problems arising from long-term repeated subcutaneous injections of insulin in diabetic patients is localized amyloidosis at the injection sites, which causes many problems in the insulin treatment process for these patients. It is well accepted that hydrophobic interactions, such as π-π stacking between aromatic rings, as well as hydrogen bonds between polypeptide chains, play an important role in intermolecular interactions in fibrils and aggregation-prone structures [28-32]. Using small molecules that interfere with the formation of insulin protein aggregates or destabilize pre-formed fibrils can therefore be an important strategy to prevent amyloidosis at these sites. In one study, ibuprofen was shown to inhibit human insulin amyloid formation in the nucleation phase; following the binding of ibuprofen to the native protein, its secondary and tertiary structures were almost completely protected, and ibuprofen was also able to maintain the native structure of insulin by reducing the hydrophobicity of the protein surface [33]. In another study, researchers showed that minocycline can bind hydrophobic regions in β-sheet-rich structures and destabilize amyloids of human insulin in diabetic patients treated with insulin [34]. Our results were in line with these studies. In the presence of Mesalazine, the ThT fluorescence intensity decreased and the lag phase was delayed, indicating its anti-aggregation potential. Moreover, other studies have shown prolongation of the lag phase of insulin aggregation [35,36]. Our previous study demonstrated that Mesalazine, at all concentrations, and especially at a 1:1 ratio (drug to protein) and higher, had a strong inhibitory effect on lysozyme protein fibrillation; a direct role of the aromatic ring and the OH groups around the ring in the inhibitory activity of Mesalazine has been shown [15]. After confirming the inhibitory effect of Mesalazine on insulin fibril formation, the effect of the compound on pre-formed amyloid fibrils was investigated. Our results demonstrated that Mesalazine is able to destabilize the pre-formed fibrils dose-dependently. Since aromatic residues are present in the structure of insulin, it is thought that Mesalazine, as a small molecular compound with an aromatic ring, was able to interact with aromatic residues and impede the interaction between amino acid side chains, resulting in destabilization of the fibrillar aggregates. These results contrast with our previous study, which showed that Mesalazine had no effect on the destabilization of pre-formed lysozyme fibrils [15]. This contradiction can be related to differences in the structures of insulin and lysozyme and in the type of fibrils formed by these two proteins. Docking studies also confirmed the experimental results. As mentioned above, hydrophobic interactions and hydrogen bonding have been identified as the key driving factors in promethazine's anti-amyloid activity [37]. Based on biophysical studies, the B chain, or part of it, may be the main determinant of insulin fibrillation [38]. The results of one study showed that aggregation begins with dimerization, so surface and non-polar residues (RGFFYTPKT, B22-30) are critical for the nucleation stage [39]. Our molecular docking showed that Mesalazine has the potential to prevent exposure of the hydrophobic core of the protein, owing to its binding to the insulin structure through hydrogen bonding and hydrophobic interactions.
This ability of Mesalazine is thought to stabilize insulin and inhibit the process of protein fibrillogenesis.
Conclusions
The results of the CR, ThT, and ANS analyses and the AFM images were consistent with each other, indicating the high ability of Mesalazine to inhibit and destabilize the protein fibrillation process at all concentrations tested, and especially at the 1:5 ratio of insulin to Mesalazine. Therefore, Mesalazine can largely prevent the formation of amyloid fibrils and has a great effect on the removal of pre-formed plaques. Our research is also consistent with the results of studies conducted by other researchers. Therefore, Mesalazine, as a drug candidate, may contribute to the development of potential therapeutic strategies and the design of drug molecules against amyloid fibril formation.
Relationship between the effects of food on the pharmacokinetics of oral antineoplastic drugs and their physicochemical properties
Background: Food is known to affect drug absorption by delaying gastric emptying time, altering gastrointestinal pH, stimulating bile flow, increasing splanchnic blood flow, or physically interacting with drugs. Although food is known to affect the pharmacokinetics of oral antineoplastic drugs, the relationship between the effects of food and the physicochemical properties of drugs remains unclear. Methods: In this study, we surveyed the literature on three kinds of pharmacokinetic changes, the AUC ratio, Cmax ratio and Tmax ratio in the fasted and fed states, for 72 oral antineoplastic drugs that were listed on the drug price standard in May 2018 in Japan. We further predicted the physicochemical properties of the antineoplastic drugs from their 2D chemical structures using in silico predictions. Results: In analyzing the relationship between the effects of food and physicochemical properties, we found that compounds showing increased absorption in the fed state had higher logP and lower solubility in fasted-state simulated intestinal fluid (FaSSIF), whereas compounds with delayed absorption had higher solubility in FaSSIF. Furthermore, decision tree analysis classified compounds with logP ≥ 4.34 as showing an AUC increase, and we found that an AUC increase in the fed state did not occur for compounds with low lipid solubility (logP < 1.59). From these results, 7 of the 24 compounds for which the effects of food are unknown are predicted to be at risk of increased absorption in the fed state, and no increase in absorption is predicted for 13 compounds. Conclusion: In this study, we found that drugs that show increased absorption in the fed state and drugs whose absorption is not dependent on food can generally be predicted by logP. These results suggest that logP can be a useful parameter for predicting the effects of food on drug absorption.
Background
Food is well known to affect drug absorption by delaying gastric emptying time, altering gastrointestinal pH, stimulating bile flow, increasing splanchnic blood flow, or physically interacting with drugs [1][2][3]. Furthermore, different foods, based on factors such as nutritional composition (high-protein, carbohydrate-rich, or high-fat meals), calorie content (low vs high calorie meals), volume, temperature and fluid ingestion, have distinct influences on the transit time, luminal dissolution, permeability and bioavailability of the drug product [4].
The Biopharmaceutics Classification System (BCS) is a scientific framework for classifying drug substances based on their aqueous solubility and intestinal permeability [5]. According to the BCS, drug substances are classified into four categories based on their solubility and intestinal permeability. Fisher et al. reported that drug-food interactions could generally be predicted based on the BCS class [6]: for Class 1 drugs (high solubility/high permeability), a high-fat meal will have no significant effect on drug bioavailability; for Class 2 drugs (low solubility/high permeability), a high-fat meal will increase drug bioavailability; for Class 3 drugs (high solubility/low permeability), a high-fat meal will decrease drug bioavailability; and for Class 4 drugs (low solubility/low permeability), it is difficult to predict what will occur [6,7]. Gu CH et al. further improved the prediction of food effects by classifying drugs based on the solubility, permeability and dose of a compound [8]. Although they analyzed 90 marketed compounds, only one oral antineoplastic drug was included in their models.
The number of oral antineoplastic drugs approved for manufacture in Japan has been increasing substantially [9]. In particular, the number of molecular-targeted drugs, many of which are affected by food, has risen remarkably in recent years [10]. For many of these drugs, dietary conditions are defined in the usages described in the package inserts [11]. On the other hand, oral antineoplastic drugs that are not molecular-targeted drugs include many for which dietary conditions are not defined in the usage instructions. Since the therapeutic range and the toxic range of oral antineoplastic drugs are in close proximity, the effects of food must be considered when evaluating their varying pharmacokinetics. Although it is already known that food may affect the pharmacokinetics of oral antineoplastic drugs [12][13][14], the relationship between the effects of food and the physicochemical properties of the drugs remains unclear.
In this study, we reviewed the pharmacokinetic changes caused by food in oral antineoplastic drugs and evaluated their relevance to the physicochemical properties of these drugs by in silico predictions. In addition, we predicted the pharmacokinetic changes of drugs for which the effects of food are unknown, using the physicochemical properties as indicators.
Investigation of oral antineoplastic drugs
We surveyed the literature on three kinds of pharmacokinetic changes, namely the ratio of the area under the drug concentration-time curve (AUC ratio), the ratio of the maximum serum concentration (Cmax ratio), and the ratio of the time at which Cmax is observed (Tmax ratio), in the fasted and fed state for 72 oral antineoplastic drugs that were listed on the drug price standard in May 2018 in Japan [15]. For drugs without ratio data in the literature, ratios were calculated from the medians or averages of the AUC, Cmax and Tmax values in the fasted or fed state. In addition, for drugs with data from several clinical trials, we selected data from high-fat meals when several meal conditions were available, and the data closest to the usage approved in Japan when several dosages and administration techniques were available. Based on the collected information, we analyzed the distributions of the AUC ratio, Cmax ratio and Tmax ratio and the relationship between ln(AUC ratio) and ln(Cmax ratio) using the statistical analysis software JMP® Pro 13.1.0 (SAS Institute Inc., Cary, NC, USA).
The magnitudes of the effects of food were classified based on the reported pharmacokinetic differences between the fed and fasted states. With regard to the AUC ratio, food effects were classified into 3 groups, the absorption increase group (AUC ratio > 1.25), the absorption invariant group (0.8 ≤ AUC ratio ≤ 1.25), and the absorption decrease group (AUC ratio < 0.8), in accordance with the bioequivalence range (0.8-1.25) in the guidelines for bioequivalence studies of generic products [16]. The Tmax ratios were classified into 3 groups, the absorption time prolongation group (Tmax ratio > 2.0), the absorption time invariant group (0.5 ≤ Tmax ratio ≤ 2.0), and the absorption time shortening group (Tmax ratio < 0.5).
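These cutoffs amount to a simple three-way rule on each ratio. As a minimal sketch (the function names and example values are ours, not the study's analysis scripts), the classification can be expressed as:

```python
# Sketch of the food-effect classification described above; thresholds are
# taken from the text, everything else is illustrative.

def classify_auc_effect(auc_ratio: float) -> str:
    """Classify the food effect on exposure from the fed/fasted AUC ratio,
    using the bioequivalence range (0.8-1.25) as the invariant band."""
    if auc_ratio > 1.25:
        return "absorption increase"
    if auc_ratio < 0.8:
        return "absorption decrease"
    return "absorption invariant"

def classify_tmax_effect(tmax_ratio: float) -> str:
    """Classify the food effect on absorption time from the fed/fasted Tmax ratio."""
    if tmax_ratio > 2.0:
        return "absorption time prolongation"
    if tmax_ratio < 0.5:
        return "absorption time shortening"
    return "absorption time invariant"

print(classify_auc_effect(8.96))   # "absorption increase"
print(classify_tmax_effect(1.91))  # "absorption time invariant"
```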
In silico prediction of the physicochemical properties of oral antineoplastic drugs
We predicted the following physicochemical properties from the 2D chemical structures of the antineoplastic drugs by a prediction model using artificial neural network technology: the octanol/water partition coefficient (logP); the solubility in fasted-state simulated gastric fluid (FaSSGF), fasted-state simulated intestinal fluid (FaSSIF) and fed-state simulated intestinal fluid (FeSSIF) [17,18]; and the nonionized fraction at pH 6.8 (FUnion6.8) and pH 1.2 (FUnion1.2). These predictions were made using the ADMET physicochemical property prediction software ADMET Predictor™ 8.1 (Simulation Plus, Inc., Lancaster, CA, USA). For the accuracy of the logP predictions, the root mean square error (RMSE) was 0.314 log units, the mean absolute error (MAE) was 0.241 log units, and the R² value was 0.971. (Table 1 footnote: in addition to what is specified in the package inserts, conditions described in the precautions, such as "avoid taking from 1 h before meal until 2 h after meal", "avoid taking 1 h before and after meal" or "avoid taking from 1 h before meal until 2 h after meal when taking high fat meals", are also classified as "fasting".)
We analyzed the relationship between the known effects of food and the physicochemical properties using JMP® Pro 13.1.0. We analyzed the bivariate relationships using the AUC changes (AUC increase, invariance and decrease) as objective variables and logP and the solubility in FaSSGF, FaSSIF and FeSSIF as explanatory variables, and compared the medians for all pairs using the Steel-Dwass test. Similarly, we analyzed the bivariate relationships using the Tmax changes (Tmax prolongation, invariance and shortening) as objective variables and logP, the solubility in FaSSGF, FaSSIF and FeSSIF, and the FaSSIF/FeSSIF solubility ratio as explanatory variables, and compared the averages using Welch's test.
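A rough sketch of these group comparisons, using open-source stand-ins for the JMP procedures, is given below; the values are invented toy data, and pairwise Mann-Whitney U tests substitute for the Steel-Dwass all-pairs test, which SciPy does not provide:

```python
from itertools import combinations
from scipy.stats import ttest_ind, mannwhitneyu

# Toy logP values per AUC-change group (invented, not the study's data)
groups = {
    "AUC increase":  [4.9, 5.2, 6.0, 4.4, 7.1],
    "AUC invariant": [2.1, 2.6, 1.8, 3.0, 2.4],
    "AUC decrease":  [4.0, 3.6, 4.5, 5.1],
}

for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    u_p = mannwhitneyu(a, b, alternative="two-sided").pvalue  # nonparametric pairwise stand-in
    w_p = ttest_ind(a, b, equal_var=False).pvalue             # Welch's t-test
    print(f"{name_a} vs {name_b}: Mann-Whitney p={u_p:.4f}, Welch p={w_p:.4f}")
```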
Based on the results of this analysis, a decision tree analysis was performed with the changes in AUC as the objective variable and logP as the explanatory variable. The criterion by which nodes are split is the LogWorth statistic, LogWorth = −ln(p), where p is the chi-squared p-value of the candidate split; this statistic is maximized. In this way, the division point of logP related to the increase in drug absorption by food was obtained. Furthermore, we predicted whether absorption would increase for the drugs for which the effects of food are unknown.
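As an illustration of this criterion, the sketch below (ours, with invented toy values) scores candidate logP cutoffs by LogWorth, using a chi-squared test on the 2×2 table formed by each split:

```python
import math
from scipy.stats import chi2_contingency

# Toy data: logP values and whether the compound showed an AUC increase (1) or not (0)
logp     = [5.0, 4.5, 6.1, 2.0, 1.0, 3.0, 2.5, 4.4, 0.5, 1.8]
increase = [1,   1,   1,   0,   0,   0,   0,   1,   0,   0]

best_cut, best_lw = None, -1.0
for cut in sorted(set(logp)):
    # 2x2 table: rows = logP below / at-or-above cutoff, cols = no increase / increase
    table = [[0, 0], [0, 0]]
    for x, y in zip(logp, increase):
        table[int(x >= cut)][y] += 1
    if 0 in (sum(table[0]), sum(table[1])):   # skip degenerate splits
        continue
    chi2, p, dof, expected = chi2_contingency(table)
    logworth = -math.log(p)                   # LogWorth = -ln(chi-squared p-value)
    if logworth > best_lw:
        best_cut, best_lw = cut, logworth

print(f"best split: logP >= {best_cut} (LogWorth = {best_lw:.2f})")
```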
Effects of food on the pharmacokinetics of oral antineoplastic drugs
Information on the effects of food on the pharmacokinetics of 48 compounds (66.7%) out of the 72 investigated oral antineoplastic drugs was obtained. There were 30 compounds for which dietary conditions were defined in the usages or the precautions described in the package inserts; 15 compounds required postprandial administration, and the other 15 compounds required fasting administration (Table 1). The medians (maximum, minimum) of the AUC ratios, Cmax ratios and Tmax ratios were 1.08 (8.96, 0.61), 0.94 (13.97, 0.30) and 1.91 (3.92, 0.50), respectively. There was a positive correlation between ln(AUC ratio) and ln(Cmax ratio) (r² = 0.86) (Fig. 1).
Classification based on the type of the effect of food according to the AUC ratio resulted in 14 compounds in the absorption increase group, 26 compounds in the absorption invariant group, and 7 compounds in the absorption decrease group. Classification based on the Tmax ratio resulted in 15 compounds in the absorption time prolongation group, 23 compounds in the absorption time invariant group, and no compounds in the absorption time shortening group. The compounds in the absorption increase group and absorption decrease group are shown in Table 2. The AUC increased by a factor of 8 or more due to food in the cases of bexarotene and abiraterone acetate. On the other hand, the AUC decreased by approximately 60% due to food in the cases of capecitabine and afatinib.
In silico prediction of the physicochemical properties of oral antineoplastic drugs
Using JMP® Pro 13.1.0, we analyzed the relationship between the reported effects of food and the physicochemical properties obtained from the in silico predictions. The bivariate relationship was analyzed using the AUC changes as the objective variables and logP as the explanatory variable. The medians (maximum, minimum) of the logP values were 4.97 (7.46, 1.59) in the AUC increase group, 2.40 (5.44, −1.99) in the AUC invariant group, and 4.05 (5.56, 1.28) in the AUC decrease group. The median in the AUC increase group was significantly higher than that of the AUC invariant group (P = 0.0054) (Fig. 2a). In the bivariate analysis of the AUC changes and solubility in FaSSIF, the median of lnFaSSIF was −4.66 in the AUC increase group, −2.28 in the AUC invariant group and −3.41 in the AUC decrease group. The median in the AUC increase group was significantly lower than that of the AUC invariant group (P = 0.0013) (Fig. 2b). Similarly, for FeSSIF, the median of lnFeSSIF in the AUC increase group was lower than that of the AUC invariant group, although the difference was not significant (Fig. 2c).
In the bivariate analysis of the changes in Tmax and solubility in FaSSIF, the median of lnFaSSIF was −1.88 in the Tmax prolongation group and −4.27 in the Tmax invariant group (Fig. 3). The median in the Tmax prolongation group was significantly higher than that of the Tmax invariant group (P = 0.0129), and a similar trend was observed for FeSSIF. However, no significant difference was observed between the Tmax prolongation group and the Tmax invariant group in the bivariate analysis of the changes in Tmax and logP. As described above, we found that compounds for which absorption was increased by food had higher logP and lower solubilities in FaSSIF and FeSSIF, and that compounds for which absorption was delayed had higher solubilities in FaSSIF. On the other hand, no relationship between the effects of food and the other physicochemical properties, such as the nonionized fraction, was observed.
Since a correlation was found between increased absorption by food and the logP values, a decision tree analysis was performed with the AUC changes as the objective variable and logP as the explanatory variable, and the division point of logP related to the increase in drug absorption by food was obtained (Table 3). The division point of logP was 4.34, and compounds were classified as AUC invariant with logP < 4.34 and as AUC increase with logP ≥ 4.34 (the correct classification rate was 77.5%). The false-positive rate and the false-negative rate were 15.4 and 35.7%, respectively. Furthermore, we found that an AUC increase due to food did not occur with compounds with lower lipophilicities (logP < 1.59). Based on these results, we were able to predict whether absorption would increase for the 24 compounds for which the effects of food are unknown (Table 4). We predicted that the risk of an absorption increase due to food was high for 7 compounds with logP ≥ 4.34. All of these compounds had lower FaSSIF solubilities and were consistent with the characteristics of compounds for which absorption was increased by food. On the other hand, we inferred that an absorption increase would not occur with 13 compounds with logP < 1.59. These compounds tended to show higher FaSSIF solubilities relative to the compounds with logP ≥ 4.34.
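Read as a screening rule, the reported cutoffs could be applied as in the short sketch below; the thresholds come from the analysis above, while the three-way output, function name and example values are our own illustration:

```python
def predict_food_effect(logp: float) -> str:
    """Screen a compound for fed-state absorption changes using the logP cutoffs above."""
    if logp >= 4.34:
        return "risk of AUC increase in the fed state"
    if logp < 1.59:
        return "no AUC increase expected"
    return "indeterminate (1.59 <= logP < 4.34)"

for name, lp in [("compound A", 5.1), ("compound B", 0.9), ("compound C", 3.0)]:
    print(name, "->", predict_food_effect(lp))  # hypothetical compounds
```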
Discussion
In this study, we analyzed which of the physicochemical properties obtained by in silico predictions correlated with the AUC ratio, and the results suggested that drugs with high lipophilicity (logP) and low intestinal solubility (in FaSSIF and FeSSIF) carry a high risk of absorption increases due to food. This result is considered to be due to the solubility increase caused by the promotion of bile secretion by food [19]. Since the majority of tyrosine kinase inhibitors (TKIs) are substrates for drug transporters (e.g. ABCB1 and ABCG2) [7,20], food may also inhibit drug transporters, thereby increasing drug absorption [10]. On the other hand, we predicted that an absorption increase would not occur for compounds with high water solubility; for drugs with high intestinal solubility, the risk is instead a delayed absorption rate.
In the decision tree analysis, the division point of logP related to increased drug absorption by food was calculated as 4.34. In support of this finding, a previous study that predicted the effects of fed-state intestinal contents on drug dissolution showed that hydrophobic drugs with logP > 4 exhibited a significant increase in solubility in FeSSIF [18]. It was also reported that an increase in solubilization by bile acids would not occur for drugs with logP < 2 [3,21]. With a division point of 4.34, 64% (9 out of 14) of true positives (AUC increase) could be accurately predicted, while 36% were predicted as false negatives. In other words, logP ≥ 4.34 provides a high likelihood that a drug belongs to the AUC increase group, whereas 36% of AUC increase drugs have logP < 4.34. In total, 85% (22 out of 26) of true negatives (AUC invariant) could be accurately predicted, while only 15% were predicted as false positives; that is, almost all AUC invariant drugs have logP < 4.34.
Based on the literature results, we found that for most drugs for which absorption increased or decreased due to food, the usages described in the package inserts were defined as postprandial administration, fasting administration, or other specific conditions. On the other hand, among the drugs for which no clinical trial data on the effects of food are available, 7 compounds were predicted to be at risk of absorption increases by the decision tree analysis, yet meal conditions were defined in the usages in the package inserts for only 3 of these compounds. Since absorption may be increased by food for drugs with logP ≥ 4.34, food effects should be considered even for drugs that have no clinical trial data on food effects. In this study, we focused on the pharmacokinetic changes caused by food in oral antineoplastic drugs and evaluated their relevance to logP values. The logP value, which indicates lipophilicity, is a parameter frequently used in correlation with membrane permeability [22][23][24][25] and is a popular index for Japanese pharmacists. The logP value of each antineoplastic drug is easily available on the drug package insert and is easy for pharmacists to evaluate. The BCS classification has been used to evaluate food-drug interactions in the development stage of pharmaceuticals [26,27]; however, the index is not popular among clinical pharmacists in Japan to date. Additionally, the identification of "highly soluble" and "highly permeable" drugs for the BCS is not simple [26,27]. Therefore, we believe that the simple prediction of drug-food interactions by the logP values obtained from the results of this study is useful for clinical pharmacists.
In this study, we found significant differences between compounds with food-induced absorption increases and food-invariant absorption, and between those with food-induced absorption time prolongation and food-invariant absorption times. In addition, we found some common trends in these compounds based on their structures. There was no significant difference between the compounds for which absorption is decreased by food and those for which absorption is increased or invariant. Although their molecular weights tend to be large, there were only 7 compounds for which absorption decreased due to food, making it difficult to conduct further evaluations. Additionally, further evaluation of the relationship between food and the physicochemical properties is required, because drug administration tests are not performed under the same conditions and the contents of the meals ingested can vary. Because it is difficult to verify our predictions in clinical trials, we are trying to build a more accurate in silico classification model using an artificial neural network (ANN) as an alternative method of verification.
Conclusion
In this study, we found that the antineoplastic drugs for which absorption increases or does not change due to food can generally be predicted by their logP values. This suggests that we should implement pharmaceutical management with regard to meals and the timing of administration using logP as an index, considering characteristics of these drugs such as the narrow margin between their therapeutic and toxic ranges.
Sociodemographic characteristics of pediatric patients with vascular malformations: Results of a single site study
Vascular malformations, the abnormal development of blood vessels, are a rare set of congenital anomalies. The sociodemographic factors associated with vascular malformations in pediatric patients are poorly understood. This study examined sociodemographic factors of 352 patients presenting to a single vascular anomaly center from July 2019 to September 2022. Characteristics such as race, ethnicity, sex, age at presentation, degree of urbanization, and insurance status were recorded. These data were analyzed by comparing the different types of vascular malformations, including arteriovenous malformation, capillary malformation, venous malformation (VM), lymphatic malformation (LM), lymphedema, and overgrowth syndrome. Patients were primarily white, not Hispanic or Latino, female, had private health insurance, and were from the most urban setting. No differences in sociodemographic factors were found among the different vascular malformations, except that patients with VM presented at a later age than patients with LM or overgrowth syndrome. This study provides novel insight into the sociodemographic factors of pediatric patients presenting with vascular malformations and indicates a need for their improved recognition for the timely initiation of treatment.
Introduction
Vascular anomalies are a wide spectrum of pathologies that consist of two broad categories: vascular tumors and vascular malformations (1). Vascular malformations are further divided into different categories depending on which portion of the vascular or lymphatic system is impacted and whether they are a component of an underlying syndrome (2). Vascular malformations are thought to be present at birth due to impaired vascular or lymphatic morphogenesis in early development but may not become symptomatic until later in life, if ever (3). Vascular malformations in childhood may present a spectrum of morbidity, from cosmetic concerns to CNS involvement and systemic coagulopathy (3,4). Vascular tumors, however, are caused by endothelial cell proliferation and are categorized by their invasive potential. Though they may sometimes be confused with vascular malformations based on their mutual rarity and symptomology, they represent an entirely different disease process (2). Despite significant research into the causes and treatments of vascular malformations, less is known about the diversity of pediatric patients presenting with these conditions. Much research has explored how various sociodemographic factors interact with different diseases, revealing disparities in prevalence and outcome. For rare diseases such as vascular malformations, difficulty receiving an accurate diagnosis is an example of how sociodemographic factors may intersect with disease and cause disparities. Genetic testing modalities are increasingly improving diagnostic ability for rare diseases. However, studies have shown that underrepresented minorities, including children, are less likely to receive genetic testing, delaying or preventing a diagnosis from being made (5,6). Limited access to expert care may also lead to an incorrect diagnosis. Vascular malformations are often misdiagnosed, delaying access to appropriate care and potentially resulting in improper treatment (7,8). The expert care needed for correct diagnosis and management may be inaccessible for patients and their families who live outside the range of specialized vascular anomaly centers and lack the financial resources to travel long distances or miss work to do so (8). Highlighting disparities such as these can inform public health policy and practice to reduce the effect of sociodemographic characteristics on health.
Because of the rare nature of vascular malformations, it has been challenging to characterize sociodemographic factors of pediatric patients with these conditions. Without a comprehensive understanding of these factors, it is difficult to determine if and how these factors are associated with the development of vascular malformations or a delay in diagnosis or treatment. The purpose of this study was to examine the characteristics of a large population of pediatric patients with vascular malformations and investigate whether differences exist in sociodemographic factors.
Study design and participants
The institutional review board approved this study and waived the need for informed consent. This is a cross-sectional study of 352 patients under the age of 18 presenting with vascular malformations at a single vascular anomaly center from July 2019 to September 2022. This vascular anomaly center is an academic institution outpatient center located in a suburban region in the Southern United States. The center comprises a broad multidisciplinary team of approximately 40 providers spearheaded by pediatric interventional radiologists, pediatric dermatologists, and pediatric hematologists. Data were gathered regarding patient diagnosis and sociodemographic factors via chart review of the electronic medical record (EMR).
Diagnosis categories
The 2018 ISSVA classification was used to categorize patient groups (1). Vascular malformations were categorized as arteriovenous malformation (AVM), capillary malformation (CM), venous malformation (VM), lymphatic malformation (LM), lymphedema, and overgrowth syndrome. Overgrowth syndrome included the specific diagnoses of Klippel-Trenaunay syndrome (KTS); Congenital Lipomatous Overgrowth, Vascular Malformations, Epidermal Nevi, Spinal/Skeletal Anomalies/Scoliosis syndrome (CLOVES); Diffuse Capillary Malformation with Overgrowth (DCMO); Parkes Weber syndrome (PWS); PTEN hamartoma of the soft tissue (PHOST); and PIK3CA-Related Overgrowth Spectrum (PROS) other than KTS or CLOVES. Patients with isolated CM with no combined overgrowth syndrome or other vascular malformation were categorized into the CM group. Five patients had multiple vascular malformations that were not part of a syndrome (VM with CM) and were classified as VM to reduce group complexity and because of the relative severity of the conditions. Patients who presented with vascular tumors were excluded to narrow the scope of the study and reduce complexity.
Demographic variables
Data were gathered from the electronic medical record regarding race, ethnicity, sex, age at presentation, insurance status, and residential information captured by rural-urban commuting area (RUCA) codes. Race was categorized as white, black or African American, Asian, American Indian or Alaskan Native, Other, or unknown if no data were available. Patients who identified as more than one race were coded as the race assigned in the EMR, which may be one of their multiple races or "Other". Insurance status was categorized as private, Medicaid, military, state-health plan, or self-pay. RUCA codes were gathered from the 2010 US census, and patient addresses were used to assign each patient a particular RUCA code (9). RUCA codes range from 1 to 10 and represent a continuum from the most urban setting (1) to the most rural setting (10).
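As an illustration of this coding step, the sketch below assigns a RUCA code by looking up a geocoded census tract in a tract-level RUCA table; the FIPS codes, column names and values are hypothetical placeholders, not the actual 2010 census file:

```python
import csv
import io

# Hypothetical excerpt of a 2010 tract-level RUCA table (all values made up)
RUCA_CSV = """tract_fips,ruca_primary
48113020100,1
48113020200,2
48321950300,7
48301950100,10
"""

def load_ruca_table(text: str) -> dict:
    """Map census-tract FIPS codes to primary RUCA codes (1 = most urban, 10 = most rural)."""
    return {row["tract_fips"]: int(row["ruca_primary"])
            for row in csv.DictReader(io.StringIO(text))}

ruca_by_tract = load_ruca_table(RUCA_CSV)
patient_tract = "48113020100"  # tract obtained by geocoding a patient address (made up)
print("RUCA code:", ruca_by_tract.get(patient_tract, "unknown"))
```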
Statistical analysis
Statistical analyses were conducted to assess whether relationships existed between a particular sociodemographic characteristic and a vascular malformation. All statistical analyses were performed using MedCalc statistical software version 20.015 (MedCalc Software Ltd, Ostend, Belgium). The χ²-test was used to test for differences in race, ethnicity, sex, and insurance status for each vascular malformation. The Shapiro-Wilk test was used to test for normality when comparing the mean age at presentation for each type of vascular malformation. Once normality was confirmed, the ANOVA test was used to test for significant differences among the groups, and the Tukey-Kramer test was used to determine between which specific groups the difference existed. p < 0.05 was used to determine statistical significance.
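The sketch below mirrors this workflow with SciPy and statsmodels equivalents of the MedCalc procedures; the group labels follow the paper, while the counts and age values are invented placeholders:

```python
from scipy.stats import chi2_contingency, shapiro, f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Chi-squared test for a categorical factor (e.g. sex) across malformation types
sex_table = [[30, 25],   # VM: female, male (made-up counts)
             [40, 52],   # LM
             [22, 18]]   # overgrowth syndrome
chi2, p, dof, expected = chi2_contingency(sex_table)
print("chi-squared p =", round(p, 4))

# Age at presentation by malformation type (toy values)
ages = {
    "VM":         [9.5, 10.1, 8.7, 11.2, 9.0, 10.6],
    "LM":         [6.2, 7.0, 5.9, 7.4, 6.8, 6.1],
    "overgrowth": [5.5, 6.1, 5.2, 6.3, 5.9, 5.7],
}
for name, vals in ages.items():           # Shapiro-Wilk normality check per group
    w, p = shapiro(vals)
    print(name, "normality p =", round(p, 4))

print("ANOVA p =", round(f_oneway(*ages.values()).pvalue, 6))

# Tukey(-Kramer) post hoc comparisons (handles unequal group sizes)
flat = [v for vals in ages.values() for v in vals]
labels = [name for name, vals in ages.items() for _ in vals]
print(pairwise_tukeyhsd(flat, labels))
```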
Overall patient characteristics
One hundred ninety-eight patients (59.4%) fell within RUCA code 1, representing the most urban setting. Fifteen (4.26%) fell within RUCA code 7, which represents small-town regions; this code had the most patients relative to the total population of that RUCA code (5.91 patients per 100,000 residents) (Figure 1).
Figure 1. Proportion of vascular malformation patients by RUCA code.
Difference by vascular malformation type
When each vascular malformation was compared by age at presentation, an initial p-value of 0.0000176 indicated that a difference existed, and further testing revealed that patients with VM were older at presentation than patients with both LM (mean 9.75 years vs. 6.66 years, p = 0.0022) and overgrowth syndrome (mean 9.75 years vs. 5.80 years; p = 0.0042) (Figure 3).
Figure 3. Age at presentation of vascular malformation.
Discussion
Health disparities have been extensively documented and are present within a vast number of diseases, affecting both disease incidence and outcomes. The existence and cause of disparities are not always intuitive, necessitating exploration of where they exist and the ways they intersect with diseases. For example, in patients with hereditary hemorrhagic telangiectasia, white patients were found to have fewer pulmonary and cerebral AVMs compared to Asian and Hispanic patients, respectively (10). In patients with diseases like vascular malformations, where disparities are not extensively studied, research is needed to identify disparities where they exist so that these may be mitigated where possible.
There have been several studies examining sex differences among patients with vascular malformations. Some have found no difference, similar to our results, while others observed females to be more likely to present with vascular malformations (11)(12)(13). Furthermore, multiple studies looking specifically at KTS have made conflicting claims about sex differences (14,15). While literature does exist examining sex differences for vascular malformations, our study is, to our knowledge, the first in the literature to review several sociodemographic factors in pediatric patients with this diagnosis.
Vascular malformations are congenital malformations, some of which are caused by genetic mutations leading to impaired morphogenesis during development. The literature continues to grow with discoveries in genes responsible for some vascular malformations, but it is unclear if and which environmental factors also play a role in pathogenesis (16). Exploration of sociodemographic factors will help uncover the contribution of environmental factors beyond genetics, providing a more comprehensive understanding of vascular malformations. Many known causative genes in vascular malformations are somatic mutations whose expression may increase with environmental stressors, hinting at a potential role of environmental factors contributing to their development (16,17). However, our findings demonstrated no difference in race, ethnicity, sex, and insurance status, indicating that the role of environmental factors in the development of vascular malformations may be less important than the role of genetics.
Patients with VM presented later than patients with both LM and overgrowth syndromes, which has not been previously reported. VM can cause a variety of symptoms depending on severity and location, ranging from cosmetic concerns to functional impairment, organ damage, thrombophlebitis, and an increased risk of deep vein thrombosis (16,18). Many VMs do not become symptomatic until adolescence or adulthood and are often misdiagnosed as hemangiomas, which may explain the delay in their diagnosis (18,19). LMs often present with symptoms of lymphedema, pain or swelling and most often occur around the head and neck (20). Overgrowth can be associated with isolated vascular malformations or may be a component of certain syndromes. For example, Parkes Weber syndrome commonly presents with CMs and overgrowth; however, patients often have AVMs, which can have debilitating consequences (16). Mathes et al. described their finding that only 56.5% of vascular anomalies were diagnosed at birth (13). Their study also found that vascular malformations, in general, are misdiagnosed due to their complexity. This further supports the need for improving diagnostic accuracy among physicians so that early VM management can be initiated.
This study has several strengths, including a large cohort of patients with relatively rare diseases. This study also compares a wide variety of vascular malformations allowing for intergroup comparisons to be made, whereas existing literature often presents sociodemographic factors within the context of a specific condition.
As for limitations, this was a single-center study that primarily serves a single state. While this is helpful for providing epidemiological data for this region, it may limit the generalizability of the findings. In general, VM and LM are more common than isolated CM, and the disproportionate numbers within each group may prevent adequate comparison. Similarly, disproportionate group sizes across RUCA codes limited the ability to compare the number of patients per total population in each RUCA code. Categories created for simplicity led to the grouping of several heterogeneous conditions, such as "overgrowth syndrome", which contained several different syndromes with vastly different symptomology and etiology (16,21); this potentially could have masked intra-group differences as well. Another limitation of this study is that it only tracked age at presentation at a single vascular center, which may be skewed if initial care was received at another center. While the study period was selected to include the most recent patient data, it included the onset of the COVID-19 pandemic. This may have altered the patient population that presented during this time if some patients delayed seeking care.
This study provides useful sociodemographic information about pediatric patients presenting with vascular malformations. Patients were primarily white, not Hispanic or Latino, female, had private health insurance, and were from the most urban setting. While no differences existed among the various vascular malformations for race, ethnicity, sex, and insurance status, patients with VM presented at a later age than patients with LM and overgrowth syndromes. This suggests that the development of these conditions via somatic mutations may be less affected by environmental factors and more by random genetic events. Improving the accuracy of initial diagnosis may close the gap in age at presentation between VM and the LM and overgrowth syndromes and improve care. Future research is needed to determine if sociodemographic factors like those studied here lead to disparities in patient treatment and outcome.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Materials; further inquiries can be directed to the corresponding author/s.
Pragmatism as a Pillar of the New Developmentalism
The scholars of New Developmentalism have generated a substantial body of knowledge regarding structural transformation and the policies that should be adopted to foster its achievement. Nevertheless, as is argued in this paper, New Developmentalism, by contrast with Neoliberalism, lacks a strong philosophical foundation to legitimise the policies it favours on grounds other than their ability to generate prosperity. It is also argued that New Developmentalists should explicitly adopt a pragmatic philosophy in order to become a more serious alternative to other political economy doctrines.
Introduction
The term 'new developmentalism' (ND) has been garnering attention in recent times, both in academic and policy circles (Ban, 2013; Bresser-Pereira, 2016; 2017; 2018). As a theory of political economic practices, ND aims to be an alternative to Neoliberalism. However, as well-developed as ND has already become in proposing a set of policies based on a coherent body of knowledge, it lacks a philosophical foundation that can match that of Neoliberalism, putting it at a disadvantage when contending with the latter. In this paper, I examine the importance that such a foundation has for the advancement of ND and propose that ND explicitly adopt a Pragmatist philosophy.
In order to undertake the task proposed above, this paper will be divided as follows. In the first section, I will analyse the ways in which the term ND has been used, given that it has been employed in significantly different manners, and select the one to which this paper will be devoted - what I call 'São Paulo New Developmentalism' (SND). In the second part, I will examine the philosophical basis of Neoliberalism and how it helps advance its policies. In the third section, I will show that this concern with more 'abstract' philosophical matters and their social consequences was a central concern for predecessors of SND, including Gunnar Myrdal, Joan Robinson and Albert Hirschman. Finally, in the fourth part of this text, I propose that the Pragmatist tradition could provide SND with the philosophical grounding it lacks.
The Meanings of New Developmentalism
In addition to describing existing approaches to economic policy (see, for example, Cho, 2000; Deyo, 2002; North and Grinspun, 2016), the term ND has been used to refer to theoretical systems or aggregations of economic ideas - three different variants can be discerned.
Two understandings of ND can be traced back to the Mount Holyoke College Conference of 2008 and the edited volume entitled Towards New Developmentalism: Market as Means rather than Master (Khan and Christiansen, 2011) that ensued from that event. As noted in the introduction to this volume, this conference assembled development economists supportive of a developmental programme alternative to Neoliberalism.
This very book has as its main objective 'to explicate and name an alternative [ND, in this case], what is new in this program and projecting it onto the academic landscape' (Khan 2011:3). In fact, a major motivation for doing so concerned the fact that, in spite of the numerous important contributions made by the scholars studying the successful East Asian development experiences, such as Alice Amsden, Robert Wade and Ha-Joon Chang, these had not coalesced into a distinct alternative to Neoliberalism. Also in the introduction to this volume, it is stated that what unites these ND economists is a form of 'developmental pragmatism' (Khan 2011:3) in that: (i) their concerns are the same as those of the 'old' developmentalists, such as Paul Rosenstein-Rodan and Ragnar Nurkse; (ii) they endorse the policy recommendations of the old developmentalists; (iii) they are supportive of institutional development and engagement with economic globalisation; (iv) they have a concern with promoting social justice; (v) they believe that the market should be seen 'as means to be harnessed for development' rather than a master to be obeyed.
In this same edited volume, Chang (2011) proposes a different interpretation of ND. In his contribution, he analyses the concept of development and how it has changed throughout the years. From the end of World War II to the 1970s, 'there was a general consensus that development is largely about the transformation of the productive structure (and the capabilities that support it) and the resulting transformation of social structure - urbanization, dissolution of the traditional family, changes in gender relationships, rise of labour movement, the advent of the welfare state, and so on' (Chang, 2011:47). Nevertheless, according to Chang (2011:48), since the 1970s, the concept of development has largely come to 'mean poverty reduction, provision of basic needs, individual betterment, sustenance of existing productive structure'. Important agendas, such as the UN's Millennium Development Goals, came to focus on things such as gender equality and the reduction of child mortality (noble issues in themselves) while largely ignoring the idea of development in the old sense. This formula can be seen as a sort of 'development without development' - or 'Hamlet without the Prince of Denmark'. Chang (2011) calls for a ND which revives the developmentalism centred on productive transformation (as in the 1960s/1970s), while also arguing that it should explicitly incorporate elements of Human Development and the Capability Approach of Amartya Sen - in a sense, a reminder that material progress is not an end in itself but a means to something else. Chang (2011) also argues that this approach should incorporate environmental concerns, and a more refined understanding of political, technological and institutional processes than that of the 'old developmentalists'.
The third definition of ND, to which this paper will be devoted, concerns the 'theoretical and policy alternative' to the Neoliberal orthodoxy first proposed by Bresser-Pereira in 2003 (Bresser-Pereira, 2016) - it will be referred to as São Paulo New Developmentalism (SND), due to its strong roots at Fundação Getúlio Vargas, in this same city. More than simply an economic theory or theoretical system (see Bresser-Pereira, 2016), I believe that SND is best described as 'a theory of political economic practices', a term used by David Harvey (2007:22) to describe Neoliberalism.
Although it is recognised that SND is still a work in progress -which may eventually lead to a fully-fledged school of thought (Bresser-Pereira, 2016) -it has already generated an agreement about its main principles. Crucial for this coalescence was the São Paulo Conference on the New Developmentalism, held in São Paulo in May 2010. The outcome of this conference (and subsequent discussions) was the Ten Theses on the New Developmentalism (henceforth, TTND) -a document that lists the core tenets and policy-oriented ideas of SND.
According to the TTND, economic development is a process of structural transformation that involves the shifting of the economy towards higher value-added activities. It is agreed that, although markets should have a major role in promoting development, the state is, in this perspective, expected to play a key part in this process.
Economic development requires a national development strategy with macro and micro economic elements, presented here in short. At the macro level, the state should assure adequate investment opportunities, keep inflation and debt under control and promote financial stability. At the micro level, the state should promote strategic industrial policy. An end or objective of ND is the idea that 'society as a whole should develop a welfare system that reduces inequality and is anti-cyclical' (North and Grinspun, 2016).
A closer look at SND
In addition to the elements presented in the TTND, the core of SND has also been made more coherent and grounded over the years through the works of a wide array of economists. The extent to which this body of knowledge has become sophisticated is documented by Bresser-Pereira (2016; 2017; 2018), who explains the key elements of SND. Before delving into these matters in more detail, it is important to note that SND draws heavily from 'Classical Developmentalism' and should be seen as an 'addition' to the latter (Bresser-Pereira, 2018).
The more 'technical' side of ND is concerned with macro and microeconomic issues. The macroeconomics of SND (the more well-developed side of it) is largely inspired by post-Keynesianism. SND has, as core concerns, (i) the tendency for the exchange rate to be overvalued in the long term and (ii) the financial instability accruing from indebtedness in a foreign currency. Overvalued exchange rates make non-commodity tradeable goods less competitive, thus hampering the development of these industries. Indebtedness in foreign currency, due to 'exchange-rate populism', creates a tendency towards financial instability and current account crises, leading to the need for 'confidence building' and the adoption of the (Neoliberal) policies that help achieve this end.
In order to avoid or mitigate these problems, the state, from the SND perspective, should make use of its policy tools and focus on getting right the five macroeconomic prices: the profit rate, the exchange rate, the interest rate, the wage rate, and the inflation rate. By 'right' should be understood not 'prices defined by full competition', but 'prices that make sense economically and politically': '(a) the profit rate must be high enough to support investment by business; (b) the exchange rate must make the business enterprises competitive; (c) the level of the interest rate should be as low as possible; (d) the wage rate should increase with productivity, and be consistent with a satisfactory profit rate; (e) the inflation rate should be low' (Bresser-Pereira, 2016:341).
In terms of microeconomics, SND is inspired by Classical Developmentalism, including the most recent works on the East Asian development experiences (Bresser-Pereira, 2016).
SND recognises that, while market coordination should prevail in competitive sectors, the state should play a role in the planning and regulation of non-competitive industries, such as infrastructure companies, basic input companies and big banks. In addition to playing a coordinating role, there is also room for the state to make use of selective and strategic industrial policy in order to foster technological progress and make competitive the production of goods with a high level of complexity. It is also important to state that, from the SND standpoint, industrial policy is not a substitute for, but rather subsidiary to, the macroeconomic policies described above.
In methodological terms, SND adopts historical-deductivism, i.e. its models are 'not inferred from a supposed rational agent, but from the regularities and tendencies that can be observed in the economic systems' (Bresser-Pereira, 2016:350). The highly successful experiences of East Asian countries are considered highly relevant in this regard. This methodology contrasts with the hypothetical-deductivism of Neoclassical Economics (a foundational element of Neoliberalism), of which SND itself offers a critique (see Bresser-Pereira, 2010).
Closely related to the methodology of SND, is its political economy, which covers a variety of matters. Firstly, it proposes the history of mankind and of specific countries is understood as divided by an industrial or capitalist revolution, and that interventionism (or developmentalism) rather than its absence has been the default in promoting these transitions. Secondly, it is recognised that national bourgeoisies are heterogeneous and that successful industrialization experiences have generally been built upon 'developmental coalitions'. Applied to recent experience, it is recognised that in now-middle-income countries, 'the industrial bourgeoisie, the urban industrial workers, part of the salaried middle class, and the public bureaucracy form typically the developmental class coalitions, while rentier capitalists, financiers and the top executive of the great private corporations form the liberal class coalitions, dominant in the rich world since the 1980s' (Bresser-Pereira, 2017:5). Thirdly, SND is economically nationalist, as it mainly focuses on national economic development, and sees nation states around the world as competing with one another, even though it also supports some forms of co-operation between them. Fourthly, SND supports the investment in the development of a 'capable state', i.e., one that is 'endowed with political legitimacy, competent administration, and ability to finance major investments domestically' (Bresser-Pereira, 2016:333).
Beyond economic theory, methodology and political economy, it is also important to underline that the SND seems to draw significantly from Human Development (even though this is not made as explicit as in the other dimensions), often associated with approaches other than those focusing on transformation of productive structures (see Chang, 2011, for an important take on this matter). I believe that this is an important element of the SND because the latter considers material progress as a means to something else - 'human development, which also involves the increase in security, the increase of individual liberties, the reduction of inequalities, and the protection of the environment' (Bresser-Pereira, 2016: 341).
The missing element of SND
So far, as has been seen, SND has developed a sophisticated body of knowledge which has generated important agreement amongst some of the world's most prominent economists and social scientists. SND knows what it wants: greater prosperity to ultimately create a developed welfare system and a society with low levels of inequality. SND has a well-delineated, historically justified set of policies that are most likely to lead to this end, whether those policies are SND 'innovations' or simply adopted from its predecessors. SND also has reality on its side: the failures of Neoliberalism worldwide, whether in terms of financial instability, greater inequality or the inability of middle-income countries to achieve high-income status. SND also has a deep understanding of political economy matters, especially regarding political transformation, international competition and the need for political legitimacy. So, it is important to ask: is there something that can be built into SND in order to make it a stronger contender against Neoliberalism? I believe that, while other aspects may be lacking in SND, one important answer is: a philosophy.
Why is a philosophy so important? For many reasons, but mainly because issues such as 'what is' (metaphysics or ontology) and 'what ought to be' (ethics) are main drivers of action. As Mark Blyth (2004:129) notes, there is a great difference between the natural world and the economic world - 'what we believe about falling stones will have no impact whatsoever upon the trajectory they take. But in the economic world the problem is qualitatively different since the ideas that agents have about their interests, the impacts of their actions, and those of others, shapes outcomes themselves'. And here I am concerned with the rhetorical aspect of economics, understood as the ability to persuade others to adopt a set of ideas and/or put them into action. Economics having once been a branch of moral philosophy, it is not uncommon for economic arguments to be clad, more explicitly or tacitly, in a cloth of morality. Although it is beyond the scope of this text to analyse the intricacies of this matter, the claim made here is simple - the more convincing the metaphysical/moral construct behind a set of policies is, the more likely it is to be implemented.
Neoliberalism and its philosophy
Neoliberalism is 'a theory of political economic practices which proposes that human well-being can best be advanced by the maximization of entrepreneurial freedoms within an institutional framework characterized by private property rights, individual liberty, free markets and free trade' (Harvey, 2007:22). The function of the state, from the Neoliberal perspective, should be to create and maintain an institutional framework that is suitable for such practices. Although Neoliberalism has been applied unevenly and through different means worldwide (see Saad-Filho and Johnston, 2005), it is important to note that, as far as a 'list' of Neoliberal policies is concerned, the ten policies of the Washington Consensus, dominant from the 1980s to the late 1990s, and its more recent 'Augmented' - 'institutions matter' - version are useful guidelines for this matter (see Chang, 2005; Rodrik, 2006).
According to Chang (2001), Neoliberalism was born from the 'unholy alliance' between Neoclassical Economics and Austrian-Libertarianism. Neoclassical Economics, with its apparently sophisticated mathematical apparatus, provides the scientific respectability to Neoliberalism, whereas Austrian-Libertarianism provides the moral and political philosophy. I would add that Neoclassical Economics is itself very philosophically rich, and that the source of that scientific respectability can also be found in this richness, given that Utilitarianism - one of its key elements - is itself a philosophical tradition of about 200 years, since Jeremy Bentham (1789) first proposed it in its hedonistic form.
The insights of both these traditions are essential for the legitimation of Neoliberal policies. As previously pointed out, the adoption of Neoliberal policies is perceived by this doctrine to allow for the greatest prosperity to be achieved. Left to their own devices, individuals are generally better able to turn 'private vices into public virtues'. These two doctrines contain, nevertheless, contradictory insights about the workings of the economy. For example, on the one hand, Neoclassical Economics believes that individuals are better left alone because they are rational and have perfect information. On the other hand, in the case of Austrian economic theory, it is exactly because individuals lack knowledge that they should partake in markets, as the latter are the best mechanism for processing the knowledge spread amongst them. Free from interventionism, individuals can better interpret the prices of goods and factors and act accordingly.
In spite of these important internal differences, high reliance on the market mechanism is generally (with some exceptions within Neoclassical theory, as seen later) perceived as the means to an end which is the most desirable, e.g. a 'utility-maximizing equilibrium' or one that unleashes the most entrepreneurial potential. From a philosophical point of view, this is thus a consequentialist justification for the implementation of Neoliberal policies, as the end is in itself something that is of intrinsic worth.
The idea that free-market policies are likely to generate these desired ends is based, on the one hand, on the more deductive theoretical/technical elements of the two streams of Neoliberalism, but also on a sort of 'retrotopia', i.e., an idealised past. It is not uncommon to find passionate descriptions of countries going from poor to rich due to their extensive reliance on free markets and lack of state intervention, even though this narrative is highly erroneous from a factual point of view (see Chang, 2002). But most important for the philosophical legitimation that I want to address here is the idea that markets are some sort of primary or natural institutions. Even though the notion that just because something exists in a certain form in nature it is somehow ethically right is generally seen as a fallacy, it still holds a high appeal in a variety of social domains (take, for instance, issues of sexuality often portrayed as 'unnatural'), and so this portrayal can be seen as an attempt to provide a naturalistic justification for free-market policies.
In addition to the 'consequentialist' and 'naturalistic' pro-market arguments, others emanating from each of the Neoliberal streams are important to address. On the Neoclassical side, it is essential to address not only the Utilitarian philosophy underlying it but, most relevantly, its currently dominant form within the discipline, associated with Lionel Robbins (1932). Robbinsian Utilitarianism sees utility as a purely subjective phenomenon and, thus, does not allow for interpersonal comparisons of utility. So, contrary to the (cardinal) utilitarianism adopted by Pigouvian Welfare Economics (or Benthamite hedonism, for that matter), which adopted decreasing marginal utility functions and justified redistribution of income on this basis, within Robbins' framework this is not possible (see also Martins, 2019 for further elaboration on this topic).
Closely intertwined with the idea above is the notion of Pareto improvement, stating that a social improvement can only exist if someone is made better off without someone else being made worse off (in preference-ranking terms). It is generally considered that free markets usually generate Pareto-efficient outcomes and that state intervention should exist only when 'market failures' happen, given that these generate Pareto-inefficient outcomes.
While the concept of market failure has been used to justify even state planning, current Neoclassical Economics, influenced by the Austrian-Libertarian wing, generally adopts a very restrictive notion of the term, e.g. focusing on things such as the provision of public goods (Chang, 2001).
The Austrian-Libertarian wing of Neoliberalism, with Friedrich Hayek (1944) and Milton Friedman (1962; with Rose Friedman, 1980) at the forefront, also fuels the justification for Neoliberal policies on moral grounds; however, instead of using concepts such as 'Pareto optimal' and the like, it focuses on the ability of individuals to act freely in the marketplace. The market is seen, thus, as the most legitimate mechanism for the allocation of resources because it involves voluntary exchange and individuals are not coerced into making decisions in a manner determined by others. Questioning the outcomes of the market in relation to their 'justice' is meaningless in this regard, as justice is mainly to be found in the procedures that allow for these ends to be achieved. Friedman (1981; 1992) saw economic freedom as a precondition to other liberties, and argued that it ultimately was the economic freedom generated by the free-market policies adopted by the Chilean authoritarian government that allowed for the politically free regime that ensued.
In spite of the above-mentioned contentiousness, the political discourse mounted on ideas such as freedom was and is extremely powerful. The spread of these ideas did not occur spontaneously but was rather part of a multi-pronged, concerted effort (Ebenstein, 2007). As internal inconsistency exists at different levels of Neoliberalism, I believe that the great strength of this ideology is not, as it cannot be, its internal coherence, but rather its external appearance of consistency and its malleability to draw from these incompatible streams to address a given situation. A similar observation is made by Oliver (1960:136), who notes that 'Neoliberal writings on allocation shift back and forth between libertarian and utilitarian with the two sometimes appearing interchangeably within a paper or chapter'. In part, this plasticity is also shown by the ability of Neoliberalism to adapt itself to different contexts and create hybrids - see the important work of Cornel Ban (2016).
Not only does Neoliberalism have this element of plasticity, but its ideas also lend themselves to being more acceptable when examined only at the surface level. For example, the idea of 'freedom' is in itself something that hardly any human being would oppose per se, as it is a word ubiquitously perceived as endowed with an intrinsic value, especially when contrasted with notions such as 'coercion' or 'serfdom'. However, as Mirowski (2013:61) observes, this apparent self-evidence masks more contested meanings. The appeal of such terms nonetheless aided the spread of Hayek's ideas in the United States, with The Road to Serfdom even gaining a cartoon format (Caldwell, 2007).
While the influence of Hayek was considerable in popular circles, Milton Friedman was the grand propagandist of this populist philosophy (see Boettke, 2004). Of all the Neoliberal scholars that won the Nobel Prize, Friedman stands as the most highly cited, and his influence reached a wide array of policy domains, from monetary policy to education (see Boettke, 2004). In spreading the gospel of liberty, Friedman also 'attacked' different audiences. His Capitalism and Freedom, which sets out most of his libertarian foundations, first published by the University of Chicago Press in 1962, enjoyed some initial success in spite of the hostile environment of the day. However, it was the book Free to Choose (1980, co-written with his wife, Rose Friedman) which really catapulted Neoliberal ideas. Not only was this book the best-selling non-fiction title of 1980, with millions of copies reaching consumers, but it also served as the basis for the very popular homonymous TV show (Wynne, 2004), which further helped spread the Neoliberal message.
In a 1990 edition of Free to Choose, the Friedmans wondered 'whether the ideas in Free to Choose had become so much part of the conventional wisdom that the book was no longer relevant' (Friedman and Friedman, 1990, cited in Wynne, 2004:3). While the extent to which Neoliberal policies were adopted worldwide is a topic beyond this text, this 'becoming part of the conventional wisdom' reflects a profound philosophical aspect that is beyond the 'ethics' of Neoliberalism - what may be understood as the rebuilding of people's ontological perceptions, i.e., the very way in which people conceive of the world and impute properties to its elements. In fact, not only does Neoliberalism have a tendency to expand the commodification of societies (i.e., everything becomes saleable), but the absorption of its ideas also imprints a 'rationality' into people (Dardot and Laval, 2013:18), 'a deployment of the logic of the market' to the various domains of life.
Undoubtedly, one of the most relevant is the view of society as a collection of entrepreneurs. Entrepreneurship in Neoliberalism is perceived as a faculty possessed by all subjects that is fomented by a market environment of competition, rather than as an activity per se (as in Schumpeterian terms) (Dardot and Laval, 2013). Therefore, individuals are conceived of as entrepreneurs in various aspects of their lives, even as entrepreneurs of their very existence - 'everyone is an entrepreneur in and of himself' (Dardot and Laval, 2013:118). This logic (also associated with management research) has called for the necessity of building 'entrepreneurial societies', which are characterised by 'adaptability' and 'constant change'.
Being a form of rationality, and put in this manner, Neoliberal wisdom can be applied to a variety of problems across the political, social, legal, and cultural domains (Dardot and Laval, 2013), ultimately making it possible to make trade-offs among them. In addition, the very conceptualization of some terms changes the very ways in which they are to be approached - for example, perceiving education as 'human capital' overshadows its non-economic functions, such as human discovery and enlightenment, and makes people akin to means or factors of production rather than ends in themselves.
Philosophy and Rhetoric as Central Concerns: Myrdal, Robinson and Hirschman
As I have argued, a strong philosophical foundation is central for the advancement of any economic paradigm. The arguments presented in The Political Element in the Development of Economic Theory (1990 [1954]), one of Gunnar Myrdal's earlier works, certainly hold sway in the discussion about Neoliberalism, especially with regard to its 'scientificity' - in fact, Myrdal was concerned with the arguments of older Swedish laissez-faire economists, who employed similar arguments, when he first wrote his book. In this work, Myrdal held the stance that economics could be objective or value-free, an opinion which he would later change. Such a scientific economics, by providing an understanding of social reality, could serve as a basis to inform politics.
Departing from the assumption that a value-free economics was possible, Myrdal argued that oftentimes economists who believed themselves to be taking a 'scientific' approach were in fact frequently deriving an 'ought' from an 'is', something that should not be possible in a scientific endeavour - analyses in economics were 'yielding laws in the sense of norms, and not merely laws in the sense of demonstrable recurrences and regularities of actual and possible events' (Myrdal, 1990 [1954]:4). He illustrates this point by discussing, among others, the 'Theory of Free Competition', which, he claims, 'is not intended to be merely a scientific explanation of what course economic relations would take under certain specified assumptions' but also 'constitutes a kind of proof that these hypothetical conditions would result in maximum "total income" or the greatest possible "satisfaction of needs" in society as a whole' - that is, a political desideratum. Similar types of reasoning, according to Myrdal, were employed when theorists tried to establish concepts such as the 'population optimum' or principles of 'right' and 'just' taxation. In fact, this type of conclusion was attained because theories were based on metaphysics derived from philosophies such as Utilitarianism and Natural Law, and theorists were often unaware of the effects of these foundations. Myrdal (1990 [1954]) saw this 'creeping in' of morality as particularly hazardous for the political process. As he notes, '[t]he danger to the unsophisticated theorist, of sliding into normative habits without stating his value premises explicitly, is aggravated by the fact that the same thing is done habitually in popular reasoning' (Myrdal, 1990 [1954]:20).

Nevertheless, these propositions are not empty of content; rather, they express a point of view which is a guide to conduct, as exemplified by Robinson (1962:8-9). For Robinson (1962:29), one of the most important 'metaphysical ideas' in economics was that of 'value', to which she devotes a large extent of the book. As she notes in questioning the meaning of the term: 'it does not mean market prices, which vary from time to time under the influence of causal accidents; nor is it just an historical average of actual prices. Indeed, it is not simply a price; it is something which will explain how prices come to be what they are. What is it? Where shall we find it? Like all metaphysical concepts, when you try to pin it down it turns out to be just a word.
All the same, problems that have been turned up in the pursuit of the causes of value are by no means empty of meaning.' In fact, the notion of value has been crucial in political economy doctrines. For example, the Labour Theory of Value is a vital component of Marxist theory, not only as a theory of price determination, but because it contains the fundamental insight that the value necessary for the reproduction of the worker (the cost of labour power) is smaller than the one he produces. So, the capitalist appropriates this surplus value, something which he has not created - in other words, he exploits the worker. According to Robinson (1962:32), referring to this insight, '[i]t is much stronger poison than a direct attack on injustice. The system is not unjust within its own rules. For this reason, reform is impossible; there is nothing for it but to overthrow the system itself.'

The mechanism by which the Labour Theory of Value plays a role in providing this morality to Marxism finds a parallel on the Utilitarian front - this being particularly relevant when addressing Neoliberalism. Utility, Robinson (1962:48) starts by pointing out, is 'a metaphysical concept of impregnable circularity; utility is the quality of commodities that makes individuals want to buy them, and the fact that individuals buy commodities shows that they have utility'. In spite of such circularity, the concept of utility was extremely important in legitimizing laissez-faire, as 'everyone must be free to spend his income as he likes and he will gain the greatest benefit when he equalizes the marginal utility of a shilling spent on each kind of good'. The pursuit of profit, on the other hand, 'leads producers to equate marginal costs to prices, and the maximum possible satisfaction is drawn from available resources' (Robinson, 1962:53). But the Neoclassical system, with its Utilitarian focus, also provided a reframing of economic agents and, as Robinson (1962:57-58) notes, had the 'unconscious preoccupation' to 'raise profits to the same level of moral respectability of wages'. So, from the Neoclassical perspective, the 'labourer is worthy of his hire…Capital was no longer primarily an advance of wages made necessary by the fact that the worker has no property… [and] is somehow identified with the time of waiting, and it produces the extra output that a longer waiting period makes possible. Since capital is productive the capitalist has a right to his portion. Since only the rich save, inequality is justified.'

Albert Hirschman, perhaps the most eclectic of all development economists, also paid great attention to how ideas are important to persuasion in social domains - two of his most important works, Exit, Voice and Loyalty (1970) and The Rhetoric of Reaction (1991), touch upon this topic. However, it is one insight of his Morality in the Social Sciences (1979) that more closely relates to the other two authors here examined.
In this piece, where Hirschman (1979) ends by making a call for a social science that explicitly incorporates moral considerations - a 'moral-social science' - he discusses Marx's attempt to interpret and change the prevailing social-political order. Hirschman (1979:333-334) argues that Marx 'consistently refused to make appeal to moral arguments', in spite of the moralistic undertone to his work. Rather, Marx's proudest claim was to be the father of 'scientific socialism', and to be truly scientific he had to 'shun' moral arguments. As 'true science does not preach, it proves and predicts', so did Marx 'prove' the existence of exploitation and the demise of capitalism as a result of the falling rate of profit. Hirschman (1979:334) further argues that it was perhaps this odd amalgam of 'cold' scientific propositions with 'hot' moral outrage, 'with all its inner tensions unresolved, that was (and is) responsible for the extraordinary appeal of his work in an age both addicted to science and starved of moral values'. A parallel to this view on Marx can be found in the success of Neoliberal ideas, as they also combine these two elements - on the one hand, the passionate rhetoric of 'freedom', and on the other, the respectable scientific apparatus of Neoclassical Economics.
From the ground up
At this point it is important to ask: what philosophy should ND adopt in order to become a stronger contender against Neoliberalism? I argue that ND, due to its practical nature and its interest in concrete problems, can never work based on a grand (albeit internally contradictory) narrative such as that underpinning Neoliberalism. Here I adopt the stance of Hirschman (1979:340), who claims that 'an effective integration of moral argument into economic analysis can be expected to proceed rather painstakingly, on a case-by-case basis, because the relevant moral consideration or aspect of human nature will vary considerably from topic to topic'. How should this be done?
Firstly, it is important to remember that moral considerations are dependent on metaphysical propositions that are ultimately held as self-evident (as seen in the previous section). SND already proposes a set of policies based on the premise that they are more likely to produce a desirable end - equitable, human-centred development allowed for by productive transformation. So, the issue really lies in how we justify the means to achieve these ends on moral grounds, other than on consequentialist ones, i.e., that a desirable end is likely to be achieved by the implementation of SND policies. And here, I believe that SND should draw from the principles of American Pragmatism. In fact, the word 'pragmatism' is not alien to those working in the developmentalist tradition - as noted in the Mount Holyoke version of ND, the concern of these development economists with practical problems has led them to be labelled 'developmental pragmatists'. In addition, in the East Asian experiences, so valuable for the SND, the anti-dogmatic nature of political leadership has also figured under this banner (e.g. Lee Kuan Yew as a grand Pragmatist).
American Pragmatism is a philosophical school of thought which emerged in the late 19th century, associated with authors such as Charles Peirce, William James, and John Dewey, whose core element was the rejection of rationalism and empiricism. Although Pragmatism touches upon a variety of philosophical domains, from philosophy of science and aesthetics to philosophy of mind (see Misak, 2013 for an overview of Pragmatism), the concern here will be with what may be called the Pragmatist 'theory of truth', especially the version of William James (2010 [1907]:44-45), which can be summarized as follows: 'Any idea upon which we can ride, so to speak; any idea that will carry us prosperously from any one part of our experience to any other part, linking things satisfactorily, working securely, simplifying, saving labor; is true for just so much, true in so far forth, true instrumentally. This is the "instrumental" view of truth.' So, from this point of view, truth is derived from its serviceability to given purposes.
This instrumental view of truth can provide an important insight into the 'metaphysical propositions' that Joan Robinson wrote about. Even though concepts such as 'value' and 'utility' are held as self-evident and cannot be scientifically tested, as Robinson notes, they do play a role in legitimising certain views, ultimately having practical consequences - in a sense, they help build 'truths' about the world, such as, for example, the existence of 'exploitation'.
So, it is based on these insights that I believe that SND should adopt the metaphysical propositions that facilitate the application of its policies, which are already well-defined.
Instead of looking for a grand metaphysical and moralistic framework, the SND should look at its necessities on a case-by-case basis and 'work upwards from there'. I now illustrate how this can be done with reference to some policy positions, drawing especially on Thorstein Veblen's ideas.
Veblenian Metaphysics: Redistribution, Power and Serviceability
Inequality is a central concern for ND. As the correction of inequality often calls for taxation as a means to redistribute income, the matter of who creates what is being distributed is of the greatest importance. In addition to usually being portrayed by Neoliberals as a burden on 'entrepreneurship' and on 'wealth creators', thus removing incentives for innovation and growth, taxation is often surrounded by questions of morality, especially when it comes to high incomes. Libertarians see taxation as a form of coercion and, as a result, believe it should be minimised - this, in turn, is linked to restraining the provision of public services to 'the minimal state'. Thus, these moral considerations have often been accompanied by a passionately devised terminology; for example, taxing the 'wealth creators' has often been equated with 'punishment'.
The idea that market distributional outcomes are somehow attributed according to some standard of productivity can largely be credited to marginal productivity theory. Just as the Labour Theory of Value legitimises the abolition of private property, for it is believed that only labour creates value, marginalism legitimises the incomes accruing to different factors of production. However, in arguing for a more equitable and justly redistributive market economy, one should not be bound to labour or marginalist propositions, and it is perhaps in Thorstein Veblen's production metaphysics that a better solution lies, as this author was critical of both these ideas.
For Veblen (1898:353), 'the isolated individual is not a productive agent'; the best he can do 'is to live from season to season, as the non-gregarious animals'. All production, in the author's view, is 'production in and by the help of the community, and all wealth is such only in society'. In the history of humanity, 'no individual has fallen into industrial isolation, so as to produce any one useful article by his own independent effort alone', because even where there is no 'mechanical cooperation', humans are 'always guided by the experience of others'.
So, production is always a communal endeavour that constantly draws from the stock of knowledge accumulated in the past; in the end, it is not possible to trace the contribution of each individual or factor. From this perspective, taxation can be understood as part of the institutional framework within which production and exchange are already occurring - in a sense, the rules of taxation are already part of the 'market game' before agents make their decisions and, from this standpoint, they are already acting voluntarily, and thus it does not make sense to speak of redistribution as coercion.
But it is not only in this domain that Veblenian production metaphysics are important.
In the domain of corporate governance, they are also useful, especially when considering the powers and rights of the different agents of the corporate relation. In their more or less successful development strategies, countries have encouraged the creation of national champions. The problem is that, as these companies grow and gain access to international financial markets, they detach themselves more and more from national political economy structures. Often, in this process, these companies attempt to 'forget the past', portraying themselves not as the product of an industrial policy but as the result of entrepreneurial effort or the like. Due to its concern with development based on domestic capital, ND favours the implementation of mechanisms that allow this domestic capital to 'remain domestic', e.g. 'golden shares', which allow for state control over strategic decision-making.
Veblenian productive metaphysics facilitate the legitimisation of the above-mentioned mechanisms because companies, from this perspective, are mainly seen as repositories of the knowledge of a community, not a function of abstract labour and capital. The capabilities acquired by these companies, be it a Korean chaebol or the Brazilian Embraer, were only possible due to a concerted effort that involved support directly visible in corporate accounts, such as subsidies and/or capital injections, but also, arguably most importantly, 'invisible' support, e.g., protectionism, the sacrifice of 'consumer surplus', complementary economic support, and so on. If these companies are portrayed as products of a community and of its collective efforts, the reasoning for creating governance types that 'shackle' them to national interests is strengthened.
This aspect of Veblen can also provide a better foundation for another 'hot topic' in ND - financialization. The case for shareholder value maximization, whose excesses have proved harmful to financial stability and productive accumulation (see Lazonick, 2014), has benefited from the interpretation, derived from Neoclassical Economics, that firms are legal fictions which serve as a nexus of contracts. Underneath this fiction lies the 'real' relationship of providers of inputs - with capital being considered a productive factor. However, in Veblen's framework (various years, cited in Ireland, 2000), shareholders were often seen as 'anonymous pensioners' who have 'prescriptive rights to get something for nothing' - there is a clear difference between capital understood as an input to production and a title of property. The dominance of this view in the aftermath of WWII led them to be largely considered, like bondholders, as functionless rentiers, fuelling calls to diminish their rights and empower other stakeholders (Ireland, 2000); this view should thus be revisited by SND.
Another aspect of financialization that can be touched upon from this basis is the very function that finance should have in society, and how it productively contributes to economic development. In fact, the distinction between 'unproductive' and 'productive' activities, so essential to Veblen and other classical economists (see Barba and Vivo, 2012, also for a reflection on the productiveness of the financial sector), is something that should be at the core of SND. Even if it is agreed that modern finance is essential for development, allowing the market to operate freely and create de facto gambling houses can be a source of financial instability and crises. The legitimacy to control and regulate the financial sector, as often done in the most successful development experiences, can also benefit from the notion that 'the quantity alters the quality': finance can go from serving a productive function to an unproductive or even destructive one.
The classification of activities as rentier can also play an important role at the political economy level. As is well known, land reform is a means that has historically played an important role in fostering developmental coalitions interested in industrialization (see Di John and Putzel, 2009 for a view on political settlements). In order to promote land reform, whether through 'land-to-tiller' programmes or nationalization, or even to tax land more heavily in a Georgist fashion or to introduce export surcharges on primary products to support agro-processing, the recognition that there is a rentier element in landlordism certainly helps to achieve this end.
Related to this matter is the discussion on luxury consumption. The heavy taxation or restriction of luxury goods was very important in East Asian development experiences (see Chang, 1998). Facing the need to upgrade technological capabilities, developing countries, generally facing balance-of-payments constraints, are better off restricting the import of non-basic consumption goods in order to buy foreign machinery. Recognising that there is a 'wasteful' element to luxury goods - something which, in Neoclassical Economics, makes little sense due to the subjectivity of preferences; Veblen (1908:122) made this very point when stating that 'in the hedonistically normal scheme of life wasteful, disserviceable, or futile acts have no place' - certainly is important in favouring their restriction, especially in earlier stages of development.
Understanding that the consumption of a significant portion of luxury goods is largely influenced by interpersonal comparisons (see the classic Veblen, 1997 [1899]), i.e., that it has a strong 'positional' element (Hirsch, 1977), also provides further justification for the regulation of their consumption, as such consumption can be seen as generating externalities. On this basis, a notion of wasteful consumption can exist.
Conclusion
As I have argued in this paper, ND has developed a strong and well-grounded body of knowledge, but it still lacks a strong philosophical ground that would help in its dissemination.
Neoliberalism, on the contrary, possesses an internally inconsistent but highly plastic and convincing philosophy (especially when analysed at the surface level), which facilitates the legitimation of the policies it proposes.
These philosophical matters were central concerns of Classical Developmentalists, such as Myrdal and Hirschman, and also of Post-Keynesians, as exemplified by Joan Robinson - two streams of thought which are predecessors of SND. In broad terms, these authors focused on the importance of metaphysical foundations and how they play a role in legitimising or strengthening certain ideas and agendas.
I have also aimed to provide a way forward for SND, arguing that, following the American Pragmatist tradition, SND should 'choose' its metaphysics instrumentally, as a means to facilitate the implementation of the policies it wants to see put forth. I further exemplified how this can be operationalised by showing how Veblenian metaphysics can provide a basis to legitimize certain SND policies on moral grounds.
It is true that there will most likely be inconsistencies in building such a project. There might be instances in which the metaphysical proposition picked in one domain conflicts with another. However, this also happens in Neoliberalism, and yet it thrives. We have to know how to live with such contradictions and be cognizant of them, in the hope that SND, as part of the 'moral-social science', flourishes even more in academic and policy landscapes.
Comparison of the Cannabinoid and Terpene Profiles in Commercial Cannabis from Natural and Artificial Cultivation
Interest in cultivating cannabis for medical and recreational purposes is increasing due to a dramatic shift in cannabis legislation worldwide. Therefore, a comprehensive understanding of the composition of the secondary metabolites, cannabinoids, and terpenes grown in different environmental conditions is of primary importance for the medical and recreational use of cannabis. We compared the terpene and cannabinoid profiles, using gas/liquid chromatography and mass spectrometry, of commercial cannabis from genetically identical plants grown indoors using artificial light and artificial growth media or outdoors in living soil under natural sunlight. By analyzing the cannabinoids, we found significant variations in the metabolomic profile of cannabis between the different environments. Overall, for both cultivars, there were significantly greater amounts of oxidized and degraded cannabinoids in the indoor-grown samples. Moreover, the outdoor-grown samples had significantly more unusual cannabinoids, such as C4- and C6-THCA. There were also significant differences in the terpene profiles between indoor- and outdoor-grown cannabis. The outdoor samples had a greater preponderance of sesquiterpenes, including β-caryophyllene, α-humulene, α-bergamotene, α-guaiene, and germacrene B, relative to the indoor samples.
Introduction
Until recently, the cultivation and use of cannabis plants for medicinal, industrial, and recreational purposes were strictly prohibited, and scientific research in this field was severely limited [1,2]. However, due to recent shifts toward legalizing cannabis use in many locations, understanding its chemical diversity is of great importance for consumers and producers of cannabis. The bioactive properties of cannabis are derived from the plethora of secondary metabolites, which include cannabinoids, terpenoids, sterols, and flavonoids. Each of them has been identified and described across cannabis inflorescences, leaves, stem barks, and roots [3-6]. The chemical profile of particular metabolites has mainly been studied as a function of the plant's genetics and environment. It stands to reason that the physiological effects and therapeutic benefits of different cannabis strains are linked to the diversity and the quantities of these secondary metabolites [7,8].
A common method in cannabis cultivation to avoid genetic variation is to grow genetically identical plants from clones. Moreover, by implementing biotechnological tools such as genetic engineering, it is possible to produce plants with overexpressed genes responsible for the biosynthesis of particular bioactive metabolites [7,9]. Environmental conditions such as mineral nutrition, temperature, humidity, soil bacteria, and light intensity/spectra are important factors affecting the chemical composition and secondary metabolism of cannabis plants [10-13]. Optimal mineral nutrients such as nitrogen (N), phosphorus (P), calcium (Ca), iron (Fe), and potassium (K) are essential for both the vegetative and flowering development stages of cannabis production [14-19].
Results and Discussion
Cannabis is an annual plant that can be grown efficiently indoors under controlled conditions or outdoors under full spectrum sunlight [11]. Secondary metabolism in cannabis plants is influenced by several environmental cultivation conditions. To date, the effects of outdoor cultivation factors compared to indoor conditions on the cannabinoid and terpene profiles in cannabis have not been fully studied.
During inflorescence, the cannabis plant produces a plethora of cannabinoids and terpenes in the glandular trichome cells [26]. The number of these molecules that the plant produces is staggering. There is added complexity in that the cannabinoids can be oxidized in a multitude of ways. This creates two types of cannabinoids. The first are those that are intrinsic to the cannabis plant because they are made by a biological pathway in the plant [27]. We refer to these as intrinsic cannabinoids. There are also other cannabinoids that are extrinsic to the cannabis plant, created through subsequent reactions due to their environment, such as oxidation or photochemistry. We refer to these as extrinsic cannabinoids. The terpenes broadly fall into four main categories: monoterpenes (10 carbons), monoterpenoids (oxygenated monoterpenes), sesquiterpenes (15 carbons), and sesquiterpenoids (oxygenated sesquiterpenes) [27].
In this study, we used commercial cannabis samples that were cloned from a common parent but grown either indoors or outdoors under optimized conditions. The outdoor samples were grown in raised beds using a proprietary mixture of all-natural soil and composts under full sunlight. The indoor samples were grown under artificial light in a proprietary growth medium. The outdoor samples were stickier to the touch and much more pungent than the indoor samples. The morphology and color of the flowers were similar. Each of the samples was from the same season to eliminate issues of large differences in age between the samples. Therefore, we can assess the importance of the two environments on the terpene and cannabinoid metabolite compositions of the two cultivars.
Principal Component Analysis of Cannabinoids
We performed untargeted LC-MS-based metabolomics to analyze the metabolic profiles of two cultivars, CP and RV, each genetically identical and produced through clones, grown either indoors or outdoors under optimized conditions (Supplementary Figure S1a). We analyzed three samples (n = 3) of both the indoor and outdoor material. The untargeted LC-MS analysis of the samples resulted in the detection of 1001 and 1316 features in the positive and negative ESI modes, respectively. Unsupervised principal component analysis (PCA) of the extracted features showed tight clustering of the QC samples and clear separation of the indoor versus outdoor groups (Figure 1 and Supplementary Figure S1b). The tight clustering of the multiple injections of the QC sample implies no drift over time and confirms the reproducibility of the LC-MS system. We further examined the data to ensure that no terpenes or flavonoids were present in this analysis. The PCA score plots in the negative mode indicate the differences in the cannabinoid profiles (Figure 1). PC1 (describing the variation between groups) from the two plots was found to be 39.5% and 47.1%, and PC2 (which describes the variation within the groups) was 24.9% and 24.4% for the CP and RV models, respectively. Thus, the PCA score plots represent distinctive clustering due to metabolic differences. Supplementary Figure S2a,c show the loadings plots, displaying the discriminative ion features localized in the peripheral areas (extreme values in PC1 and PC2) of the plots between the indoor-grown and outdoor-grown cultivars.

Although genetically identical, the indoor and outdoor samples from either of these two cultivars are completely distinguished by the composition of their cannabinoids. While we are applying this analysis here to different methods of cultivation, this type of analysis of the cannabinoids will enhance our understanding of the effects the environment and genetics have on the cannabinoids produced, and will also aid in understanding how to classify the multitude of different cultivars of cannabis that are commercially available.
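To make the dimensionality-reduction step concrete, the following is a minimal Python sketch of the PCA workflow described above. The feature matrix, sample count, and random values are hypothetical placeholders standing in for an exported LC-MS feature table; only the preprocessing order (log transformation, autoscaling, PCA) mirrors the analysis reported here.

```python
# Minimal sketch of the PCA workflow, assuming a hypothetical feature table
# `intensities` (samples x LC-MS features); all values here are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
intensities = rng.lognormal(mean=10, sigma=1, size=(12, 1316))  # 12 samples, 1316 features

# Log-transform to stabilize variance, then autoscale (zero mean, unit variance per feature)
X = StandardScaler().fit_transform(np.log10(intensities + 1))

pca = PCA(n_components=2)
scores = pca.fit_transform(X)
print("Explained variance ratio (PC1, PC2):", pca.explained_variance_ratio_)
# Plotting scores[:, 0] vs scores[:, 1], colored by indoor/outdoor group,
# reproduces the type of score plot shown in Figure 1.
```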
Cannabinoid Analysis
To further understand the differences between the molecular profiles that gave rise to the disparate PCA results when comparing indoor versus outdoor cultivation, we conducted targeted cannabinoid analysis of the primary, intrinsic cannabinoids CBGA, CBCA, Δ9-THCA, and CBDA for CP (Figure 2a) and RV (Figure 2b). We found little difference between the indoor- and outdoor-grown samples for these primary cannabinoids except CBCA and Δ9-THCA, which are enhanced and depleted significantly in the RV-outdoor samples, respectively. This series is of importance because CBGA is a common precursor to CBCA, Δ9-THCA, and CBDA, which are produced via three different biochemical pathways by particular synthases, among which the most prominent are tetrahydrocannabinolic acid-, cannabidiolic acid-, and cannabichromenic acid-synthase, leading to the production of THCA, CBDA, and CBCA, respectively [28,29]. In addition, they can then be decarboxylated through various processes such as light exposure, heating, or chemical reactions [30].
The corresponding data for the decarboxylation products of these primary cannabinoids are shown in Supplementary Figure S3 and follow the same trends seen in Figure 2, albeit in much lower amounts. The level of Δ9-THC was lower while the level of CBG was considerably higher in the CP-outdoor samples. Δ9-THC is the primary psychotropic metabolite of cannabis and binds to specific G-protein-coupled receptors, the cannabinoid CB1 and CB2 receptors [31]. Although there is growing research on the potential value of THC in the treatment of a number of human diseases [26,32], its development as a therapeutic has been limited due to its psychoactive properties. However, CBD, CBG, and CBC have very low affinity for the CB1/CB2 receptors and have less psychotropic activity compared to THC. We detected a significantly higher level of CBD in the RV-outdoor samples compared to the indoor-grown RV samples. CBD is one of the most abundant cannabinoids and is well known for its anxiolytic and antipsychotic properties [33]. It has been shown that CBD has multiple pharmacological benefits in in vitro and animal studies, which makes it a very promising therapeutic candidate for inflammation, diabetes, cancer, and neurodegenerative diseases [34,35]. It has also been demonstrated that CBG has promising therapeutic potential in the treatment of inflammatory bowel disease and prostate cancer [35,36]. An early study by Mahlberg and Hemphill showed a higher level of THC in cannabis leaves grown under sunlight than in plants grown under filtered green light and darkness, while there was no significant difference in THC content in plants grown under filtered blue and red lights and shaded daylight compared to the sunlight-grown plants [37]. Moreover, they showed that the level of CBC in plants grown under daylight was comparable to or lower than that under light-stressed conditions. CBC is particularly abundant in young plants or freshly harvested dry-type cannabis and is hypothesized to be a synergist for the psychoactive cannabinoids [26,38-40].
A more detailed analysis of all the extracted signal intensities using volcano plots was performed to visualize independent changes in the cannabinoid profile and to discriminate between the outdoor- and indoor-grown cultivars (Supplementary Figure S2b). The levels of 42 and 32 ion features were remarkably higher (FDR-corrected p-value < 0.05 and fold change threshold of 2) in the indoor-grown CP and RV groups, respectively. Moreover, the relative abundance of 42 ion features was significantly lower in the indoor-grown CP samples (Supplementary Figure S2c). By removing different co-existing adduct ions and in-source fragment ions, and finally matching the MS/MS fragments with available commercial standards or those reported in the literature, we could annotate 21 unique cannabinoids, as shown in Table S1. Intriguingly, we found significant differences in the levels of the cannabinoids between the two cultivation methods that are produced through the environment the cannabis is subjected to during growth, curing, and packaging, as shown in Figure 3. For both cultivars, the oxidation and degradation products of the primary cannabinoids, including CBN, CBNA, OH-CBNA, CBNBA, CBNDA, CBEA, CBT-isomer 1, CBT-isomer 2, and others, are significantly amplified in the indoor-grown samples. CBNDA and CBEA are the results of full aromatization and photo-oxidation of CBDA, respectively. CBT isomers are the hydroxylated forms of THC [41]. CBN and its derivatives and analogs are synthesized from the oxidative aromatization of their corresponding THC-type derivatives [34]. The continued exposure of CBN to ultraviolet light in the presence of oxygen or air produces degradation products such as OH-CBN. Interestingly, we found over two orders of magnitude more CBNA compared to CBN in these samples (Figure 3), indicating that CBNA is produced from THCA and that CBNA subsequently becomes CBN through spontaneous or induced decarboxylation.
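As a rough illustration of the univariate screen behind these volcano plots (a Welch t-test per feature, Benjamini-Hochberg FDR correction, and a two-fold change cutoff), the sketch below uses randomly generated stand-in intensity matrices; the variable names, sample sizes, and data are assumptions for illustration, not the study's actual values.

```python
# Hedged sketch of the volcano-plot screen: Welch t-tests per feature,
# FDR correction, and a fold-change threshold of 2. `indoor` and `outdoor`
# are hypothetical intensity matrices (samples x features) for one cultivar.
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
indoor = rng.lognormal(10, 1, size=(3, 500))
outdoor = rng.lognormal(10, 1, size=(3, 500))

# Welch t-test (unequal variances) for each feature
t, p = stats.ttest_ind(outdoor, indoor, axis=0, equal_var=False)

# Benjamini-Hochberg FDR correction at alpha = 0.05
reject, p_fdr, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")

log2_fc = np.log2(outdoor.mean(axis=0) / indoor.mean(axis=0))
significant = reject & (np.abs(log2_fc) >= 1)  # fold change >= 2 means |log2 FC| >= 1
print(f"{significant.sum()} features pass FDR < 0.05 and a 2-fold change")
# A volcano plot is then log2_fc on the x-axis vs -log10(p) on the y-axis.
```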
Furthermore, we found other cannabinoids produced in much greater quantity in the outdoor-grown samples, as shown in Figure 4. This is particularly acute in the samples of RV-outdoor. We observed increased levels of ∆ 9 -THCBA as well as CBCA-C1. There is also an indication that the outdoor samples may contain the C6 version of ∆ 9 -THCA, but this molecular ion was difficult to validate conclusively due to its similar fragmentation patterns with THCMA compound [44] (Supplementary Figure S3). The THCA derivatives with different length hydrocarbon tails are significant because they could have suitable biological activities [45,46]. ∆ 9 -THCBA possesses psychoactive properties but has been reported to have less anxiety associated with it, which is an issue with ∆ 9 -THC in prescribed medicines such as Marinol (dronabinol) for treating nausea and loss of appetite associated with cancer chemotherapy and AIDS [46][47][48]. The ∆ 9 -THCHA has been shown to have antinociception activity in mice [44]. Therefore, outdoor growing and breading plants to express larger quantities of these active components would be beneficial from a medicinal viewpoint.
Terpene Analysis
The structure and classification of terpenes are based on the linking of numerous isoprene units, and terpenes are mainly classified as monoterpenes and sesquiterpenes. These volatile compounds in cannabis are synthesized alongside the phytocannabinoids in the glandular trichomes. Terpenes are responsible for the characteristic aroma of cannabis and have a significant role in the defense system, serving in a range of defense strategies against pests, fungi, and bacteria [49]. Moreover, the resinous content of the trichomes makes them sticky, creating a trap for insects [50]. We measured the terpene levels using targeted GC-MS analysis in both RV and CP cultivars produced either indoors or outdoors. Terpene annotations were determined by matching against commercial standards and the NIST library. We identified 9 monoterpenes and 14 sesquiterpenes in the CP groups and 10 monoterpenes and 15 sesquiterpenes in the RV samples (Tables S2 and S3). In agreement with many previous studies on cannabis [49,51], we also detected the most commonly reported terpenes, including myrcene, terpineol, limonene, α-pinene, linalool, humulene, and caryophyllene, in both cultivars. Interestingly, we detected fenchone and several sesquiterpenes such as aristolene, selina-diene, trans-sesquisabinene hydrate, γ-elemene, and β-maaliene only in RV samples, and β-bisabolene, α-bisabolene, bulnesol, and chamigrene only in CP samples, highlighting the different terpene profile of each cultivar.
Among the significantly differentiated terpenes, we found remarkably higher levels of limonene, β-myrcene, β-caryophyllene, α-humulene, α-bergamotene, α-guaiene, and germacrene B in the outdoor samples of both cultivars (p-value < 0.05), as shown in Figure 5. This is particularly acute for the samples of RV-outdoor, where the predominant terpene was the sesquiterpene selina-diene, which is not being tested on the California COA for cannabis. β-Caryophyllene (BCP) is one of the most abundant sesquiterpenes in cannabis plants and extracts and is well known for its antimicrobial, antifungal, antioxidant, and anticarcinogenic properties [52,53]. It has been shown that BCP is a strong CB2 agonist and has anti-inflammatory effects in DSS-induced colitis mouse models [53,54]. Oxidized BCP can alter cancer-related pathways, such as the MAPK and STAT pathways, by induction of reactive oxygen species generation in prostate and breast cancer cell lines, independent of the endocannabinoid system machinery [53,55]. The signal intensity of oxidized BCP was significantly higher in RV-outdoor samples compared to the indoor groups. Oral administration of α-humulene (formerly known as α-caryophyllene) in a mouse model of airways allergic inflammation can lessen eosinophilic migration into the BALF and lung tissues by reduction of the inflammatory mediators NF-κB and AP-1 [56]. The average signal intensity of α-bergamotene, a minor sesquiterpene, was three times higher in RV-outdoor samples compared to the indoor group. It has been shown that α-bergamotene secretion by the NaTPS38 terpene synthase in wild tobacco mediates both defense against herbivores in leaves and pollinator attraction in flowers [57]. Another enriched sesquiterpene detected in outdoor samples, especially in the RV-outdoor group, is germacrene B, which is reported to have remarkable antimicrobial activity [58]. Interestingly, the CP-indoor samples lack germacrene B, which could be a reflection of the growth conditions of the indoor samples. α-Guaiene is a precursor to rotundone, an aroma compound reported in some wine varieties [59]. Limonene is a precursor compound to monoterpenoids and shows different pharmacological properties, including anti-inflammatory, gastro-protective, anti-nociceptive, anti-tumor, and neuroprotective effects. It is also reported to be an antidote to excessive psychoactive adverse events produced by THC [26]. Remarkably, we found that the CP samples grown indoors completely lacked β-myrcene. β-Myrcene is a major monoterpene and can intensify the anti-stress, anxiolytic, and sedative effects of CBD [60]. Moreover, oral administration of β-myrcene in mice demonstrated remarkable effects against oxidative damage in peptic ulcers and cerebral ischemic brain injury by increasing the level of glutathione peroxidase and total glutathione in the tissues [61,62]. The main finding is that the outdoor cannabis samples had a greater diversity of terpenes, and greater amounts of the ones that are present, when compared to indoor cannabis from the same genetic stock. Moreover, the outdoor samples have a greater preponderance of sesquiterpenes relative to the indoor samples. Therefore, in-depth metabolomic evaluations of cannabis terpene profiles grown in different conditions are important given their potential medicinal and therapeutic values. Moreover, our results suggest that the remarkable differences in the terpene compositions may be a reflection of indoor growers not optimizing growing conditions for terpenes that do not appear in the California testing. It has been well documented that terpene levels in cannabis have been declining over the past decade or so [63,64].
It is not clear why the indoor samples produce more degraded and oxidized cannabinoids. However, this could be related to the synergism that the plant has evolved throughout its history. One of the functions of terpenes in the plant is to act as antioxidants; they can also protect the plant from pest damage [65,66]. When the plants were grown indoors in the controlled environment, we found that the terpenes were not expressed in as high an amount. Therefore, there is less of an oxidation shield provided to the flowers in indoor cannabis, which could account for the increased levels of oxidized and degraded cannabinoids in the indoor samples. For example, the sesquiterpenes and cannabinoids are produced on different biochemical pathways, and we found that several metabolites related to the sesquiterpene pathway are accessed more effectively in outdoor cultivation. In parallel with this, the outdoor plants are able to express the totality of their biochemical pathways. Terpenes can act synergistically, with variations in quantities and ratios and in combination with other bioactive secondary metabolites such as cannabinoids, as suggested by their varied medicinal effects, known as the "entourage effect" [26]. This synergy could be significant in cultivating and breeding cannabis with greater therapeutic benefits.
Sample Preparation
The use of plants in the present study complies with international, national, and institutional guidelines. Cannabis plants were purchased from commercial suppliers and, as such, were compliant with the State of California guidelines. We chose flowers from the upper third of the plants with similar morphology and size to standardize the sampling. Three independent plant samples were prepared from each batch to address variance in the samples.
Two commercial cultivars of cannabis were analyzed, Red Velvet (RV, Batch #: 210727-1LDX-GEN-RVT) and Cheetah Piss (CP, Batch #: 210524-1LDX-SJM-CPIS). Each of the samples was from the same season to eliminate issues of large differences in age between the samples. The outdoor samples were part of a 2021 seasonal commercial grow by Ridgeline Farms; they are referred to here as RV-outdoor and CP-outdoor. The indoor samples were grown commercially in 2021 from clones of the same genetic stock as the outdoor samples. The indoor samples were grown by grandifloragenetics.com for RV and by cookies.com for the CP samples; these samples are referred to here as RV-indoor and CP-indoor. The outdoor samples were grown in raised beds using a proprietary mixture of all-natural soil and composts under full sunlight. The indoor samples were grown under artificial light in a proprietary growth medium. Samples of the trimmed and cured cannabis flowers (late flowering phase) were prepared as described previously [42]. Briefly, triplicates of manually ground flowers (250 mg) were weighed into glass vials and extracted with ice-cold ethanol. The extracted samples were filtered twice through 0.44 µm PTFE filters. These solutions were then utilized for both the GC-MS evaluation of the terpene composition and the UPLC-MS evaluation of the cannabinoids.
LC-MS Analysis
For the analysis of the cannabinoids, aliquots of the stock solutions were diluted 10 and 100 times with ethanol and injected in duplicate in randomized order onto the LC-MS for analysis. Chromatographic separation was performed on an Acquity UPLC H-Class system (Waters Corporation, Milford, MA, USA) using a Kinetex C18 core-shell column (2.6 µm, 100 mm × 2.1 mm) and a ternary multistep gradient. Mobile phase A consisted of water and mobile phase B of acetonitrile, both containing 0.1% formic acid; mobile phase C consisted of methanol and was kept constant at 5% throughout the run. The chromatographic gradient was set as follows: 0-1.3 min 45% A and 50% B, 1.3-2.67 min 28% A and 67% B, 2.67-6.67 min 5% A and 90% B, 6.67-9.33 min 90% B, 9.33-10 min 45% A and 50% B, 10-14 min 45% A and 50% B. The flow rate was set to 0.3 mL/min and the column temperature was set at 30 °C. The UPLC was coupled to a Xevo G2 XS Q-ToF MS (Waters Corporation, Milford, MA, USA) operated in both positive and negative electrospray ionization modes. A capillary voltage of 2 kV and a sampling cone voltage of 32 V were used in the positive mode. The source and desolvation temperatures were 120 °C and 500 °C, respectively. The desolvation gas flow (N2) was set to 650 L/h. For the negative mode, a capillary voltage of -1.5 kV and a cone voltage of 30 V were used. Accurate mass was obtained by injections of leucine enkephalin as a lock spray. The data were collected in duplicate over the mass range m/z 50 to 700 Da. Quality control (QC) samples from a pooled aliquot of the samples were injected at the beginning, between the samples, and at the end of the runs in order to monitor for retention time drift and the stability of the MS platform. The QC samples were also acquired in both MS/MS and data-independent MSE mode for the structural assignment of the cannabinoids. The low collision energy was set to 4 eV, and the trap collision energy was ramped from 20 to 45 eV.
GC-MS Analysis
For the analysis of the terpenes, 2 µL of the samples described above were injected into an Agilent 7890B/5977B GC-MS system. The samples were analyzed in splitless mode with a DB-5MS capillary column (30 m × 0.25 mm × 0.25 µm; Agilent, J & W Scientific, Santa Clara, CA, USA). High-purity helium was used as the carrier gas at a flow rate of 1.0 mL/min. The injector and ion source temperatures were set at 280 °C and 230 °C, respectively. The temperature program was conducted as follows: the initial temperature was 35 °C for 2 min; the temperature was then increased to 150 °C at a rate of 15 °C/min and maintained for 5 min; next, the temperature was increased to 290 °C at a rate of 3 °C/min; finally, the temperature was held at 290 °C for 2 min. The mass spectra were acquired in electron ionization mode at 70 eV in full scan mode (m/z 60-500).
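As a quick consistency check, the oven program above implies a total GC run time of roughly 63 minutes; a minimal sketch of that arithmetic (segment values taken directly from the text):

```python
# GC oven program: (target temperature in degC, ramp rate in degC/min or
# None for the initial step, hold time in min). Values as described above.
segments = [
    (35, None, 2.0),   # initial temperature, held 2 min
    (150, 15.0, 5.0),  # ramp 35 -> 150 degC at 15 degC/min, hold 5 min
    (290, 3.0, 2.0),   # ramp 150 -> 290 degC at 3 degC/min, hold 2 min
]

t_total, t_prev = 0.0, None
for temp, rate, hold in segments:
    if rate is not None:
        t_total += (temp - t_prev) / rate  # ramp duration
    t_total += hold
    t_prev = temp
print(f"total oven program: {t_total:.1f} min")  # ~63.3 min
```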
Data Pre-Processing and Statistical Analyses
The LC-MS raw data files were converted to netCDF format using the DataBridge tool implemented in MassLynx software (Waters, version 4.1). They were then subjected to peak picking, retention time alignment, and grouping using the XCMS package in the R (version 3.5.1) environment, as described previously [64]. The output data frame included a list of time-aligned detected features (m/z, retention time) and the relative signal intensity in each sample. Multivariate and univariate statistical analyses were performed in MetaboAnalyst 5.0 and also in the R environment. PCA was performed on auto-scaled and log-transformed data. Group differences were calculated using Welch's t-test with an FDR-corrected p-value < 0.05. The fold change (FC) in each metabolite abundance was calculated by comparing the mean values of the peak areas in each group. The volcano plot was constructed by plotting the log2 FC (outdoor/indoor) of the extracted features against the -log10 p-value. The GC-MS data were processed in MNova. Given the multitude of terpenes, terpenoids, sesquiterpenes, and sesquiterpenoids in these samples, some of them were identified with the aid of the NIST database. The retention index and mass spectral similarity were considered in terpene assignments. The utility of the identified terpenes and cannabinoids as potential predictive markers to distinguish outdoor-grown cultivars from indoor-grown ones was evaluated using a multivariate receiver operating characteristic (ROC) curve analysis in MetaboAnalyst 5.0.
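To make the univariate workflow concrete, here is a minimal Python sketch of the same steps (Welch's t-test per feature, Benjamini-Hochberg FDR correction, fold change of group means, and volcano-style classification); the feature matrix below is a hypothetical stand-in for the XCMS output, not the actual data:

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Hypothetical stand-in for the XCMS feature table: rows = features
# (m/z, RT pairs), columns = samples (3 outdoor, 3 indoor replicates).
outdoor = rng.lognormal(mean=10, sigma=1, size=(500, 3))
indoor = rng.lognormal(mean=10, sigma=1, size=(500, 3))

# Welch t-test per feature (unequal variances), as described in the text.
t, p = stats.ttest_ind(outdoor, indoor, axis=1, equal_var=False)

# Benjamini-Hochberg FDR correction of the p-values.
reject, p_fdr, _, _ = multipletests(p, alpha=0.05, method="fdr_bh")

# Fold change from the means of the peak areas, outdoor over indoor.
log2_fc = np.log2(outdoor.mean(axis=1) / indoor.mean(axis=1))

# Volcano-style classification with |FC| >= 2 and FDR-corrected p < 0.05.
sig_up = reject & (log2_fc >= 1)
sig_down = reject & (log2_fc <= -1)
print(f"{sig_up.sum()} up, {sig_down.sum()} down of {len(p)} features")
```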
Conclusions
Our complementary targeted GC-MS and untargeted LC-MS analyses showed significant differences in the terpene and cannabinoid profiles of two cultivars of cannabis grown under two different conditions. One important conclusion of this study is that the consumer is not being given a complete picture of the components in cannabis. Numerous oxidized and degraded cannabinoids are present, and many of them may have adverse or unknown biological effects. Whether the cannabis is grown indoors under artificial lights using artificial growth media or outdoors in living soil with sunlight influences the types and amounts of molecules that are present in the flowers. Indoor samples have a greater preponderance of oxidized and degraded cannabinoids, while the outdoor samples express more cannabinoids with potentially desirable bioactivity. Therefore, a comprehensive understanding of the composition of secondary metabolites, such as cannabinoids and terpenes, under different environmental growing conditions is of primary importance for the medical and recreational use of cannabis. Growing cannabis that expresses the unusual cannabinoids, such as C4- and C6-Δ9-THCA, could have significant medicinal benefit. Another important conclusion of this study is that it reveals inadequacies in California certificate of analysis (COA) testing to delineate important components of the cannabis being sold. The lack of testing for many of the important terpenes (e.g., sesquiterpenes), cannabinoids (e.g., THCA derivatives with different-length hydrocarbon sidechains), and their degradation products (e.g., CBNA) highlights the deficiency of the California COA testing. The ROC curve analysis using a random forest model revealed that α-guaiene, α-bergamotene, CBN, CBNDA, and CBT could serve as the top 5 potential predictive markers (AUC = 0.995) for these cultivars to discriminate outdoor-grown samples from indoor-grown ones. This study is the first to evaluate the impact of natural and artificial cultivation on the profile of cannabinoids and terpenes in commercial cannabis. However, our analysis was limited by the restricted sample sizes and the limited information on the growing conditions of each cultivar in each environment. Further studies with larger sample sizes and different environmental conditions and breeds could enhance our understanding of the biochemical diversity of cannabinoids and terpenes with different medicinal and physiological properties.
Supplementary Materials: The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/molecules28020833/s1. Figure S1: Representative total ion chromatograms of untargeted LC-MS analysis (a) and PCA score plots (b) of the extracted metabolic features from all the groups in both negative and positive electrospray ionization modes. CP (Cheetah Piss; n = 3 independent samples), RV (Red Velvet; n = 3 independent samples), QC (Quality Control; n = 6). Figure S2: The loadings and volcano plots derived from extracted metabolic features from untargeted LC-MS analysis in the indoor-grown as compared with the outdoor-grown CP (a,b) and RV samples (c,d). The volcano plot highlights the significantly differentiated metabolic features that increased (shown in red, Sig-Up) or decreased (shown in blue, Sig-Down) with a fold change threshold of 2 and FDR-corrected p-value < 0.05. Figure S3: Comparison of the common decarboxylated cannabinoids, CBG, CBC, Δ9-THC, and CBD detected by LC-MS analysis in the CP (a) and RV (b) samples (n = 3 independent samples per group). Figure S4: The level of tentatively annotated Δ9-THCHA in RV samples (n = 3 independent samples per group). Table S1: The list of annotated cannabinoids from untargeted LC-MS/MS analysis from outdoor- versus indoor-grown RV and CP samples. Fold change values were calculated from the average signal intensity of outdoor samples relative to the average signal intensity of indoor samples (n = 3 independent samples per group). Δppm = mass error. Table S2: Results from t-test analysis of the detected terpenes by targeted GC-MS analysis from outdoor- versus indoor-grown CP samples. Fold change values were calculated from the average signal intensity of outdoor samples relative to the average signal intensity of indoor CP samples (n = 3 independent samples per group). Table S3: Results from t-test analysis of the detected terpenes by targeted GC-MS analysis from outdoor- versus indoor-grown RV samples. Fold change values were calculated from the average signal intensity of outdoor samples relative to the average signal intensity of indoor RV samples (n = 3 independent samples per group). Data Availability Statement: The datasets generated and/or analyzed during the current study are available from the corresponding author upon reasonable request.
Sex steroid hormones: an overlooked yet fundamental factor in oral homeostasis in humans
Sex steroid hormones (SSH) are extremely versatile molecules with a myriad of physiological functions. Next to their well-known role in sexual development and reproduction, SSH play active roles in practically every tissue in the human body, including the oral cavity. It has long been demonstrated that periodontal tissues express SSH receptors and therefore are responsive to the presence of SSH. Interestingly, SSH not only interact with the periodontal tissues but also with other tissues in the oral cavity such as dental enamel, pulp, cementum, oral mucosa, and salivary glands. Questions concerning the possible physiological functions of these receptors and their role in the maintenance of oral health remain unanswered. The purpose of this scoping review was to gather and summarize all the available evidence on the role of SSH in physiological processes in the oral cavity in humans. Two comprehensive literature searches were performed. References were screened and selected based on title, abstract and full text according to our inclusion criteria. Both searches yielded 18,992 results, of which 73 were included. Results were divided into four categories: (1) Periodontium; (2) Dental structure; (3) Mucosa; and (4) Salivary glands. The interaction of these tissues with progestagens, androgens and estrogens is summarized. Sex steroid hormones are an overlooked yet fundamental factor in oral homeostasis. They play important roles in the development and function of the periodontium, dental structure, mucosa and salivary glands. Dentists and healthcare providers should consider these hormonal factors when assessing and treating oral health conditions.
Introduction
When asked about the role of sex steroid hormones (SSH) in the human body, the first thought that comes to mind is their essential and primary function in the sexual and reproductive system. Although this holds true and is indeed the most studied and well-known role of SSH, it is by no means their only known function. These hormones are incredibly versatile and capable of inducing various physiological responses (1). Alongside their primary and well-known role in the sexual and reproductive system, SSH are involved in a variety of biological processes that bear no clear relationship with their main function. SSH, namely progestagens, androgens and estrogens, are known to act as appetite modulators (2), to influence skeletal muscle strength and power (3), and to play a role in adipose tissue regulation (4, 5), bone mineral density (6) and regulation of the immune response (7), to name a few. Considering SSH's versatility and ubiquity, it is not surprising that these molecules also play a role in the oral cavity (8).
Synthesis of SSH begins with the enzymatic process known as steroidogenesis (Figure 1), where their precursor, cholesterol, is converted into biologically active SSH (10). Cholesterol is transported to the inner membrane of the mitochondria by specific proteins. The steroidogenic acute regulatory protein (StAR) and the translocator protein (TSPO), in a complex with proteins such as the voltage-dependent anion channel (VDAC) and the ATPase family AAA-domain containing protein 3A (ATAD3A), amongst others, are involved in this process. Once cholesterol has reached the inner membrane of the mitochondria, it is further converted into pregnenolone by the cholesterol side-chain cleavage enzyme (P450scc; CYP11A1) (10, 11). Pregnenolone will then passively diffuse from the mitochondria to the smooth endoplasmic reticulum for further conversion into either progesterone, by 3-beta-hydroxysteroid dehydrogenase (3β-HSD), or 17-alpha-hydroxypregnenolone, by 17-alpha-hydroxylase/17,20-lyase (CYP17) (12). CYP17 and 3β-HSD can respectively convert progesterone and 17-alpha-hydroxypregnenolone into 17-alpha-hydroxyprogesterone. Pregnenolone, progesterone, 17-alpha-hydroxypregnenolone and 17-alpha-hydroxyprogesterone compose the progestagens class. CYP17 can go on to further convert 17-alpha-hydroxypregnenolone into dehydroepiandrosterone (DHEA) and 17-alpha-hydroxyprogesterone into androstenedione. DHEA will then be the precursor of androstenedione, androstenediol, testosterone, and dihydrotestosterone (DHT). These four SSH, together with DHEA, compose the androgens class. Aromatization of androstenedione and testosterone will originate estrone and estradiol, respectively. Together with estriol, these three SSH compose the estrogens class (10).
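To keep the precursor-product relations above easy to trace, here is a minimal sketch that encodes the conversions described in this paragraph as a directed graph (the edge set follows the text only and is not an exhaustive model of steroidogenesis; estriol, for example, is listed as an estrogen but no conversion route to it is given above):

```python
# Precursor -> (product, enzyme) edges, as described in the text.
PATHWAY = {
    "cholesterol": [("pregnenolone", "P450scc (CYP11A1)")],
    "pregnenolone": [("progesterone", "3b-HSD"),
                     ("17a-OH-pregnenolone", "CYP17")],
    "progesterone": [("17a-OH-progesterone", "CYP17")],
    "17a-OH-pregnenolone": [("17a-OH-progesterone", "3b-HSD"),
                            ("DHEA", "CYP17")],
    "17a-OH-progesterone": [("androstenedione", "CYP17")],
    "DHEA": [("androstenedione", None), ("androstenediol", None)],
    "androstenedione": [("testosterone", None), ("estrone", "aromatase")],
    "testosterone": [("DHT", None), ("estradiol", "aromatase")],
}

def downstream(hormone, graph=PATHWAY, seen=None):
    """All products reachable from a given precursor."""
    seen = set() if seen is None else seen
    for product, _enzyme in graph.get(hormone, []):
        if product not in seen:
            seen.add(product)
            downstream(product, graph, seen)
    return seen

print(sorted(downstream("pregnenolone")))
```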
In mammals, these molecules are mainly secreted by the testes, the ovaries, the adrenal cortex and, during pregnancy, by the placenta (Figure 1) (13). Interestingly, the biosynthesis of sex steroid hormones has also been observed in the central nervous system, skin, and adipose tissue, albeit at a lower rate than in the reproductive organs (14-18).
Classical delivery modes of SSH are endocrine, paracrine, and autocrine. The endocrine mode involves hormone secretion and transport via blood vessels to reach distant target tissues. Paracrine delivery involves local synthesis of SSH and diffusion through extracellular fluid, covering smaller distances, usually within the same organ. In the case of the autocrine mode of delivery and action, a cell is activated by its own hormonal signals, thus becoming both hormone source and target (19).
Once SSH are synthesized and delivered, a high percentage binds to plasma proteins (such as sex hormone-binding globulin, SHBG). The remaining non-bound fraction of the hormones can bind to specific intracellular, membrane-associated, or transmembrane receptors, activating a signaling cascade that results in biological effects (20, 21).
It has long been demonstrated that periodontal tissues exhibit SSH receptors and therefore are susceptible to the presence of SSH (22-24). However, questions such as the possible physiological functions of these receptors or how they engage in the maintenance of oral health remain unanswered.
The purpose of this review is to gather and summarize all the available evidence on the role of SSH in normal physiological processes in the oral cavity in humans. This includes how SSH influence oral homeostasis and what is known about their role in the development and function of different oral tissues. Clinical implications and future challenges will be discussed.
Search and protocol
This scoping review was conducted in accordance with the Preferred Reporting Items for Systematic reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR) checklist (25). In collaboration with a medical librarian (LS), two comprehensive searches were carried out on different dates. The first search, from inception to April 6th, 2020, was carried out in the bibliographic database PubMed. The second search, aimed as an update of the first one, was run on PubMed and Embase.com from inception to October 4th, 2023. Search terms varied slightly between searches and included controlled terms (MeSH terms) as well as free-text terms. The following terms were used (including synonyms and closely related words) as index terms or free-text words: 'oral tissues' and 'sex hormones'. In the first search, a filter was used to exclude animal studies. In the second search, additional filters were used to further limit the results. The search was performed without date or language restrictions. The full search strategy can be found in Appendix 1.

Figure 1. Steroidogenesis and main sources of sex steroid hormones. Cholesterol acts as the precursor molecule for the synthesis of the three classes of sex steroid hormones: progestagens, androgens and estrogens. Once P450scc has converted cholesterol into pregnenolone in the inner membrane of the mitochondria, pregnenolone passively diffuses to the smooth endoplasmic reticulum for further conversion. Reproductive organs are the main source of sex steroid hormones, followed by the adrenal cortex and, during pregnancy, by the placenta (modified from (9)).
Screening process and eligibility criteria
References resulting from the search strategy were imported into Rayyan (26), an online application used for screening articles simultaneously and independently by the reviewers. Screening of the search results was done independently by three reviewers (P.C., R.N. and J.B.). Articles were initially screened based on title and later based on abstract. At each step, discrepancies were discussed until reaching an agreement. The same process was followed for full-text review. References were managed using EndNote 20 (27). Studies were ultimately verified for their eligibility for inclusion based on full text.
Studies that met the following criteria were included in this scoping review: (1) in vitro or in vivo studies; (2) information about the physiological role of the sex steroid hormones progesterone, estrogen and testosterone and their precursors in the periodontium, dental structure, mucosa, salivary glands, local immune response and orofacial perception. Studies reporting on: (1)
Data extraction and summary
In this review, the evidence was divided into four distinct categories: (1) Periodontium; (2) Dental structure; (3) Mucosa; and (4) Salivary glands. Their interaction with progestagens, androgens and estrogens was summarized.
Selection of evidence
The screening process and the number of selected articles are summarized in Figure 2. The first search yielded 10,962 records in PubMed. The second search yielded 22,722 results in total (9,941 in PubMed and 12,781 in EMBASE). After comparing both searches from 2020 and 2023 and removing the duplicates using the computer software DedupEndNote (28), our combined searches yielded 18,992 results. From these results, 162 articles were selected by title from the first search and 62 from the second search. During the next screening phase, 92 articles were selected by abstract. Out of these 92 records, 19 were excluded for the following reasons: (1) no full text available; (2) no direct relation to oral health; (3) main focus did not match the objectives of this scoping review; (4) not in English, Dutch, Spanish, Italian or Portuguese. The remaining 73 records were included in this review.

The oral cavity as a target organ

Periodontal tissues and their response to changes in SSH, such as during pregnancy, have long been the main focus of oral endocrinology research (29). This special interest in periodontal tissues can be attributed to the clinical changes and associated detrimental consequences for oral and general health (30). The understanding of the physiology and physiopathology of these phenomena is fundamental for disease treatment and prevention. However, as already discussed above, SSH are extremely versatile molecules with a great variety of target organs, which is also reflected in the oral cavity. In addition to the effects on periodontal tissues during periods of fluctuating hormonal levels, SSH have been reported to interact with practically every other tissue type present in the oral cavity. There is evidence of direct effects of SSH on teeth, periodontium, oral mucosa and salivary glands. Also, SSH's role has been described in amelogenesis, odontogenesis, bone metabolism and local immune response.
In this review, the evidence is divided into four categories: (1) Periodontium; (2) Dental structure; (3) Mucosa; and (4) Salivary glands. Their interaction with progestagens, androgens and estrogens will be summarized. Human and in vitro studies were included where only physiologic effects were demonstrated. Studies addressing pathological effects or treatment are beyond the scope of this review.
Periodontium
The periodontium includes the tissues supporting and surrounding the teeth. These tissues are the gingiva, periodontal ligament (PDL), cementum and the alveolar bone.
The gingiva is mainly composed of both epithelial cells and fibroblasts (31). The PDL is a connective tissue structure surrounding the root of the tooth, connecting it to the alveolar bone. Cells present in the PDL include periodontal ligament stem cells (PDLSCs), fibroblasts, endothelial cells, cementoblasts, osteoblasts, osteoclasts, tissue macrophages, and stratified epithelial cells. PDLSCs have differentiation potential, which has proved to be essential in periodontal tissue repair and regeneration (32). Cementum is the calcified mesenchymal tissue surrounding the root of the tooth, which is secreted by cementoblasts. This tissue allows the insertion of the PDL fibers into the tooth and the alveolar bone (33). The alveolar bone is the osseous tissue of the maxilla and the mandible, forming the tooth socket that surrounds the tooth (34). The existing studies reporting interactions between SSH and the periodontium are listed in Table 1.
Receptors
In order for the periodontium to communicate with molecular messengers such as SSH, specific receptors should be present in the target tissues. Several studies have demonstrated the presence of SSH receptors in gingiva, PDL, cementum and alveolar bone. In gingival tissue, receptors for progesterone (35), androgens (22, 24, 36, 37) and estrogens (23, 38, 39) have been described in both epithelial cells and fibroblasts.

Figure 2. Flowchart of the screening process. In green, the search from 2020 and, in yellow/brown, the search from 2023. Finally, all articles from both searches were combined and assessed for inclusion.
Table 1 (fragments). Metabolism: conversion of androstenedione to testosterone by 17β-hydroxy-C19-steroid oxidoreductase (44); metabolism of testosterone and androstenedione into reduced forms in hGF (42, 43) and in male and female gingival tissue (45); identification of different testosterone-reducing enzymes in gingiva (46). Estrogens: estrogen levels correlate with the expression of cytokeratin 5 (49).

Cementoblasts are known to express estrogen receptors α (ERα) and β (ERβ) (75).
Osteoblast-like cells, responsible for synthesizing bone tissue, express progesterone (77), estrogen (78) and androgen receptors (79). This also applies to alveolar bone. Previous publications have extensively reviewed the expression of sex steroid receptors and the physiology of the interaction between bone tissue and SSH (80, 81). Addressing this topic falls outside the scope of this review. Therefore, only papers specifically focusing on alveolar bone will be discussed.
All four tissues of the periodontium express receptors for SSH, confirming their susceptibility to sex steroid hormones.
Metabolism
Periodontal cells present enzymes capable of metabolizing SSH, resulting in the conversion of sex hormones into different by-products. The first in vitro evidence of the presence of enzymes in gingiva able to metabolize progesterone was reported in 1971 (82). El Attar demonstrated the presence of 5α-reductase, 3β-hydroxysteroid-dehydrogenase, and 20α-hydroxysteroid-dehydrogenase in gingival tissue from patients with periodontitis, concluding that the metabolism and degradation of progesterone could contribute to the state of health of the gingiva. The capacity of healthy gingiva to metabolize progesterone (40, 41), androgens (44-46) and estrogens (47) has also been reported. Interestingly, inflamed gingiva has proved to be significantly more active than healthy gingiva at metabolizing SSH. Notably, the rate of metabolic conversion of certain SSH can in turn be influenced by the presence of other SSH (42). In cultured inflamed human gingival fibroblasts (hGF) from primary cells, supplementation with different concentrations of either estradiol or progesterone has shown antagonistic effects. While estradiol stimulated androgen conversion, progesterone inhibited it. When combined, an initial increase in androgen conversion was followed by an inhibitory effect, directly related to the increasing concentration of both estradiol and progesterone combined (43).

Table 1 (fragments, PDL and alveolar bone). Metabolism: metabolism of testosterone and androstenedione (67); collagen and DNA synthesis of hPDLCs is not influenced by estradiol (68). Cellular growth, differentiation, proliferation and migration: progesterone stimulates hPDLC proliferation and differentiation under osteogenic conditions, and progesterone can enhance alkaline phosphatase activity and the expression of genes coding for mineralization processes (57). Alveolar bone: see the corresponding PDL entries for the overlap between hPDLCs and alveolar bone regarding differentiation, alkaline phosphatase activity and the expression of genes related to the mineralization process, as well as osteogenic differentiation, osteoclast formation, osteocalcin production, mineralized nodule formation, upregulation of OPG and downregulation of RANKL.
Other compounds such as cytokines, growth factors, prostaglandins and medication are capable of influencing the synthesis of SSH by gingival cells, either promoting (83-87) or inhibiting (87) it.
PDL cells are known to metabolize testosterone and androstenedione in vitro, indicating the presence of reductase enzymes (67).
Bone metabolism and SSH have been previously reviewed (88) and will not be addressed here. In short, SSH metabolism by osteoblast-like cells has been reported (89, 90), and there is enough evidence confirming a central role of SSH in bone resorption and apposition (91).
Based on the existing evidence, gingival cells and the PDL are capable of metabolizing SSH into different by-products. Also, gingival cells are susceptible to both the presence of SSH and diverse external factors, which can modulate SSH metabolism. Research on the PDL and hormone metabolism is still very limited, but the existing evidence is in line with what has been reported for gingival tissue. There is, to our knowledge, no available evidence on the metabolism of SSH by cementoblasts.
Cellular growth, differentiation, proliferation and migration
Evidence about the expression of SSH receptors and metabolic enzymes in the periodontium supported the hypothesis that SSH were capable of inducing physiological changes in periodontal tissues. This was eventually tested in the early 2000s by different research groups that assessed the response of human gingival fibroblasts (hGF) to different SSH during different cellular processes (36, 48). Effects such as changes in cellular growth, differentiation, proliferation and migration were found for all three types of sex steroids. In gingival tissue, direct effects of testosterone and estrogen have been described. Testosterone can stimulate cellular proliferation and migration (36), and estrogen has been described to induce hGF proliferation and decrease protein production (48).
PDL cells have been the focus of broader research. As previously mentioned, periodontal cells have differentiation potential and thus the ability to form bone, cementum and collagen fibers (32). PDL cells bridge the interaction between tooth and alveolar bone and, by this, they play an active role in bone remodeling and the physiology of the periodontium. The SSH progesterone and estrogen can modulate this process. When exposed to progesterone in vitro, PDL cells have shown enhanced proliferation and osteogenic differentiation (70). Different responses of PDL cells to estradiol have been described. Estradiol has been found either to inhibit the growth of PDL cells in a dose-dependent manner (56) or not to exert any measurable effect on proliferation and collagen formation (68). Contrary to these results, estrogen, like progesterone, has been reported to enhance the proliferation and osteogenic differentiation of PDL cells (62, 73) and also to mediate osteogenic differentiation in PDL stem cells (65, 66). Also, estrogen is capable of inhibiting the formation of osteoclast-like cells, which are responsible for bone resorption, in cocultures of PDL fibroblasts and peripheral blood mononuclear cells (69). In cementoblasts, estradiol has been reported to enhance proliferation (76).
Additional effects of SSH on PDL cells have also been described. Estrogen has been reported to promote osteocalcin production (70, 71) and mineralized nodule formation (72). Additionally, estrogen has been described to enhance alkaline phosphatase activity (61) and to increase the expression of osteoprotegerin while decreasing the expression of RANKL in PDL cells via ERβ (61). These findings indicate an active role of estrogen in bone remodeling and the physiology of the periodontium.
The described effects support the hypothesis of a relevant physiological role of SSH in the homeostasis of the oral cavity.
Sex steroid hormones and cytokine production in the periodontium: a bidirectional interaction
Several studies have reported immunomodulating effects of SSH via different mechanisms. Also, different oral cells are capable of producing different cytokines when exposed to sex steroids. A number of studies have reported the up- and downregulation of cytokine production by SSH in gingival fibroblasts, with contradictory results. Progesterone has been described to inhibit the production of IL-6 in a dose-dependent manner (37, 92) but also, together with estradiol, to increase the production of IL-6 and IL-8 along with an increased secretion of vascular endothelial growth factor (VEGF) (50). This has also been reported in non-oral cell types (93, 94). Also, in immune-challenged gingival fibroblasts, progestin and estradiol are capable of downregulating various inflammatory cytokines (95).
Androgens are also capable of modulating the production of certain cytokines and prostaglandins by gingival fibroblasts. Testosterone (36, 37) and DHT (37, 52) downregulate the production of IL-6. Testosterone has also been described to downregulate the synthesis of prostaglandins (54), suggesting an anti-inflammatory effect in gingival tissue. In PDL cells, testosterone and estradiol seem capable of downregulating the production of IL-17 (53).
Cytokine expression and its modulation by SSH in LPS-stimulated PDL cells have also been studied. Shu et al. tested the effects of estradiol on both spontaneous and LPS-stimulated cytokine expression (55). Results showed that estradiol did not have a big effect on spontaneous cytokine production, but it did actively regulate cytokine expression when co-cultured with LPS. TNF-α, IL-1β, IL-6 and RANKL, which are normally upregulated by LPS, were significantly suppressed in the presence of estradiol. Osteoprotegerin (OPG) was upregulated. This suggests a modulation of the stimulatory effects of LPS on these cytokines. This phenomenon has also been observed in the expression of certain chemokines, which are either up- or downregulated by estradiol when PDL cells are challenged by exposure to LPS (96).
Just like SSH are capable of regulating cytokine production, certain cytokines have been described to modulate the conversion rate of SSH in inflamed periodontal tissue. In particular, IL-1 has been described to enhance the conversion of androstenedione and testosterone to dihydrotestosterone (DHT), a potent androgen, in gingival tissue and PDL cells (83, 97). Although research regarding this phenomenon in healthy tissues is not available, it is important to acknowledge the possibility that healthy tissues may respond in a similar manner.
An increase in androgen conversion (DHT and 4-androstenedione) by healthy gingival tissue in the presence of prostaglandins was reported in one study (86). Prostaglandins are lipid-derived molecules involved in the regulation of inflammation. Conversely, another study reported that exposure of gingival fibroblasts to either progesterone, estradiol or both upregulated the expression of COX-2, an enzyme involved in the synthesis of prostaglandins (51). These responses by gingival cells provide further insight into the effects of SSH and the adaptive changes in hormone metabolism during inflammation.
Dental structure
The tooth is composed of four distinct tissues: enamel, the outer layer of the tooth, and the underlying layers dentin, pulp and cementum.
Each of these tissues is composed of distinct cell types. Enamel is the mineralized tissue secreted by ameloblasts. Dentine is a matrix secreted by odontoblasts. Pulp is composed of a complex multicellular organization including fibroblasts as well as odontoblasts, immune cells and neural fibers, amongst others. Cementum is secreted by cementoblasts (33). In all these tissues, direct or indirect evidence has been found on the presence of SSH receptors for one or more sex steroids. The existing evidence is listed in Table 2.
Table 2 (fragments). Enamel, receptor polymorphisms and effects on the enamel: genetic polymorphisms in ER are associated with developmental defects of the enamel (98), higher caries experience (99) and a higher incidence of fluorosis in children (100, 101). Pulp, receptors: freshly isolated pulp tissue expressed AR, which was significantly more abundant in male subjects; its expression increased when exposed to estradiol or androstenedione, while exposure to testosterone decreased its expression (104). Cellular differentiation: DHT upregulated the expression of several genes involved in odontogenesis and odontoblast differentiation in hDPCs (103).
Receptors
Evidence on SSH receptors in enamel development and maturation is scarce. The presence of androgen (113) and estrogen receptors (114) in ameloblasts and their involvement in the developmental process of enamel have been reported in rats. In humans, only indirect evidence has been described, linking the presence of estrogen receptor polymorphisms to the incidence of clinical changes in the enamel. Developmental defects of the enamel (DDE) (98), a higher caries incidence (99), and a higher incidence of fluorosis in general (100) and in high-fluoride-exposure areas (101) indicate an important role of estrogen and the estrogen receptor in amelogenesis.
The expression of androgen receptors in the pulp can apparently change when the tissue is exposed to certain SSH. In vitro, the addition of androstenedione or estradiol increased the expression of androgen receptors (AR). On the contrary, the addition of testosterone reduced its expression (104). This indicates an active role of SSH in the pulp's responsiveness to androgens, not only by direct interaction with sex steroid receptors but also by manipulating the expression of AR. Pulp estrogen receptors can also be up- or downregulated during osteogenic differentiation (105).
Cellular growth, differentiation, proliferation and migration
As previously mentioned, pulp tissue includes different cell types, including stem cells. It has been reported that the differentiation potential of these pulp stem cells can be enhanced by estradiol. Increased alkaline phosphatase (ALP) activity, mineralization capacity and upregulation of odonto/osteogenic differentiation markers have been described (74, 108). This has also been observed in stem cells from the apical papilla (the portion at the apex of the root) of pulp tissue from immature teeth (74, 110, 111) and, to a lower degree, in stem cells from the dental follicle (the tissue surrounding an unerupted tooth) (74). In human dental pulp cells, estradiol has also been reported to increase proliferation (109) as well as odontoblastic differentiation (103, 109). Genes directly involved in odontogenesis and osteoblast differentiation, such as AMBN, IFT88 and TP63, are upregulated by estradiol and the androgen DHT (103). Osteoprotegerin (OPG), an essential protein in bone remodeling and homeostasis, is also susceptible to estradiol, which enhances its expression (112).
Most studies addressing the effects of SSH on pulp tissue have focused on the effects of estradiol, possibly motivated by the onset of bone resorption as estrogen decreases (115). Estradiol enhances odonto/osteogenic differentiation in different cell types of the pulp and surrounding tissues, evidencing its role in tissue formation, repair and adaptation to changing external factors, all important aspects of homeostasis.
Oral mucosa
The oral mucosa is an important protective barrier against microbes, toxins and mechanical and chemical damage through its physical and immunological functions (116). Like gingival tissue, oral mucosa stems from two distinct embryonic layers. The epithelial layer originates from the ectoderm and the deeper layers stem from the neural crest ectomesenchyme (endoderm) (117). The interaction of these layers is fundamental for the proper development and function of the oral mucosa, and sex steroid hormones also play a role in these processes. The existing evidence is listed in Table 3.
Receptors
Traditionally, oral mucosa has not been regarded as a target tissue for SSH in healthy individuals. The existing studies have focused on complaints manifesting around and after the onset of menopause, when women experience discomfort in different oral mucosal tissues, and not on the physiological role of SSH in oral mucosa (123). Limited research has been done on the expression of SSH receptors in oral mucosa in humans, with some studies reporting on the presence of androgen and estrogen receptors. Androgen receptors have been described in the buccal mucosa of healthy individuals of both sexes (118). Estrogen receptors have been described in the buccal mucosa of young and postmenopausal women (119) and in the buccal mucosa of men and women (39). The limited available evidence supports the notion that oral mucosa is also a target tissue for SSH.
Wound repair
The process of wound repair has been previously studied, and histatins have been found to play a fundamental role in restoring the integrity of damaged mucosa (124). Nonetheless, some studies have addressed the role of SSH in mucosal wound healing. This has been tested in vivo by inflicting a small wound on the hard palate of 212 individuals of both sexes, ranging from 18-35 years old and 55-88 years old (120). As expected, age negatively influenced wound healing regardless of sex. However, wound healing in females was significantly slower regardless of age. A follow-up to this study used the same methodology in a larger cohort (121). A group of 329 individuals (age range 18-43) and a smaller group of 93 individuals (age range 55-88) were inflicted a small wound on the hard palate, and healing was videographed at regular intervals. Results showed that lower testosterone levels in the younger group related to faster wound healing, whereas higher testosterone levels in post-menopausal women related to faster healing times.

Table 3 (fragments). Receptors: ER is expressed in the buccal mucosa of pre- and post-menopausal women (119); ERβ is expressed in the buccal mucosa of men and women (39). Wound healing: wound healing in vivo decreases with age regardless of sex and is significantly slower in females than in males regardless of age (120); wound healing in vivo is faster in young individuals with low testosterone levels, while in post-menopausal women faster healing correlates with higher testosterone levels in blood (121).
In cutaneous wound healing, women have a significant advantage compared to men (125). In other mucosal surfaces, estrogen has been put forward as a protective factor against mucosal injury (126). These data suggest that the relationship between sex hormones and wound repair could be tissue specific. When contrasting the existing in vivo research with studies assessing wound healing in other tissues, it is reasonable to suggest that wound healing is a complicated process that involves not only the migration of cells but also the host's immune response, which, as will be discussed shortly, can be modulated by SSH.
Sex steroid hormones and cytokine production in oral mucosa
As previously mentioned, the oral mucosa has a gatekeeper function which is extremely important for the host. Moreover, the oral mucosa also exerts regulatory control over the local immune response (127). This modulation of the local immune response has been reported during the interaction of the commensal oral flora with the mucosal surfaces (128), during recurrent mechanical damage (129) and upon exposure to mucosal sensitizers (130), to name some. However, little is known about the potential role of hormones during this process.
What we know so far is that estradiol might have a regulatory effect on the expression of certain cytokines in oral epithelial cells (122). When these cells were exposed to IL-1β, the mRNA and protein expression of human β-defensin 2 (hBD-2), an antimicrobial peptide, as well as of IL-6 and IL-8, was upregulated. In the presence of estradiol, a downregulation of these molecules was observed, suggesting an anti-inflammatory effect of estradiol in oral mucosal epithelial cells.
Salivary glands
In humans, saliva is produced by major and minor salivary glands, which originate from the oral epithelium. The major salivary glands consist of three pairs of glands, namely the parotid, submandibular and sublingual glands. Minor salivary glands can be found throughout the oral cavity (131). The effects of sex steroids on gland structure and salivary production, amongst others, have been widely studied using animal models (132-135). In humans, some studies have been published on the effects of different SSH on salivary glands. These are summarized in Table 4.
The existing literature confirms the susceptibility of salivary glands to androgens and estrogens.
Metabolism
Only one study has addressed the capacity of salivary glands to metabolize SSH. Blom et al. measured several sex-steroid by-products after incubating salivary gland samples with progesterone and testosterone in vitro (139). Based on the detected metabolites, it was concluded that these cells present steroid-reductase, dehydrogenase and hydroxylase enzymes.
Sex steroid hormones and cytokine production in salivary glands
Cytokine production and its modulation by SSH in salivary glands is an extremely understudied topic. To our knowledge, only inhibition of IFN-γ-induced ICAM-1 expression by estrogen has been reported, in one study (138).
SSH-mediated tissue interactions and maintenance of oral health
In this review we have provided a comprehensive overview of the scientific literature concerning the interactions between sex steroid hormones and oral tissues. By this, we aimed to clarify the physiological roles of sex steroids within the oral cavity, their role in oral homeostasis, and their role in the development and function of different oral tissues. Our analysis of the existing body of evidence confirms the enormous versatility of SSH in contributing to various aspects of oral health maintenance (Figure 3). Their contribution can be summarized as follows: (1) sex steroids exert a significant influence on periodontal health by playing an active role in bone remodeling and its immune response; (2) several inflammatory mediators are up- or downregulated by SSH, providing protection against external disturbances; (3) sex steroids significantly contribute to the proper development of enamel, a fundamental process that ensures adequate protection of the dental structure; and (4) SSH assume a vital role in preserving mucosal health and integrity by promoting vascularization and regulation of inflammation, thus facilitating optimal tissue turnover and wound healing. These processes and this balance could not be attained without the finely orchestrated interplay between SSH and their biosynthesis, conversion, transport, and consequent signaling to achieve the desired biological effects. All things considered, it is evident that sex steroid hormones assume a pivotal role in the physiological maturation and maintenance of oral tissues in the absence of physiological disbalances.
Table 4 (fragments). Salivary glands, receptors: AR are expressed in parotid, submandibular and minor salivary glands (136); AR are expressed in minor sebaceous salivary glands (137); ER is expressed in submandibular, parotid and minor salivary glands of pre- and post-menopausal women (119); ERβ is expressed in submandibular, parotid and minor salivary glands of men and women, in acinar and ductal cells (39); expression of ERα, ERβ1 and ERβ2 was detected in minor salivary glands (138). Metabolism: parotid and submandibular glands are capable of metabolizing progesterone and testosterone in vitro (139). Cultured salivary gland cells exposed to estrogen showed a significant inhibition of IFN-γ-induced ICAM-1 expression (138).

Figure 3. Summary of the presence and roles of SSH in the oral cavity.

But what happens when the balance is disrupted? In the event of abrupt hormonal fluctuations, the systemic repercussions are notable. Temporary hormonal perturbations can be assimilated by the body: its inherent resilience and immune fitness allow a swift return to homeostasis without significant consequences (e.g., pregnancy). However, the persistence of such hormonal alterations can hinder the body's adaptive capacity, progressively weakening physiological functions and eventually resulting in further instability of the system and the onset of disease. Moreover, disease does not affect males and females in the same way.
Sexual dimorphism in oral disease
Numerous diseases exhibit variations in prevalence, progression, and manifestation between males and females. These and other biological differences between males and females are referred to as sexual dimorphism. Regarding oral diseases, males are more prone to severe periodontal disease than females (140, 141). This may be attributed, in part, to differences in hormonal levels and their impact on immune modulation. Several examples for different pathologies can be found in the literature.
A recent systematic review investigated the expression of sex steroid hormones (SSH) in oral squamous cell carcinoma (OSCC), a condition observed to be twice as prevalent in males as in females (142). This review concluded that sex hormones and sex steroid receptors play a significant role in influencing the physiopathology of OSCC.
Other pathologies such as Sjögren syndrome (SS), burning mouth syndrome (BMS) and temporomandibular disorders (TMD) are more prevalent in females than in males (143-145). SS is an autoimmune condition that affects the lacrimal and salivary glands, negatively affecting saliva production, which results in severe dry-mouth symptoms (146). Low levels of dehydroepiandrosterone (DHEA) have been associated with SS manifestation, and hormonal supplementation has proved to improve dry-mouth symptoms (147). BMS has also been associated with lower levels of DHEA (148), and TMD has been associated with low estrogen levels (149) and estrogen receptor gene polymorphism (150).
The role of SSH in the incidence and progression of conditions in the oral cavity is not completely understood, but sexual dimorphism in disease has been observed in several conditions (151).
Aside from the genetic and behavioral aspects that also influence the differential response to disease (151), factors such as SSH receptors, the interplay and balance between different sex steroids and their conversion in biological tissues play an active role in disease (152, 153). In endometrial cancer, the loss of progesterone and estrogen receptors is associated with an advanced disease stage (154), and hormonal imbalances can result in changes in gene expression, further influencing disease progression (155). Also, changes in SSH signaling can contribute to lung disease (156), and changes in the enzymatic expression of aromatase in tumor-associated macrophages have been found to correlate with the severity of breast cancer (157). This also exemplifies the influence and relevance that steroidogenesis, and the subsequent cascade of events, can have on the maintenance or loss of homeostasis.
Changing hormone levels are closely related to aging (158). Aging increases the risk and severity of various diseases, and the oral cavity is no exception (159). This is particularly evident as SSH levels sharply decrease with the onset of menopause in women and gradually decline due to reduced androgen levels in men (160, 161). This decrease in SSH significantly impacts the immune system, among other effects (162). Several studies have been conducted on the changes of SSH, sex, aging and oral diseases (163-167), confirming the significant influence of these factors on oral homeostasis.
Sexual dimorphism is present in several diseases, and therefore sex differences should be included as a variable in research. Recognizing and understanding the different roles of SSH in oral physiology is fundamental for preventing and treating oral diseases.
Clinical implications and future challenges
This review has focused on the role of SSH in the maintenance of oral health and their involvement in different physiological processes in the oral cavity. Nevertheless, the microbiome and its influence on the maintenance of homeostasis in the oral cavity have not been mentioned. It is beyond the scope of this review to discuss this; however, it is important to stress that both homeostasis and dysbiosis are multifactorial biological processes. Therefore, future research should also consider the interplay between SSH, the oral microbiome and the host, and how they influence homeostasis and dysbiosis in the oral cavity in humans.
It is important to emphasize that hundreds of studies on this topic have been conducted using animal models. These studies have significantly contributed to our understanding of how sex steroid hormones interact with complex biological systems. However, results from animal studies are not always easily translatable to human physiology. We therefore focused this scoping review only on studies related to humans. Nonetheless, animal studies have served as a crucial stepping stone, allowing several hypotheses to be tested in humans, as presented in this paper.
Conclusion
Sex steroid hormones are an overlooked yet fundamental factor in oral homeostasis. They play important roles in the development and function of the periodontium, mucosa, dental structure and salivary glands. Nevertheless, it is important to acknowledge that the onset, progression and ultimate resolution of disease are multifactorial processes, and the presence of hormonal imbalances alone is insufficient to account for their etiology and the eventual reinstatement of homeostasis. However, it is important to consider sex steroid hormones as a relevant factor in oral health maintenance, which is complex and can vary from person to person. Hormone levels can fluctuate due to factors like age, hormonal treatments and medical conditions. Research on this topic is ongoing, and the precise mechanisms and clinical implications are areas of active study in the field of dentistry and oral health. Dentists and healthcare providers should consider these hormonal factors when assessing and treating oral health conditions.
TABLE 1 Studies reporting effects and interactions of SSH and periodontium.
TABLE 2 Studies reporting effects and interactions of SSH on the dental structure.
TABLE 3 Studies reporting effects and interactions of SSH and oral mucosa.
TABLE 4 Studies reporting effects and interactions of SSH and salivary glands.
The Multiobjective Based Large-Scale Electric Vehicle Charging Behaviours Analysis
In this paper, the effect of the charging behaviours of electric vehicles (EVs) on the grid load is discussed. The residential traveling historical data of EVs are analyzed and fitted to predict their probability distributions, so that models of the traveling patterns can be established. A nonlinear stochastic programming model that maximizes a comprehensive index is developed to analyze the charging schemes, and a heuristic search algorithm is used for the optimal parameter configuration. A comparison of the evaluation criteria shows that the multiobjective strategy is more appropriate for charging than a single-objective strategy based on electricity price alone. Furthermore, considering the characteristics of typical batteries and charging piles, user behaviour, and EV scale, a Monte Carlo simulation process is designed to simulate large-scale EV traveling behaviours over long-term periods. The obtained simulation results can provide predictions for analyzing the energy demand growth tendency in future EV regulation. As a precedent for an open-source simulation system, this paper provides a stand-alone strategy and architecture to regulate EV charging behaviours without unified monitoring or management by the grid.
Introduction
With the gradually deteriorating air quality, environmental problems and energy crisis caused by fuel-powered vehicles around the world, renewable-energy vehicles, i.e., electric vehicles (EVs), are being greatly promoted by governments, and many policies related to their development have been issued; EVs are expected to become the main transportation tools in the future along with increasingly improved technologies and infrastructure construction. However, the continuous energy demand growth of EVs puts heavy pressure on the power grid and introduces serious uncertainties into grid regulation [1]. Generally, one EV connected to the grid is roughly equal to the load requirement of one small household; the grid in California, the state with the most EVs in America, is currently facing this grid-upgrade problem [2]. The effect of EVs on the power distribution grid has been investigated in [3-5], and the involved impact factors can be summarized as traveling patterns, battery characteristics, charging schedule, and EV penetration, as shown in Figure 1.
From the analysis of the Danish national transportation survey data, the EV traveling model can be established with the driving distance and driving periods as the statistical data, so that the power demand and expected charging time of EVs can be determined [6]. The data demonstrate that the average driving distance of the EVs is 29.48 km, while the daily driving distance of 75% of the EVs is less than 40 km. Therefore, a 40 km driving distance can be used to determine whether the EV battery capacity can meet the daily traveling requirement. Moreover, EVs can be considered as mobile loads connected between the power system and the transportation system, where the random mobility and charging of the EVs depend on the stochastic traveling patterns of the EV owners [7]. Graph theory was used to analyze the traffic and power networks based on the driving patterns, and the randomly mobilized characteristics of the EV charging load were modelled accordingly.
Besides the requirement of power quality (i.e., voltage and frequency) during EV charging, the required charging power is of the most concern for the power supply utilities [8]. According to standard J1772 of the Society of Automotive Engineers [9], charging power can be divided into three levels, where grade 1 (1.5-3 kW) and grade 2 (10-20 kW) are suitable for home use, and grade 3 (40 kW and above) is more suitable for EV fast charging, by which EV charging can be completed within one hour; however, it is mostly used in centralized charging scenarios due to its overly high power.
The charging plan is normally determined by two factors: charging availability (charging situation) and charging strategy [10]. Since the energy consumption of the daily EV journey is less than the battery's rated capacity, it is unnecessary to charge the EV on a daily basis. The EV starts to be charged only when its battery state-of-charge (SOC) is less than a threshold. The EV charging strategies play an important role in the evaluation of the EV influence on the power system [11]; they can be divided into (1) simple charging (dumb charging), i.e., the unplanned "plug and play" pattern, generally charged at the end of the last trip of the day or when a charging facility is available; (2) tariff-driven charging, i.e., charging during off-peak periods at cheaper cost; and (3) intelligent charging, i.e., the extra battery energy can be used as an energy resource to provide ancillary services during high-peak hours, which is beneficial to stable grid operation.
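The three strategies can be made concrete with a minimal sketch of the start-time decision for a single EV; the SOC threshold, tariff window, and scenario values below are hypothetical placeholders, not parameters from the cited studies:

```python
from dataclasses import dataclass

@dataclass
class EV:
    soc: float            # current state of charge, 0..1
    arrival_hour: float   # hour of day the EV is plugged in

SOC_THRESHOLD = 0.4       # hypothetical: charge only below this SOC
OFF_PEAK = (23, 7)        # hypothetical off-peak tariff window (23:00-07:00)

def dumb_charging(ev: EV):
    """(1) Plug-and-play: start immediately at arrival if SOC is low."""
    return ev.arrival_hour if ev.soc < SOC_THRESHOLD else None

def tariff_driven(ev: EV):
    """(2) Delay the start to the off-peak window to lower the bill."""
    if ev.soc >= SOC_THRESHOLD:
        return None
    start, end = OFF_PEAK
    in_window = ev.arrival_hour >= start or ev.arrival_hour < end
    return ev.arrival_hour if in_window else start

def intelligent(ev: EV, grid_peak: bool):
    """(3) Smart mode: defer (or discharge, V2G-style) during grid peaks."""
    if grid_peak:
        return None  # hold off, or feed surplus energy back to the grid
    return tariff_driven(ev)

ev = EV(soc=0.3, arrival_hour=18.5)
print(dumb_charging(ev), tariff_driven(ev), intelligent(ev, grid_peak=True))
```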
Another important aspect of analyzing the influence of EVs on the grid is to design quantitative evaluation indices. In [12], based on the insular power system of Crete in Greece, the impact of plug-in EVs with smart and direct charging patterns on grid energy scheduling and cost was studied. An EV aggregator that participates in the market is proposed with a price-taking approach for optimally solving the self-scheduling problem. Comparing different scenarios of EV penetration, an irregular performance appears in the unit commitment schedule when the number of EVs reaches 21,000. Paper [13] introduces an architecture for an intelligent energy management system, which is composed of an admission control module, a pricing module, and a power scheduling module that determines the charging sequence of EVs. The queuing problem of power dispatch is formulated as a stochastic dynamic programming problem, and a threshold admission and greedy scheduling method is adopted to minimize the electricity cost.
Based on this review of EV impacts on the grid, it can be seen that EV popularization will bring a non-negligible new electric load. At present, however, most research conclusions about large-scale EV charging are drawn from simulation experiments. There are obvious discrepancies between gas station facilities and EV supporting facilities among different cities, so how to improve the credibility and universality of the prediction results is still a challenge. Therefore, several issues should be addressed to improve load prediction accuracy, i.e., driving and charging models under appropriately practical conditions, traveling probability distribution models, and multiobjective EV charging patterns.
Furthermore, most works merely consider the price incentive in the charging modes, and the relation between EV traveling behaviours and charging schemes has not been accurately described by mathematical models [14,15]. This paper investigates large-scale EV charging behaviours as a multiobjective optimization problem, and the main contributions are summarized as follows: (1) from the perspectives of the benefits of the EV users and power suppliers, a comprehensive evaluation index system has been developed with three key factors, load peak value, charging bills, and traveling rate, which are used as the objectives of the charging optimization model.

The remainder of the paper is organized as follows. Section 2 analyzes the energy usage status of the EVs, and the optimized indices are determined. The statistical model of the traveling pattern and the charging strategies of the EVs are introduced in Section 3; the Monte Carlo simulation algorithm used to solve the nonlinear programming model is presented in the same section. In Section 4, a multiobjective charging strategy is developed with optimal parameters, and the proposed strategies are compared via simulation experiments as well. Conclusions and future work are given in Section 5.
2. Problem Formulation
2.1. System Description. According to the traffic survey in Beijing [17], the traveling periods of private EVs during working days are concentrated in 6:00-9:00 (morning peak) and 16:00-19:00 (evening peak), and 40.3% of the EV parking time (slack time > 5 hours) is distributed among 18:00-21:00, as depicted in Figure 3. If EV charging is performed without any guidance, a peak load will be generated by large-scale charging concentrated in the parking periods. According to the 2020 Shenzhen EV growth plan [18], it will be difficult for the current power grid to satisfy the energy demand of the EVs, which could result in power supply tension or even endanger the safety of grid operation [19]. Furthermore, more investment would be needed to satisfy a higher peak load requirement, resulting in large-scale equipment upgrades and capital budget pressure. On the other hand, due to constrained battery capacity and limited charging piles, the EV traveling plans would be hindered without an appropriate charging schedule, which is currently one of the most important reasons limiting the popularity of EVs.
Therefore, the objective of studying the effect of large-scale EV charging on the grid load (hereinafter referred to as the "EV charging problem") is summarized as follows: based on the routine traveling habits of EV users, an optimal charging strategy is sought that reduces the grid peak load, lowers the cost for EV users, and guarantees the success of the traveling plan; this can be classed as an NP-hard problem.
First, the relationship between the traveling patterns and charging behaviour is analyzed so as to extract the involved key factors and determine the evaluation indices. The total power required for EV charging from the grid at a specific moment is determined by four main impact factors, i.e., the battery characteristics (Cap, SOC), the charging piles (W_c), the user behaviour (F_d, ..., Q_c), and the current EV number N_ev. The characteristics of the battery pack, the charging piles, and the current EV number can be taken as known variables, while the variables describing user behaviour are unknown and will be determined by mathematical models.
For this type of NP-hard problem, which cannot be solved directly, the MC simulation method is adopted to simulate the EV operation system under different charging strategies in order to obtain the optimal EV charging mode. Based on the statistics, different probability distributions are selected to describe the EV driving behaviours so as to improve the credibility of the MC simulation results. Finally, several commonly used optimization algorithms are introduced to adjust the parameter configuration of the charging modes so as to realize the optimal comprehensive index.
According to the technical specifications of small EVs and the common characteristics of urban residents' vehicle traveling habits, several assumptions have been made to describe the complex EV charging problem without loss of generality.
(1) The EV battery and the parameters related to the charging piles are set similar to those of commercial products.
(2) Each day (24 hours) is divided equally into 96 intervals, i.e., 15 minutes as the sampling period.
(3) The traveling frequency, departure-parking time, and driving mileage are the main factors directly affecting the EV charging behaviour, i.e., the charging period (T_c), the initial SOC (SOC_ini), and the required charging quantity (Q_c). So the traveling patterns and the charging strategies are the two key factors to be focused on.
(4) In order to guarantee the success of the expected traveling plan, it is assumed that the battery packs will be fully charged at each charging.
(5) The traveling patterns of personal EVs on working days are considered; other types of EVs or driving patterns during holidays/weekends will not be addressed unless otherwise specified.
(6) The urban residents in Shenzhen and in Beijing share the same traveling pattern.

Replacing the input module (shown in Figure 2) with specific traffic data, the problem formulation and models can be extended to other urban areas or to holiday/weekend driving patterns. Thus, the designed framework has the advantage of solving a universal problem and helps to obtain reasonable results in different situations.
2.2. The Evaluation Index System. The different evaluation indices are listed in the Nomenclature after the Conclusions; they are developed from the perspectives of the power suppliers (W_peak, W_average, APR, R_load) and the EV owners (C, C_max, R_save, R_trip). Since the average-peak ratio (APR), saving rate (R_save), and traveling rate (R_trip) are the indices of greatest concern to the energy suppliers and EV owners [10,20], they are used to form the comprehensive index Y, which is defined as

$$Y = \alpha\, N(\mathrm{APR}) + \beta\, N(R_{save}) + \gamma\, N(R_{trip}), \tag{1}$$

where α, β, and γ are positive weight coefficients (≤ 1.0) that can be set flexibly. Here, α = 0.4, β = 0.3, and γ = 0.3, denoting that the load balance index is taken as the priority. A higher value of Y means higher system performance. N(·) normalizes each index value by

$$N(X) = \frac{X - X_{min}}{X_{max} - X_{min}}, \tag{2}$$

where X_min and X_max are the bounds of X. The normalization improves the sensitivity of the optimization algorithm to index variations so as to avoid premature convergence.
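As a quick illustration, the sketch below computes Y from equations (1) and (2). It is a minimal sketch, not the paper's released code, and the index bounds used in the example are hypothetical.

```python
# Comprehensive index Y of equation (1) with the min-max normalization of
# equation (2). Weights alpha=0.4, beta=0.3, gamma=0.3 follow the text.

def normalize(x, x_min, x_max):
    """Min-max normalization N(x) = (x - x_min) / (x_max - x_min), equation (2)."""
    return (x - x_min) / (x_max - x_min)

def comprehensive_index(apr, r_save, r_trip, bounds, alpha=0.4, beta=0.3, gamma=0.3):
    """Weighted comprehensive index Y of equation (1); higher Y is better."""
    n_apr = normalize(apr, *bounds["APR"])
    n_save = normalize(r_save, *bounds["R_save"])
    n_trip = normalize(r_trip, *bounds["R_trip"])
    return alpha * n_apr + beta * n_save + gamma * n_trip

# Example bounds observed across all simulated strategies (hypothetical numbers).
bounds = {"APR": (0.2, 0.9), "R_save": (0.0, 0.8), "R_trip": (0.9, 1.0)}
print(comprehensive_index(apr=0.55, r_save=0.66, r_trip=0.995, bounds=bounds))
```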
3. The EV Load Estimation Based on Monte Carlo Simulation
After the optimization objective and key impact factors are determined, the EV charging problem can be configured as a nonlinear programming model. Under the constraints of the EV battery characteristics and user traveling patterns, the objective functions can be established based on the different demands of the power suppliers and EV users so as to obtain the optimal charging schemes. The undetermined variables in the model, i.e., W_i, are quantified via the MC simulation tool, which simulates the daily EV traveling, charging, and slack states. Furthermore, statistical models of the traveling habits and EV charging behaviour have been established to improve the credibility of the MC simulation.
3.1. The Probability Distribution Fitting Model for the Traveling Variables. According to the data collected from GPS units installed on 112 private EVs in Beijing [17], a total of 4892 records from June 2012 to March 2013 are used; the relevant information is outlined in Table 1.
From the analysis of the EV driving data on working days, the probability density function (PDF) and its parameters for each traveling variable can be determined as follows (see Figure 4): (1) select candidate PDFs from the commonly used probability functions, i.e., the exponential, gamma, normal, and Poisson distributions, called "PDF-x"; (2) estimate the scale and shape parameters of each "PDF-x" through maximum likelihood estimation (MLE) [21]; (3) verify whether 1000 generated groups of data accept the original hypothesis at a confidence level of 95% (significance level α = 0.05) using statistical methods such as the K-S test, F-test, and t-test [22]; (4) from the resulting P values, obtain the fitting degree of each "PDF-x" model; (5) after iterative comparison, select the best-fitting PDF as "PDF-best". The obtained "PDF-best" of each variable is then applied to the simulated system, so that the input values follow specific probability distributions in accordance with the actual vehicle traveling patterns. Although certain biases exist for F_d and D_t^pm (shown in Figures 5(a) and 5(d)), different hypothesis testing methods (including the chi-square test, Kolmogorov-Smirnov test, and t-test) were adopted and verified that these deviations are acceptable with sufficient statistical samples; the detailed results are available in our open-source project [16]. Therefore, the designed MC simulation model is quite reliable, with high credibility.
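The selection loop in steps (1)-(5) can be sketched in a few lines of Python. The candidate set and the K-S goodness-of-fit test follow the description above; the synthetic data and the exclusion of the discrete Poisson fit are illustrative simplifications.

```python
# Fit each candidate PDF by MLE, score it with the Kolmogorov-Smirnov test,
# and keep the best-fitting one ("PDF-best").
from scipy import stats

candidates = {"expon": stats.expon, "gamma": stats.gamma,
              "norm": stats.norm}  # Poisson is discrete and needs a separate fit

def select_pdf(samples):
    best_name, best_params, best_p = None, None, -1.0
    for name, dist in candidates.items():
        params = dist.fit(samples)                             # MLE parameters
        _, p_value = stats.kstest(samples, name, args=params)  # K-S goodness of fit
        if p_value > best_p:
            best_name, best_params, best_p = name, params, p_value
    return best_name, best_params, best_p

# Hypothetical daily traveling frequencies; the paper reports Gamma(a=3.71, b=0.64).
data = stats.gamma.rvs(3.71, scale=0.64, size=1000, random_state=0)
print(select_pdf(data))
```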
The fitted distributions of the traveling variables are as follows:

(1) The daily traveling frequency, F_d, follows the Γ distribution,

$$f(x) = \frac{1}{\Gamma(a)\, b^{a}}\, x^{a-1} e^{-x/b}, \tag{3}$$

where Γ(a) is the gamma function, x is the random variable, f(·) is the PDF of F_d, and a and b are the shape parameter and scale parameter, respectively. Through MLE, a = 3.71, b = 0.64, the expectation mean = 2.39, and the variance var = 1.24 are obtained. The comparison between the actual data and the fitted curve of the traveling frequency is shown in Figure 5(a).

(2) The driving mileage of each traveling, M_d, follows the Birnbaum-Saunders (BS) distribution, where f(·) is the PDF of the BS distribution and β and γ are the scale parameter and shape parameter, respectively. Through MLE, β = 10.57, γ = 0.97, mean = 15.52, and var = 15.09 are obtained. The probability density curves of the actual data and the BS fit are shown in Figure 5(b).

(3) The duration of each traveling, T_d, also follows the Γ distribution. Supposing each traveling duration obeys the gamma distribution expressed in equation (3), the parameters acquired via MLE are a = 1.87, b = 18.35, mean = 34.4, and var = 25.12. The fitted gamma curve and the actual data curve are depicted in Figure 5(c).

(4a) The departure time of each traveling in the morning, D_t^am, follows a three-parameter distribution whose PDF involves the gamma function Γ(·), where μ, σ, and ν are the location parameter, scale parameter, and shape parameter, respectively. Through MLE, mean = μ = 8.36, σ = 1.08, ν = 2.16, and var = 3.98 are obtained.

(4b) The departure time of each traveling in the afternoon, D_t^pm, follows the normal distribution. Through MLE, the expectation is μ = 18.2 and the standard deviation is σ = 2.84. The fitted probability density curve and the actual data curve are shown in Figure 5(d).
3.2. Modelling the Charging Strategies. EV users may select different charging strategies, i.e., set a specific target or charge during the optimal periods. For instance, under the incentive of time-of-use (ToU) electricity prices [23], users would charge EVs during parking periods with lower electricity prices, which is called the tariff-guiding strategy. The three commonly used charging strategies are described in Table 2.
Equation (7) describes a unified quantization formula for the users' charging motivation. By changing the weight coefficients W_i (i = 1, 2, 3) of the three charging strategies, the charging strategy models can be derived quantitatively. Consistent with the single-strategy special cases in Table 2, the combined priority is

$$P = W_1 R + W_2\left(1.5 - \frac{C}{C_{TOC}}\right) + W_3\left(1.5 - \frac{T_{charge}}{T_{slack}}\right) + U_s, \tag{7}$$

where P is the charging priority of a given period, i.e., the EV starts charging when P > 0.5; R is a random priority factor following the uniform distribution with R ∈ [0, 1], used to simulate random charging behaviour; T_charge and T_slack are the predicted number of time intervals required to become fully charged and the number of slack time intervals, respectively; and U_s denotes that if the current SOC of the EV satisfies SOC_curr ≤ SOC_min, the EV is charged immediately. In order to ensure the reliability of the traveling plan, SOC_min = 0.2. C and C_TOC are the average charging cost of a given charging window and the average charging cost of one day. The former is calculated as

$$C = \frac{1}{T}\sum_{i=t}^{t+T-1} C_i, \tag{8}$$

where T is the continuous charging duration and C_i is the ToU price at the i-th moment, as listed in Table 3; C_TOC is the corresponding average over the whole day, here C_TOC = 0.57 Yuan/kWh. By comparing the priority levels of the intervals within the EV slack periods, the charging moment with the highest priority (e.g., the maximum random value, the least cost, or the longest parking time) can be selected to achieve the maximum benefit. The pseudocode of the charging procedure is shown in Algorithm 1.

Table 2: Description of the charging strategies.
- Randomness: when W_1 = 1.0 and SOC_curr > SOC_min, charging at P = R > 0.5 denotes that charging is a random behaviour that follows the uniform distribution.
- Tariff guidance: when W_2 = 1.0 and SOC_curr > SOC_min, charging at P = 1.5 − C/C_TOC > 0.5 denotes that charging starts when the current average charging cost is lower than the daily average cost, i.e., C < C_TOC, which encourages users to charge EVs during the parking periods with the lowest electricity prices.
- Charging at parking: when W_3 = 1.0 and SOC_curr > SOC_min, charging at P = 1.5 − T_charge/T_slack > 0.5 denotes that charging starts immediately after arrival if the parking time is longer than the required full charging time, i.e., T_charge < T_slack; since T_slack decreases over time, the maximum priority value P is achieved at the moment of parking.
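A minimal sketch of the priority rule in equation (7) follows. The weights and cost figures passed in are hypothetical, while SOC_min = 0.2 and C_TOC = 0.57 Yuan/kWh follow the text.

```python
# Charging-priority rule of equation (7): each strategy weight W_i selects one
# motivation term, charging starts when P > 0.5, and a SOC below SOC_min
# forces immediate charging (the U_s term).
import random

def charging_priority(W, soc_curr, c_avg, c_toc, t_charge, t_slack, soc_min=0.2):
    if soc_curr <= soc_min:                       # U_s: charge immediately
        return 1.0
    w1, w2, w3 = W
    return (w1 * random.random()                  # randomness term R ~ U(0, 1)
            + w2 * (1.5 - c_avg / c_toc)          # tariff-guidance term
            + w3 * (1.5 - t_charge / t_slack))    # parking-time term

P = charging_priority(W=(0.2, 0.5, 0.3), soc_curr=0.6,
                      c_avg=0.41, c_toc=0.57, t_charge=8, t_slack=20)
print("start charging" if P > 0.5 else "wait")
```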
3.3. Monte Carlo Simulation. Based on the EV traveling and charging statistics, MC numerical simulation is used to compute the output performance indices of the different charging schemes.
3.3.1. The Basic Concept of the MC Method. The MC method, also known as stochastic simulation or the statistical testing method, uses random sampling to estimate mathematical functions. MC has statistical convergence, in the sense that the fitting deviation converges to within a certain threshold.
The prediction model for the EV charging capacity is developed based on MC random sampling tests. The battery charging capacity is calculated per day, evenly divided into 96 intervals, and the total charging capacity in the i-th time interval is

$$W_i = \frac{1}{D} \sum_{d=1}^{D} \sum_{j=1}^{N} h_{ij}^{d}, \tag{10}$$

where h_ij^d represents the charging capacity of the j-th EV in the i-th time slot on the d-th workday, N represents the number of EVs acquiring power from the grid in the i-th time period, and D is the total number of counted days. Several constraints are set in the simulation. Considering the condition that the battery should be fully charged before driving, the starting time of battery charging is limited as

$$T_{0j} \le t_j \le T_{1j} - \Delta T_j, \tag{11}$$

where T_0j and T_1j are the starting and ending moments of the slack status of the j-th EV, ΔT_j is the maximum continuous charging duration, Qc_j is the charging capacity, SOC_ini,j is the initial state of the j-th battery capacity, which is related to the driving distance per trip, and t_j denotes the starting charging moment under the fully-charged condition. The convergence condition adopted in the MC sampling model is expressed as

$$\beta_i = \frac{\sigma_i}{y_i} = \frac{\sqrt{V_i}}{y_i}, \tag{12}$$

where β_i is the variance coefficient of the system indicator at the i-th moment, and V_i, y_i, and σ_i are its variance, expectation, and standard deviation. The number of repetitions in the MC simulation is at least 100, and the variance coefficient β_i is required to be less than 0.5%.
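A small sketch of the stopping rule in equation (12): the simulation is repeated at least 100 times and declared converged once the variance coefficient of the monitored indicator falls below 0.5%.

```python
# Monte Carlo stopping rule of equation (12): beta = sigma / y over the
# repeated runs, with at least 100 repetitions and tolerance 0.5%.
import numpy as np

def mc_converged(run_values, tol=0.005, min_runs=100):
    x = np.asarray(run_values, dtype=float)
    if x.size < min_runs:
        return False
    return x.std(ddof=1) / x.mean() < tol   # beta_i = sigma_i / y_i < 0.5%

# Example: keep simulating until the comprehensive index Y stabilizes.
rng = np.random.default_rng(0)
runs = list(rng.normal(0.62, 0.001, size=120))  # hypothetical Y values per run
print(mc_converged(runs))
```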
3.3.2. The Comprehensive Index Calculation via MC Simulation. Firstly, according to the user traveling model, the daily driving duration and mileage are determined, while the EV charging time period depends on the traveling status, the characteristics of the battery, and the adopted charging strategy. After the EV driving/charging periods are determined, the total driving mileage, charging power, and charging bills can be calculated over the 96 time intervals per day. Finally, the comprehensive index and the related variance coefficients can be computed based on equations (1) and (12) to complete the whole-day simulation.
According to the Shenzhen EV demonstration and promotion plan, the number of private EVs will reach 240,000 in 2020, which is used for the charging load calculation. Based on the probability distribution model of the traveling pattern and the three types of charging strategies, the simulated large-scale EV operation procedure via MC simulation is shown in Figure 6. The related indices of the predicted grid load per day in 2020 are shown in Table 4.
It can be seen from Table 4 that there are large differences in the peak loads and the APR among the different charging strategies. Compared with the traveling habits of EV users, the EV charging behaviours are more controllable through appropriate guidance. Hence, the charging strategy is one of the most feasible optimization objectives.
4. Multiobjective Charging Strategies for the Large-Scale EVs
According to the government report, the grid load of Shenzhen exceeded 15,598.6 MW at 11:10 AM on 27 June 2016, and the total EV charging load will exceed 5000 MW per day in 2020. Moreover, with the random and parking-charging patterns, EV owners choose to charge during peak periods, i.e., 9:00-14:00 and 20:00-23:00, adding 1.57% and 1.97% of the historical peak value, respectively. From the grid safety perspective, this could easily cause a peak superposition of electricity usage, thus obviously putting heavy pressure on the grid accommodation capacity.
On the other hand, EV charging with tariff guidance can avoid charging during the high-peak periods, but it generates another charging peak of 824.64 MW over a shorter period between 23:00 and 2:00, as shown in Figure 7. Thus, adopting only the ToU pricing mechanism would produce other concentrated charging periods, and the resulting abrupt, steep peak in electricity usage would seriously endanger the stability of grid operation.
From the comprehensive index perspective, the three charging strategies cannot provide satisfactory performance when only a single factor is considered. For instance, the charging-at-arrival pattern can guarantee the success of traveling but produces higher bills; moreover, charging too frequently is harmful to battery life. If only the pricing-based charging mode is adopted, the tariff is lowest during midnight; however, in addition to the sharp peak caused over a short period of time, the required electricity cannot be guaranteed if emergency traveling is needed.
Here, considering the impact factors involved in the system indicator of equation (1), the three single indices and the comprehensive index are configured as the objective set, denoted by f_obj. A combined multiobjective charging strategy can then be proposed with the developed nonlinear programming model, defined as

$$\max_{W_1,\, W_2,\, W_3,\, SOC_{min},\, \delta} f_{obj} \quad \text{s.t.} \quad W_i \in [0, 1], \;\; SOC_{min} \in [0, 1], \;\; P > 0.5 + \delta, \tag{13}$$

where the W_i are the coefficients described in equation (7), and SOC_min directly impacts the required energy supply and the traveling distance of the next trip. Given that the EV starts charging when P > 0.5, δ is added as a model bias to make the charging decision threshold more reasonable. These are the parameters to be optimized.
For multiobjective problems, previous studies suggest gradient-based solutions [24,25]. As described in [25], when the objective function is nondifferentiable, gradient-based optimization techniques are not suitable for dynamic programming models with constraints. Moreover, there are many potential local extrema in the charging model, and the gradient descent algorithm can easily lead to pseudoconvergence. Considering that our problem has equality and inequality constraints, and that heuristic algorithms do not require the objective function to be continuous and differentiable, a multiobjective optimization algorithm based on feature selection is proposed to find the globally optimal combination of system parameters. First, the objective function f_obj is established; the three most popular heuristic algorithms, i.e., particle swarm optimization (PSO) [26], the genetic algorithm (GA) [27], and simulated annealing (SA) [28], are selected to search the nondominated set. Then the maximized comprehensive index, max Y, can be extracted from the Pareto-front space, so that a unique optimal solution is obtained via the proposed algorithm.
4.1. Optimal Searching Algorithm under Constraints. According to the constraints expressed in equation (13), a feature-selection-based algorithm is proposed that transforms the problem from continuous optimization into discrete optimization. First, the involved variables are coded into binary form. Then feature selection is applied to optimize the feature code of each individual so as to improve the quality of the variable combinations under the constraints. The related open-source algorithms can be found in [16], and the detailed steps are described as follows.
Step 1. Preprocess each input variable, W_i, SOC_min, and δ, as 8-bit binary coded data (Algorithm 2).

Step 2. Initialize the particle swarm; each individual is expressed as an 8-bit × 5 (variables) binary code.

Step 3. With the adoption of feature selection algorithms, e.g., BPSO, optimize the feature code of each individual so as to improve the quality of the variable combination.

Step 4. Perform the inverse procedure of Step 1 and transform the binary code of each individual back into the variable-set form in order to calculate the individual fitness. The pseudocode describing the equality constraint is displayed in Algorithm 3.

Step 5. Run the MC procedure to acquire the fitness; the converged negative index (1 − f_obj) is taken as the individual fitness. The related parameters are set to speed up the calculation and guarantee convergence: days = 30, EVs = 100, and variance = 0.005.

Step 6. If the optimality condition is satisfied or the maximum number of iterations has been reached, the procedure stops and the optimal variable combination is obtained; otherwise, go to Step 3 for the next iteration.

Compared with general constraint-handling methods, such as the penalty function [29], the proposed method is simple and efficient because it transforms the continuous combination problem into a binary feature optimization problem. It avoids invalid searching caused by variables going out of bounds in each iteration. Therefore, binary feature optimization can be applied within multiple intelligent optimization algorithms.
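Steps 1 and 4 (the binary coding and its inverse) can be sketched as follows. The 8-bit resolution follows the text, while the variable bounds in the example are illustrative assumptions.

```python
# Each continuous variable in (W_1, W_2, W_3, SOC_min, delta) is coded as
# 8 bits, so an individual is a 40-bit string that binary feature-selection
# algorithms (e.g., BPSO) can manipulate directly.
N_BITS = 8

def encode(x, lo, hi):
    """Map x in [lo, hi] to an 8-bit code (Step 1)."""
    level = round((x - lo) / (hi - lo) * (2**N_BITS - 1))
    return [int(b) for b in format(level, f"0{N_BITS}b")]

def decode(bits, lo, hi):
    """Inverse mapping from the 8-bit code back to a real value (Step 4)."""
    level = int("".join(map(str, bits)), 2)
    return lo + level / (2**N_BITS - 1) * (hi - lo)

w1_bits = encode(0.4, 0.0, 1.0)
print(w1_bits, decode(w1_bits, 0.0, 1.0))  # round-trips to ~0.4
```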
The PSO, GA, and SA are typical algorithms for solving multiobjective problems with constraints. Combined with the basic feature selection method, they are used to solve the objective in equation (13) under multiple constraints. The involved parameter settings are listed in Table 5.
4.2. Experimental Results and Analysis. In order to investigate the impact of large-scale EV access on the grid load features, the simulations were kept running for 3 days on two cluster servers, whose hardware is listed in Table 6. The EV charging behaviours are analyzed under multiple charging strategies, and the performance index differences of the multiobjective schemes are also discussed.

4.2.1. The Convergence Analysis of the MC Process. The results of the EV charging over 100 days under different charging strategies via MC simulation are shown in Figure 8. The variance coefficient of the comprehensive Y value (calculated by equation (13)) converges below 0.5%, so the reliability satisfies the MC simulation requirement.
Based on the traveling model with established credibility and the different charging modes, MC is used to simulate 240,000 EVs driving over 100 working days. After the convergence of the simulated system is achieved, the daily average charging load curves are acquired for the four different strategies, as shown in Figures 9 and 10(a).
4.2.2. The Performance Comparison of the Charging Strategies. The index comparison of the different charging schemes is illustrated in Figure 10. First, under the random strategy, new loads are added as a superposition during the daytime peak periods, and scheduled EV trips are cancelled with higher probability due to insufficient capacity (see Figure 10(c)). Moreover, a sharp peak load is generated at midnight with the tariff-guiding strategy. As for the parking-charging strategy, a peak load as well as the highest electricity payment during the evening period is generated (see Figure 10(b)).
However, the multiobjective charging strategy can achieve a compromise among the different indicators (see Figure 9). The essence of the multiobjective approach is to balance the resource allocation among the indicators, and the selection of weights in the formula is rather subjective. Therefore, a heuristic algorithm is used to determine the Pareto optimum of the variables.
Table 7 summarizes the indices of the different charging strategies during daily driving. The APR is highest with the random charging mode; the charging tariff is lowest with the tariff-guiding mode; the highest rate of successful traveling is achieved via the parking-charging mode; and the best comprehensive index is achieved via the multiobjective charging strategy. The results demonstrate that the combined model described in equation (13) and the parameter configuration obtained via SA can guide the charging behaviour effectively and improve the performance indices greatly.
4.3. Observations and Discussion. The experimental results shown in Figure 10(a) indicate that the power load will reach 245.01 MW during peak periods in 2020, with 240,000 EVs charging without any guidance in Shenzhen. The power suppliers would have to take measures to upgrade the current grid infrastructure and develop the smart grid to reduce the obvious peak-valley difference. Thus, based on feature selection, three optimal searching algorithms considering constraints are proposed and applied to solve the multiobjective charging problem. As listed in Table 7, under the multiobjective strategy and its optimized parameter configuration, the APR is increased by 11%, the charging cost is reduced by 66.2%, and the successful traveling rate is 99.5%. Therefore, the multiobjective combination of charging strategies provides a maximized comprehensive benefit for the EV users and power suppliers. From the users' perspective, it is suggested that the EV should be charged promptly once its SOC falls below 56.9%.
Unlike other studies that focus on verifying the accuracy of the load prediction [30,31], this paper designs a universal model to determine the optimal periods for stand-alone charging behaviours, requiring neither direct communication between the grid and the EVs [11,13] nor online adjustment of the electricity price [12]. Following the multiobjective strategy, an on-board programmable control system can automatically estimate the optimal periods to charge the EV during the slack periods set by the EV owner. Not only can the traveling plan be guaranteed, but electricity bill savings and off-peak energy usage can also be achieved simultaneously.
5. Conclusions
This paper investigates large-scale EV charging behaviours and discusses intelligent charging strategies. First, a probability distribution model of the traveling pattern based on actual traffic data is established to improve the credibility of the prediction. Considering the benefits for the EV users and power suppliers, the evaluation indices are selected, including the load peak value, charging bills, and traveling rate. Under the constraints of the EV battery features and the users' traveling statistics, a multiobjective charging strategy has been developed, where the Pareto optimum of the parameters is determined via a novel constraint-handling method with feature selection. The Monte Carlo tool is adopted to simulate the EV activities, and an open-source system with a universal framework is released to promote research on the grid impact of large-scale EV access.
In summary, the adopted probability distribution statistics, MC simulation, evaluation indices, and EV charging strategies together form a universal framework for analysing the grid impact of large-scale EV charging.

Nomenclature

Traveling pattern
M_d: The driving mileage of each traveling, unit: km
T_d: The duration of each traveling
D_t: The departure time of each traveling
T_c: Charging periods
SOC_ini: The SOC at the initial charging moment
Q_c: The required charging power, unit: kW/h; Q_c = (1.0 − SOC_ini) × Cap × Vol.

EV scale
N_ev: The amount of EVs.

Evaluation index
W_peak: Daily charging peak value
W_average: Total load/time interval
APR: W_average/W_peak
R_load: Load rate = W_peak/grid peak
C: Charging cost = electricity price × power consumption
C_max: The highest price × charging quantity
R_save: Saving rate = 1 − C/C_max
R_trip: Actually driven mileage/planned mileage
Y: The comprehensive index; a higher value means higher system performance.

Multiobjective optimization model
f_obj: The three single indices and the comprehensive index make up the objective set
W_i: The coefficients of the factors (i = 1, 2, 3, i.e., randomness, price guidance, and slack time)
SOC_min: The lowest limit of the battery SOC
δ: The model bias.
Figure 1: The impact factors of EV charging to the power system.
Figure 2: The framework for EVs traveling and charging behaviours analysis.
Figure 3: Daily traveling and parking behaviours of EVs in Beijing.
Figure 4: Diagram of the distribution function selection of the involved traveling variables.
Figure 5: Fitting analysis of the EVs traveling historical data.
Figure 6: The MC simulation of the EV driving and charging behaviour per day.
Figure 9: The hourly charging load via the multiobjective strategy.
Figure 10: Index comparison of the hourly charging under four different charging strategies.
Table 1: The involved five factors for EV traveling pattern analysis.
Table 3: Time-of-use electricity prices per day (unit: Yuan/kWh).
Table 4: The EV daily charging load index prediction in 2020 (unit: MW).
Table 5: Parameter settings for the three heuristic algorithms.
Table 6: The hardware environment of the experiments.
Table 7: The index comparison of different charging strategies during daily driving (columns: charging strategy, total load (MW), APR, charging bill (Yuan), saving rate, aborted trips (times), traveling rate).
Dependence of Lattice Distortion and Dielectric Response of Zinc Aluminate on Milling Frequency
Zinc aluminate (ZnAl2O4) samples were prepared using a nanomilling-based solid state reaction method for several potential applications. The effect of milling frequency on the structural and dielectric behavior of ZnAl2O4 has been explored systematically. Investigation of the crystal structure reveals that the change in lattice parameter caused by milling does not alter the cubic lattice of ZnAl2O4. Increasing the milling frequency resulted in a gradual decrease in the particle size, which can be attributed to inhomogeneous defects. The crystallite size in nanometers has been calculated from XRD using the Debye-Scherrer formula. Dielectric measurements performed in the range of 20 Hz-20 MHz confirm the Maxwell-Wagner two-layer model, which is consistent with Koop's theory. The high value of the ac conductivity indicates that milling blocked the ionic transport. The Nyquist plots show a single semicircle, which indicates the leading role of the grain (bulk) contribution; the variation of the semicircle radii among the samples is due to the influence of the milling frequency.
Introduction
Among ceramics, the spinel family is an ideal class of materials for forming ternary oxides because of its significant electrical, mechanical, magnetic and optical properties. The common chemical formula of spinel-structure oxides is AB2O4. Compounds with the spinel structure can host a wide range of divalent, trivalent, and tetravalent cations. Such compounds are observed to have wide band gaps, which makes them attractive for photoelectronic and optical applications [1]. A high melting temperature, high strength, and high resistance to chemical attack are further exceptional properties of these compounds. Among the spinel family, zinc aluminate (ZnAl2O4) has attracted attention because of its unique properties, such as high thermal and chemical stability, low acidity, a hydrophobic nature, high mechanical resistance and high quantum yields [2][3][4][5]. It can therefore be used as a catalyst for the fabrication of polymethylbenzenes, the preparation of styrene from acetophenones, the double bond isomerisation of alkenes, the dehydration of saturated alcohols to olefins, and the preparation of methanol and higher alcohols [6,7]. The wear resistance and mechanical properties of white ceramic tiles can be enhanced by introducing ZnAl2O4 as a second layer in glaze sheets [8]. Furthermore, ZnAl2O4 is appropriate for UV optoelectronic applications because it is transparent beyond a 320 nm wavelength [9]. It is necessary to consider the microstructure of the material to achieve the desired performance targets, because the electrical, mechanical, magnetic and optical properties are determined by the particle size.
Several methods, such as solid state reaction [10][11][12], co-precipitation [13,14], sol-gel [15,16] and hydrothermal synthesis [17,18], have been employed to prepare ZnAl2O4 powders. Every method has its merits and demerits. However, the milling-based solid state reaction is an easy way to prepare composite materials because of its specific features, such as low cost, small energy consumption and time savings. In the milling process, milling balls rotating at high speed in a closed vessel modify the material properties. Hard materials like steel and tungsten carbide are used to fabricate the milling balls, which crush the materials from millimeter down to nanometer sizes. Crystal defects such as dislocations, vacancies, and deformation networks are induced by milling. The number of interfaces increases and the particle size is reduced into the nanometer range by the fracturing process [19]. The milling frequency and milling time are important parameters, since the required energy and the concentration of defects in the milled materials are directly related to them. In the present study, ZnAl2O4 samples were prepared by a nanomilling-based solid state reaction method, and the effect of the milling frequency on the crystal structure and the dielectric properties of the ZnAl2O4 specimens was explored.
Experimental Details
Zinc aluminate was prepared by solid state reaction. A mixture of ZnO (99.99% pure) and Al2O3 (99.99% pure) was prepared at room temperature. The following straightforward reaction yields AB2O4-type compounds:

ZnO + Al2O3 → ZnAl2O4
Five balls ground the mixture in a ceramic container for two hours at a frequency of 20 Hz. After milling, pellets were formed with a hydraulic press; a pressure of 27.58 MPa was applied for 2 minutes to prepare each pellet. The milling frequency, area and thickness of the pellets are given in Table 1. After this, all samples were heated at 1000 °C in an oxidizing environment for two hours. The prepared samples were characterized by X-ray diffraction (Bruker D8) and an impedance analyzer (Wayne Kerr 6500B) to determine the crystal structure and the frequency-dependent dielectric parameters, respectively.
Results and Discussion
Fig. 1 shows the X-ray diffraction patterns of samples A-F. The diffraction peaks corresponding to the (220), (311), (422), (511) and (440) indices reveal the cubic structure of ZnAl2O4, as reported in previous studies [20,21]. The presence of shoulder peaks located at 31.65° and 36.55° in the unmilled sample (A) indicates that the reactants are not completely incorporated with each other without milling, because ZnAl2O4 as well as ZnO and Al2O3 have characteristic peaks at the same 2θ positions [22,23].
The plots of crystallite size and dislocation density as a function of milling frequency are illustrated in Fig. 2. A variation of the crystallite size from 23 to 31 nm was observed as the milling frequency increased from 0 to 40 Hz, and the crystallite size decreased as the dislocation density increased [24]. This observation clearly points to a relationship between crystallite size and dislocation density, arising from the substitution of various anions and/or cations during milling. During milling, the crystallite size depends on the defects induced; inhomogeneous defects are therefore produced that change the crystallite size. In addition, neighboring grains have different energies, and these neighboring grains define the grain size. It is possible that milling induces such defects in the polycrystalline material by continuous crushing, which breaks the crystals into small crystallites and thus drives them towards the nano size. In the beginning, the lattice deformation due to milling is tolerable; continuous milling then produces plastic deformation in the lattice. As a result, the surface diffusion barrier and the grain boundary diffusion energy increase, which delays grain boundary diffusion. Therefore, the crystallite size is reduced at high milling frequency.
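For reference, the sketch below shows how the crystallite size and dislocation density of Fig. 2 are typically computed from an XRD peak. The shape factor K = 0.9, the Cu Kα wavelength and the peak data are common assumptions, not values stated in this paper.

```python
# Debye-Scherrer estimate: D = K * lambda / (beta * cos(theta)), with the
# dislocation density taken as delta = 1 / D^2.
import math

K = 0.9                 # Scherrer shape factor (assumed)
WAVELENGTH = 0.15406    # Cu K-alpha wavelength in nm (assumed)

def crystallite_size(two_theta_deg, fwhm_deg):
    theta = math.radians(two_theta_deg / 2.0)
    beta = math.radians(fwhm_deg)                      # peak broadening in radians
    return K * WAVELENGTH / (beta * math.cos(theta))   # size in nm

D = crystallite_size(two_theta_deg=36.8, fwhm_deg=0.35)  # (311) peak, hypothetical FWHM
delta = 1.0 / D**2                                        # dislocation density, nm^-2
print(f"D = {D:.1f} nm, delta = {delta:.5f} nm^-2")
```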
The frequency-dependent dielectric constants of samples A-F are displayed in Fig. 3. The dielectric constant decreases with increasing frequency and becomes almost constant at high frequency; the dispersion behavior is clearly independent of the applied field at higher frequencies. Several research groups [21,25,26] have reported the same results in ceramic materials. Such a dielectric dispersion trend can be explained on the basis of the Maxwell-Wagner polarization theory [27]. According to it, two layers, grains and grain boundaries, form an inhomogeneous dielectric specimen. The grain boundaries have low conductivity, whereas the grains are more conductive. Initially, in the low frequency region, the resistive grain boundaries play the leading role, compared with polarization, in determining the dielectric properties. The space charge carriers in the dielectric specimen need a finite time to align their axes with the alternating electric field, resulting in a decreased dielectric constant at high frequency. It was found that sample D, milled at 30 Hz, possessed the highest dielectric constant values. The factors determining the high dielectric constant are not only the grains and grain boundaries but also intrinsic factors such as chemical substitution, which modify the electronic structure and play a significant role. This chemical substitution may be produced by milling. The dielectric results are in good agreement with the XRD findings: the lattice is deformed by milling.

Fig. 5 illustrates the tangent loss (tanδ) as a function of frequency for samples A-F at room temperature. The variation of the dielectric constant and of the tangent loss with the frequency of the applied electric field exhibits the same trend. In the low frequency regime, the role of the grain boundaries dominates; therefore, high energy is required for electrons to hop between ions present at different sites. As the frequency increases, less energy is required for the hopping of electrons because of the low resistivity of the grains, so the tangent loss becomes minimal. The ac conductivity is plotted against frequency in Fig. 6 and increases with increasing frequency. In general, a dielectric material does not possess any conductivity due to the movement of free charges; conductivity in a dielectric arises only from the hopping of bound charges, which move between bound states at different sites. This hopping mechanism speeds up at high frequency, resulting in an increase in conductivity.
Fig. 7 presents the Nyquist plots of samples A-F. The impedance spectrum can be interpreted via Nyquist plots; it contains different mechanisms related to the grains, the grain boundaries, and the electrode contribution. When the impedance data form a straight line, the material is insulating in nature; the formation of a semicircle points to the conducting nature of the specimen, with contributions from the grains, the grain boundaries and the electrode [28]. It is found that the arc of sample D has the minimum radius, which shows that this sample is more conducting than the others; this observation is consistent with the dielectric results. In the present study, however, the spectrum contains a single semicircle, which represents the contribution of the grain interior (bulk) [29]. An equivalent circuit consisting of the grain resistance (R_g), the grain boundary resistance (R_gb), the grain capacitance (C_g) and the grain boundary capacitance (C_gb) was suggested, and the values of R_g, R_gb, C_g and C_gb were determined, as listed in Table 2. Samples A-F show different semicircle centers, indicating the variation of the grain resistance with milling; consequently, this confirms the non-Debye behavior of the material. Different relaxation times are suggested to exist because of the lattice distortion, an observation consistent with the XRD and dielectric results.

Fig. 8 presents the complex impedance data of sample D at different temperatures, varying from 30 to 260 °C. The values of R_g, R_gb, C_g and C_gb have been evaluated by fitting the suggested circuit. The specimen is most resistive at room temperature. It is observed that the arc of the semicircle shrinks as the temperature increases, indicating that the bulk resistance decreases at high temperature [29]. This gives the clue that the resistance of the grains is the main barrier to the movement of ions. Furthermore, the effective concentration of acceptors and oxygen vacancies increases with temperature, which is confirmed by thermally stimulated polarization.
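The impedance of the suggested equivalent circuit (two parallel R-C elements in series) can be sketched as follows; the component values are illustrative and not the fitted values of Table 2.

```python
# Equivalent-circuit impedance for Nyquist analysis: a grain element
# (R_g || C_g) in series with a grain-boundary element (R_gb || C_gb).
import numpy as np

def z_parallel_rc(r, c, omega):
    """Impedance of a resistor and capacitor in parallel: R / (1 + j*w*R*C)."""
    return r / (1 + 1j * omega * r * c)

def z_total(omega, r_g=2e4, c_g=1e-10, r_gb=8e4, c_gb=5e-9):
    return z_parallel_rc(r_g, c_g, omega) + z_parallel_rc(r_gb, c_gb, omega)

freq = np.logspace(1, 7, 200)            # 10 Hz to 10 MHz sweep
z = z_total(2 * np.pi * freq)
# A Nyquist plot is Re(Z) on the x-axis versus -Im(Z) on the y-axis; each
# R-C element contributes one semicircle of radius R/2.
print(z.real[:3], -z.imag[:3])
```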
Conclusions
ZnAl2O4 samples were prepared by a solid state reaction technique based on nano ball milling, with the milling frequency varied from 0 to 40 Hz. X-ray diffraction studies revealed a cubic structure. The structural parameters, such as the crystallite size, dislocation density, lattice constant and unit cell volume, were calculated, and the XRD analysis explained the lattice distortion due to milling. The frequency dependence of the dielectric constant, tangent loss and ac conductivity was well explained with the help of Koop's theory. Impedance spectroscopy highlighted the grain effects, using an equivalent circuit to describe the electrical mechanism inside the material, and revealed the non-Debye type relaxation existing in the material. The Nyquist plots at different temperatures suggested that the conduction mechanism in the material is governed by the grains, whose resistance decreases with increasing temperature.
Fair Division of Goods in the Shadow of Market Values
Inheritances, divorces or liquidations of companies require common assets to be divided among the entitled parties. Legal methods usually consider the market value of goods, while fair division theory takes into account the parties' preferences expressed as utilities. I combine the two practices to define a procedure that optimally allocates divisible goods with market values to people with easily elicited preferences. Imposing exact equality in the bundles' monetary values may produce unacceptable solutions. I drop the tight requirement and suggest a procedure in which the differences in the monetary values are explained in terms of satisfaction per monetary share as perceived by the agents. A robustness study shows the consequences of misspecification in the model parameters.
Introduction
Law and mathematics traditionally do not get along very well, but opportunities for interaction are becoming more and more frequent. In September 2017, the author joined the "Conflict Resolution through Equitable Algorithms" (CREA) consortium: a two-year project funded by the E-Justice programme of the European Union, which saw the collaboration of law, mathematics and computer science researchers, together with Stakeholder Associations, from eight countries in the European Union. One of the project's purposes was, in the words of the proposal, to use fair division theory "to introduce new mechanisms of dispute resolution as a tool for helping with legal procedures for lawyers, mediators and judges, with the objective of reaching an agreement between the parties". The present work describes in detail one of the procedures developed exclusively by the author in the context of this project, with the valuable feedback of the participants in the "De Aequa Divisione Workshop" within the project.
The theory of fair division dates back to the end of the Second World War. It was devised by a group of Polish mathematicians, Hugo Steinhaus, Bronisław Knaster and Stefan Banach, who used to meet in the Scottish Café in Lvov (see Knaster [19], Steinhaus [26] and [27]). For an account of the many results that followed, I refer to the books by Brams and Taylor [12] and Moulin [20], and to the review papers of Bouveret et al. [11], Moulin [21], Procaccia [24] and Thomson [29] and [30].
Many methods rely on point allocation. Quoting its Wikipedia page, the Adjusted Winner procedure, one of the most popular fair division procedures, proposed by Brams and Taylor [12], works as follows: "each player is given the list of goods and an equal number of points to distribute among them. He or she assigns a value to each good and submits it sealed to an arbiter." In an actual division of goods, however, the role of money and the market value of the disputed items cannot be ignored. This happens for several reasons: i) The law in many countries requires that the allocation of goods in an inheritance, a divorce or a company liquidation yields bundles of equal market value (or proportional to each agent's entitlement). No reference is usually made to the agents' satisfaction.
ii) Money itself plays a role in the division of goods. This can happen for several reasons: (a) Money can be used to level out the disparities arising in the distribution of the other goods, if they are indivisible, or if they are not perfectly divisible (because they contain indivisible "lumps"). In a similar manner, if a good has to be split among several agents, but the division is not practical or is unsatisfactory, an agent may buy the shares of the other agents involved and become the sole owner.
(b) In the search for a better solution, one or more goods may be sold to external agents, and the resulting cash would replace the item in the division. A model that makes explicit reference to this selling option, and to the comparison of the agents' utilities with a market value in the two-agent case is provided by Karp et al. [18].
(c) The previous case is just one of the many situations where money is one of the goods to be divided. Early examples of models in which money and a set of indivisible goods are allocated are given by Alkan et al. [2] and Bevia [7].
In all the examples above, agents need to compare the satisfaction derived from each good with that of money, and therefore they define a personal monetary evaluation for each contended good.
iii) Even when money is not explicitly evoked, asking the agents to assign a monetary value to each good is a reasonable method for eliciting their preferences.
How can we take account of the agents' satisfaction? A first answer has been proposed by Bellucci and Zeleznikow [5], with the Family Winner procedure, which was later perfected by Bellucci in [6] and by Abrahams et al. in [1] as the Asset Divider procedure. The procedure combines the goods' market values and ratings, which are then turned into a fixed number of points allocated among the goods, in the spirit of Brams and Taylor's Adjusted Winner procedure. The output is an allocation that is equal in the market values of the bundles. In the cited references, no formal code is given, but only a verbal description of the procedure. The Asset Divider allocates one good at a time, starting from the goods with highest number of points for each side, and after each transfer modifies the preferences of the goods still unassigned. It proceeds until the dollar amount of the bundles does not exceed the percentage split indicated by the mediators. Goods are indivisible, but the procedure may indicate a money transfer to make the division fairer. The optimality properties of the suggested allocation are not examined in detail as they do not fall within the scope of the work. As a consequence of this mathematical indeterminacy, I note from the examples that illustrate how the Asset Divider procedure works that, while the bundles of the two agents are equal in their monetary value, they may differ in terms of utility, with no explanation provided. The procedure deals with the two-agent case, with no easy generalization for a larger group of agents.
Another contribution that deals with the two-agent case is the above mentioned paper by Karp et al. [18]. The authors examine the boost in terms of efficiency that results from introducing the option of selling some of the contended goods and distributing the resulting cash -instead of allocating that good to one of the two parties. Utilities and monetary values are measured on the same scale. Therefore, the utility is the willingness to pay for a given item, while the selling price is defined as a constant fraction of the minimum between the two agents' evaluations. The most recent addition is provided by Bogomolnaia and Moulin [10]. Here, two new procedures are designed to allocate indivisible goods with cash transfers. In Sell&Buy, agents bid for the role of Seller or Buyer: with two agents, the smallest bid defines the Seller who then charges a price constrained only by their winning bid. In Divide&Choose, agents bid for the role of Divider, then everyone bids on the shares of the Divider's partition.
In the European project that financed the present work, another procedure was designed in which agents are asked to place bids from a virtual personal budget. The agent's utility is set equal to the bid and the Nash/Competitive solution is computed. More details about this other procedure can be found in [16].
A New Proposal
I recognize the importance of dealing with two distinct sources of information: the market values of goods and the agents' preferences, defined here as the willingness to receive a good. The approach that I adopt, however, is quite different from that of Bellucci [6]. First of all, I consider divisible goods, instead of indivisible ones, and I devise a procedure that works for any number of agents.
More importantly, though, the goal is to overcome the difficulties highlighted above, and frame the problem against a solid mathematical background. The first challenge is to find a proper way to model the agents' preferences when both the market values and the agents' likes and dislikes must be taken into account. There is a consolidated stream of literature on how to represent preferences for resource allocation problems. Preferences can be cardinal (i.e. utilities) or ordinal (rankings, typically), they may regard single goods valued independently, or they may refer to bundles -thus underlining the dependencies that may occur upon the goods' reception. A recent review of the different approaches is given by Bouveret et al. [11].
A general underlying principle holds: the quality of information behind each of these methods varies, and the higher the quality, the more difficult it is to elicit the corresponding piece of information from the agents. For instance, it is easier for the agents to evaluate single goods, rather than all the possible bundles or a large part of them, even if this means losing some information about the dependencies among the goods. I stick to the simpler settings and assume preferences to be separable or additive. With regard to single goods, the market value is a cardinal piece of information with high informative value: assuming a positive dependence between the agents' preferences and the goods' market value, I propose a framework where personal taste affects the market value by modifying it. The value distortions take place by means of a simple rating feedback: agents are asked to rate each good on a fixed range with an odd number of levels, typically 5. The median rating will make the market value and the utility coincide, while higher (lower, resp.) ratings will increase (decrease, resp.) the utility by a constant factor applied a number of times given by the distance from the median rating.
Choosing a small range of levels (5, or even 3) may lead to a utility which is an approximation of the exact measure for the agent's fondness for a given item. The emphasis is, however, on the simplicity of the elicitation process. Once the agents communicate their ratings for each good, their task is completed. Moreover, even if the introduction of a discrete rating range may suggest an intermediate level between cardinal and ordinal preferences, the quality of information provided by market values is too valuable to be downgraded as ordinal, and the market value modification mechanism is considered instead. Once the market value of goods and the agents' preferences are put together into a single utility function, linear programming is used to return the solution of an optimization problem, in stark contrast to the Asset Divider's approach.
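As an illustration, the rating-to-utility mechanism can be sketched in a few lines. The multiplicative factor of 1.5 and the item values are assumptions made for the example, since the text leaves the factor as a design parameter.

```python
# On a 5-level scale the median rating (3) leaves the market value unchanged;
# each level above (below) it multiplies (divides) the value by the factor.
def utility(market_value, rating, median=3, factor=1.5):
    return market_value * factor ** (rating - median)

market_values = {"house": 300_000, "car": 20_000, "boat": 15_000}
ratings = {"house": 4, "car": 3, "boat": 1}           # agent i's feedback
u_i = {a: utility(market_values[a], ratings[a]) for a in market_values}
print(u_i)   # house boosted, car at market value, boat strongly discounted
```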
By setting an optimality criterion properly, I address (and overcome) a difficulty arising from the fact that optimally fair divisions providing equal (or proportional) market shares may fail to exist, while imposing both criteria at once may not only produce outcomes that are less acceptable for the agents than those resulting from arrangements with unequal market shares, but may also return too many split items. The proposed solution drops the requirement of exactly equal (or proportional) market value shares, and fully justifies the differences that may arise in terms of different average ratings of the goods per monetary unit received: if an agent receives a larger market value share than another agent, the two will receive goods with different average satisfaction per monetary share (smaller for the agent who got the larger market share). A rigorous framework will be provided to measure those discrepancies. As an additional benefit, the procedure always guarantees the minimal number of split items, which is the number of agents involved in the division minus 1.
Most of the known available procedures, including the Adjusted Winner and the Asset Divider procedures, enjoy the property of scale invariance: if all the ratings by a player are multiplied by a constant, the global outcome is unchanged. While sound in principle, this property has little cogency in the many situations where a "scaled" profile is not suitable. Consider the case where utility is based on a discrete scale, such as when an agent allocates an integer number of points to each good. In this situation it is often impossible to obtain a new profile from an old one by a simple scalar multiplication (especially when the scalar is less than 1), and hence impossible to compare different profiles by invoking scale invariance. I consider a different property, translation invariance, in which the outcome is unchanged when all ratings are increased or decreased by the same amount. This property holds whenever the adopted utility model is applied to a scale invariant solution, and it has proved easy to convey to audiences of law researchers and practitioners.
Notation and Assumptions
I consider a (finite) set $G$ of $g$ valuable items (which may also be referred to as goods) to be divided among a (finite) set $N$ of $n$ agents. The share of a good $a$ assigned to agent $i$ is denoted as $z_{ia} \in [0, 1]$. For each $i \in N$, the vector $z_i = (z_{i1}, \ldots, z_{ig})^T$ denotes agent $i$'s allocation. The entire allocation is grouped as a matrix $z = (z_1, \ldots, z_n)$. We assume that only one unit per good is distributed and that, since all goods have a market value, only allocations that assign the entire good to one or more agents are considered (this is often referred to as the "non-wastefulness" hypothesis). These allocations are called feasible and the set of such allocations is denoted as
$$\Phi(N, G) = \Big\{ z : z_{ia} \in [0,1],\ \sum_{i \in N} z_{ia} = 1 \text{ for every } a \in G \Big\}.$$
Typically, no pair of agents values the same good equally. The degree of appreciation for good $a \in G$ by agent $i \in N$ is described by a non-negative number $u_{ia}$, conveniently arranged in vectors $u_i = (u_{i1}, \ldots, u_{ig})^T \in \mathbb{R}^G_+$ and then in a matrix $u = (u_1, \ldots, u_n)$. A division problem is then fully characterized by the triplet $Q = (N, G, u)$. I now assume that utilities are additive (the utility of receiving a bundle of goods equals the sum of the utilities of the single goods) and linear (the utility of receiving a fraction of a good equals the same fraction of the utility of receiving the good in its entirety). Consequently, an allocation $z \in \Phi(N, G)$ will give agent $i$ a utility of
$$U_i(z) = \sum_{a \in G} u_{ia} z_{ia}.$$
Let $U(z) = (U_1(z), \ldots, U_n(z))^T$ be the utility profile corresponding to an allocation and let $IPS(N, G, u)$ (or simply $IPS$ if the context is clear) be the set of utility profiles corresponding to feasible allocations. Notice that the utility of receiving the set of all goods need not be the same for all the agents, i.e. utilities are not normalized. All agents are usually assumed to have the same importance in the allocation. In practical situations, agents may have different weights that measure their importance. Consider, for instance, the different degrees of kinship in an inheritance case, or the different contributions to a marriage that might induce a judge to assign a larger share of the joint assets to one of the ex-spouses. Let $w = (w_1, \ldots, w_n)^T$ be the vector of agents' weights, with $w_i > 0$ for every $i \in N$ and $\sum_{i \in N} w_i = 1$. When weights are not mentioned, it is assumed that $w_i = n^{-1}$ for every $i \in N$.
Measuring fairness
What makes an allocation fair? The question does not come with an easy answer, and this is what makes the whole topic interesting. The proposed allocation often stands out as a solution for an objective function which measures the agents' global satisfaction. Among the many proposals, two functions convey an idea of fairness particularly well, and have resurfaced over and over again in the specific literature for theory and applications:

• The Egalitarian solution (which is derived from the Egalitarian Equivalent allocation by Pazner and Schmeidler [23]) defined as
$$z^e \in \arg\max_{z \in \Phi(N,G)} \min_{i \in N} \frac{\bar U_i(z)}{w_i},$$
where $\bar U_i(z) = U_i(z) / \sum_{a \in G} u_{ia}$ is the normalized utility for agent $i \in N$. The solution name comes from the fact that, under mild conditions such as the requirement that every good has some positive value for every agent, i.e. $u_{ia} > 0$ for every $i \in N$ and $a \in G$, it turns out (see Corollary 5.8 in Dall'Aglio [13]) that the solution guarantees an equal and normalized utility for all agents, namely
$$\frac{\bar U_i(z^e)}{w_i} = \frac{\bar U_h(z^e)}{w_h} \qquad \text{for every } i, h \in N. \tag{2}$$

• The Nash/Competitive solution (after Nash [22]) defined as
$$z^c \in \arg\max_{z \in \Phi(N,G)} \sum_{i \in N} w_i \log U_i(z).$$
Nash introduced this solution in the context of bargaining problems. The solution is also referred to as "competitive" because it can be obtained as a competitive equilibrium in the exchange economy where each agent is endowed with an amount of money proportional to the agent's weight (see Bogomolnaia et al. [8] and [9] for an updated review of these results).
As may be expected, neither criterion prevails over the other. First of all, both solutions satisfy the following invariance by scale property: suppose that the utility profile of each agent is multiplied (scaled) by some constant $\lambda_i > 0$ for every $i \in N$, i.e., $u'_{ia} = \lambda_i u_{ia}$ for every $i \in N$ and $a \in G$; then $Q = (N, G, u)$ and $Q' = (N, G, u')$ yield the same solution sets.
In the case of the Egalitarian solution, this is true because normalized utilities appear in the objective function. For the Nash/Competitive one, $U'_i(z) = \lambda_i U_i(z)$ and thus, since a logarithm appears in the objective function, the two problems $Q$ and $Q'$ simply differ by a constant and are equivalent.
The Egalitarian and the Nash/Competitive solutions share other important properties, such as:

• Fair Share Guarantee: everyone is guaranteed at least their fair share of all of the assets. In formula:
$$\bar U_i(z) \ge w_i \qquad \text{for every } i \in N.$$

• Efficiency: the allocation implements an efficient utility profile, a vector $U$ of utility values that is not Pareto dominated, i.e. it cannot be improved upon by another allocation that assigns greater utility to at least one of the agents. In formula: there is no $z' \in \Phi(N, G)$ with $U_i(z') \ge U_i(z)$ for every $i \in N$ and $U_h(z') > U_h(z)$ for some $h \in N$.

The two solutions differ in the fulfilment of other important properties: as already stated, the Egalitarian solution, under mild conditions, satisfies (2), while the Nash/Competitive one typically fails the same test. Conversely, the latter always satisfies
$$U_i(z_i) \ge U_i(z_h) \qquad \text{for every } i, h \in N.$$
In other words, every agent values the received bundle of items at least as much as the bundles assigned to the other agents.
The Egalitarian solution satisfies the same property for the two-agent case, but may fail to do so when the number of agents increases to 3 or more (see p.84 in Brams and Taylor [12] and Theorem 3.10 in Dall'Aglio and Hill [15]).
The comparison may continue and I refer to Bogomolnaia et al. [8], [9] and Moulin [21] for a recent and thorough comparison of the two solutions in the linear setting adopted here, and in more general frameworks. My choice for the procedure that I am going to illustrate will not be based on an a priori opinion, but will be justified by the properties that it guarantees, given the assumptions and the data available for any instance.
A Procedure with Market Values and Preferences
The utility of any good for any agent is expressed as a positive number.
Since in the present model, goods have a market value, their utility should be comparable to this value. Agents' inclinations may make the goods more (or less) valuable than their face value, but, in any case, the good's utility for any agent should not be too distant from the good's monetary value, because valuable goods can usually be traded outside the division context. Under these assumptions, it is reasonable to put bounds on the utility of a good by defining an interval of reasonable values that includes the market value.
Economics models take great care in the mathematical description of the agents' utilities, but often overlook the process of eliciting the agents' preferences inside the model. Eliciting preferences typically requires a careful balance between precision in the specification of the agents' preferences and simplicity in the definition of rules that can be understood by a vast audience of non-specialists, with little or no background in mathematics or economics. The method that I propose supports the latter feature, while maintaining, under some proper assumptions, a sufficient degree of the former. It uses an old intuitive method, which is experiencing a resurgence in popularity: a rating system induced by the repetition of a given symbol, typically a star. The method dates back to an 1820 guidebook by Mariana Starke [28]. Since then, it has been used by critics to grade artworks (books, movies, theatrical performances, and so on) or by travellers or institutions to evaluate facilities (hotels, restaurants, and so on). More recently, it has become a popular method used by major internet goods and services retailers (such as Amazon, eBay or TripAdvisor) to let customers provide feedback on their consumption experience for other customers to make more informed decisions. Its popularity should put every potential user of the procedure at ease. The rating system selects a finite number of values from the continuous range of plausible choices.
Each rating is converted into a utility by means of the power rating formula
$$u_{ia} = m_a K^{\,r_{ia} - (q+1)}, \tag{4}$$
where $m_a$ is the market value of good $a$, $r_{ia} \in \{1, 2, \ldots, 2q+1\}$ is the rating by agent $i$ of good $a$ and $K > 1$ is the constant multiplicative factor.

Note that the range of plausible values for a good's utility is given by the interval $[K^{-q} m_a, K^q m_a]$, and $2q+1$ values are selected in the interval. The median rating $q+1$ will yield $u_{ia} = m_a$, while higher (lower, resp.) ratings will increase (decrease, resp.) the utility by the constant factor $K$ ($K^{-1}$, resp.), applied a number of times given by the distance from the median rating.
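To make the formula concrete, here is a minimal sketch in Python (the function name and the example values are mine, for illustration only):

```python
def dpr_utility(m_a: float, r_ia: int, K: float, q: int = 2) -> float:
    """Power rating utility: the market value m_a is scaled by K for
    every star above the median rating q+1, and by 1/K for every star
    below it."""
    return m_a * K ** (r_ia - (q + 1))

# With K = 1.1 on a 5-level scale (q = 2):
u_top = dpr_utility(100.0, 5, K=1.1)   # 100 * 1.1**2  = 121.0
u_med = dpr_utility(100.0, 3, K=1.1)   # 100 * 1.1**0  = 100.0
u_low = dpr_utility(100.0, 1, K=1.1)   # 100 * 1.1**-2 ≈ 82.64
```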
The discretization process defines a finite grid of values that the utility may assume. Moving from a continuous range to a discrete approximation may cause a distortion of the agents' utilities. Clearly, the grid may be made dense in the real numbers by means of a proper choice of the parameters $K$ and $q$, but this would aggravate the elicitation process. For instance, the average user may be unable to distinguish between ratings 22 and 23 in a range of 87 levels. The robustness study that follows supports the use of a small grid. A rating system with 5 levels, i.e. $q = 2$, is rich enough for most purposes: it encompasses the 3-level system by considering a restricted grid in which only ratings of 5 stars, 3 stars or 1 star are permitted. I will use this range for all the examples and the applications that follow.
Giving the maximum rating to a good increases the chances of receiving that good, but it does not guarantee receiving any share of it. This happens because other agents may give maximum ratings too and, more generally, because fairness requirements may mean allocating the goods in a different way.
When agents rate the goods, a simple invariance principle could guide the agents in revealing their genuine preferences. If agents understand it, they should assign high ratings only to the goods they really care about, rather than assign a lot of stars in the vain hope of receiving a richer share than that of the others. Scale invariance, the most popular among these notions, is hard to put into practice. Each agent should be able to scale up or down a profile of preferences by a constant factor in order to be able to compare different sets of utilities. In the case of a discrete range of ratings and, a fortiori, with such a limited number of levels, the principle becomes impossible to implement.
Instead of scale invariance, the DPR model applied to a scale invariant solution satisfies the following: Translation invariance: adding one star to, or removing one star from, the ratings of all of the goods will not change the outcome.
The principle applies to fractions of stars as well, if such ratings are allowed by the system. To exemplify things in a 5-level rating scale, according to the translation invariance principle, the profiles in Table 1 yield the same outcome.
Considering a more extreme case, all of the profiles described in Table 2 indicate indifference towards the goods. In other words, adding too many stars will result in an expression of indifference among goods. What counts, instead, is a correct profiling, where the agents are able to indicate which goods they really care about.
Choosing the right objective
When monetary values are explicitly defined, the worth of what each player gets cannot be ignored.

Definition 2. The market or monetary value $\mu_i(z)$ of the bundle received by agent $i \in N$ is defined as
$$\mu_i(z) = \sum_{a \in G} m_a z_{ia}.$$

In the present model, I look for an allocation with the following properties: (i) it is fair according to an established criterion and (ii) it provides the agents with bundles of equal market values (or proportional to the entitlement shares). According to the short review in Section 2.1, the Egalitarian or the Nash/Competitive solutions are good candidates for property (i), while property (ii) can be taken care of by considering a restricted class of allocations.
An allocation $z \in \Phi(N, G)$ is Proportional in the Market Values (PMV) if $\mu_i(z) = w_i \sum_{a \in G} m_a$ for every $i \in N$. Let $\Phi_{PMV}(N, G, m, w)$ denote the set of PMV allocations for the allocation of divisible goods in $G$ with market prices $m$ to agents in $N$ with entitlements $w$.
It is therefore natural to consider the following optimal allocations. A Proportional Egalitarian solution $z^e_{PMV}$ is defined as
$$z^e_{PMV} \in \arg\max_{z \in \Phi_{PMV}(N,G,m,w)} \min_{i \in N} \frac{\bar U_i(z)}{w_i}.$$
Similarly, a Proportional Nash/Competitive solution $z^c_{PMV}$ is defined as
$$z^c_{PMV} \in \arg\max_{z \in \Phi_{PMV}(N,G,m,w)} \sum_{i \in N} w_i \log U_i(z).$$
The following simple example shows why such a natural proposal turns out to be a bad idea.

Example 1. Consider two agents of equal importance, I and II, who divide 2 items, A and B, of equal market value between themselves. The items' market values, together with the agents' ratings, are shown in Table 3, with $K > 1$.
To compute the (unconstrained) Egalitarian allocation, note that the allocation is efficient, and equality in the utility is reached only when item A is assigned to agent I, while B is split between the two. Therefore, the allocation must be sought in the set $\Phi_B$ of allocations in which agent I receives item A together with a fraction $1-b$ of item B, while agent II receives the remaining fraction $b$ of item B. Some algebra is required to show that the agents get equal utility when
$$b^e = \frac{K^3+1}{K(2K^2-K+1)}.$$
Therefore
$$z^e = \begin{pmatrix} 1 & 1-b^e \\ 0 & b^e \end{pmatrix}, \tag{7}$$
with common normalized utility
$$\bar U_1(z^e) = \bar U_2(z^e) = \frac{K^2}{2K^2-K+1}. \tag{8}$$
It can be shown that splitting item A yields a product of utilities lower than that obtained by splitting item B. Therefore, the unconstrained Nash/Competitive allocation must also be sought in $\Phi_B$. The product is maximal when $b^c = \frac{1+K}{2K}$. Therefore
$$z^c = \begin{pmatrix} 1 & 1-b^c \\ 0 & b^c \end{pmatrix}.$$
The PMV allocations when $w = (1/2, 1/2)^T$ and $m = (1, 1)^T$ are those of the form
$$z(a) = \begin{pmatrix} a & 1-a \\ 1-a & a \end{pmatrix}, \qquad a \in [0,1].$$
Equality between the utilities is obtained when $a = 1/2$. Thus, $z^e_{PMV} = z(1/2)$. This solution is strongly dominated by the unconstrained Egalitarian solution - and it is therefore inefficient.
Moving to the Nash solution among PMV allocations, the product $\bar U_1(z(a))\,\bar U_2(z(a))$ is quadratic in $a$ and achieves its maximum at $\frac{K^3+K^2+K-1}{2(K^3-1)}$. Since $a$ is constrained in $[0, 1]$, the maximal product is achieved by
$$a^c = \min\left\{1,\ \frac{K^3+K^2+K-1}{2(K^3-1)}\right\},$$
so that $z^c_{PMV} = z(a^c)$. Computing the corresponding normalized utilities, it can easily be verified that $\bar U_1(z^c_{PMV}) < 1/2$ for any $K > 1$ and, therefore, the Proportional Nash/Competitive solution fails the Fair Share Guarantee. Figure 1 shows the $IPS$ with normalized utilities in the example when $K = 1.2$.
The above example shows an instance where the Proportional Egalitarian or the Proportional Nash/Competitive solutions do not meet the minimal requirements for being considered acceptable. In fact, the Proportional Egalitarian solution is inefficient, while the Proportional Nash/Competitive one does not satisfy the Fair Share Guarantee.
There is another important disadvantage that arises when equal market values are imposed: whether optimal or not, the solution may require a larger number of items to be split. Proposing a solution with many split items poses many practical challenges: when an item cannot actually be split, an arrangement has to be found among the agents who receive a portion of the good to either manage the good together or find a satisfactory resolution. The larger the number of split items, the more difficult it is to achieve a peaceful outcome, and the concern is especially important if the total number of items is small. In Example 1, all the allocations which are proportional in the market values and satisfy the Fair Share Guarantee require both items to be split. This fact must be compared with a recent result regarding the division of homogeneous and divisible goods. Lemma 2.5 in Sandomirskiy and Segal-Halevi [25] states that any Pareto efficient profile deriving from the division of goods among $n$ agents, including those corresponding to the Egalitarian or the Nash/Competitive solutions, can be obtained by splitting at most $n - 1$ goods. Therefore, the pursuit of an allocation that is Proportional in the Market Values and satisfies the Fair Share Guarantee may conflict with a further goal: keeping the number of split items to a minimum.
The Egalitarian solution for the DPR utility model
Example 1 provides strong arguments for rejecting any procedure aimed at satisfying fairness and perfect equality in market values. Instead of imposing new requirements for the solution, I return to the unconstrained model. The well-known solutions (either the Egalitarian or the Nash/Competitive) will not, in general, return bundles of equal market value -as pointed out by Example 1. In my proposal, I will let the bundles diverge in their market value, but I will find a key to explain those differences. I begin by applying the known solutions to an example. For the computations in this example and the ones that follow, I used the Mathematica 12.3 software.
Example 2. Consider a situation with 3 goods (A, B, C) and 4 agents (I, II, III, IV) that evaluate the goods on a 5-level rating scale. The goods' values and the agents' ratings are given in Table 4. For values of $K$ close to 1, the Egalitarian and the Nash/Competitive allocations do not differ too much in their structure, since the goods or their fractions are allocated to the same agents. For instance, if we set $K = 1.1$, we obtain the Egalitarian and the Nash/Competitive allocations, denoted as $z^e$ and $z^c$, respectively. As expected, the monetary values of the bundles received by the agents differ in the two solutions. In the Egalitarian solution, these values are all different, while three out of four values coincide in the Nash/Competitive solution, as shown in Table 5. The monetary values of the agents' bundles can also be tracked as functions of $K$: for values of $K$ larger than or equal to a certain threshold, the Nash/Competitive partition remains constant. This fact explains the flat part of the graph in Figure 2b.
In what follows, I am going to provide an interpretative key to the Egalitarian solution: differences in the market value of the bundles will be compensated by the average satisfaction that each fraction of monetary value associated with the goods will provide to the agent. For this reason, I propose an allocation which is Egalitarian in the agents' normalized utilities. In making this choice, I do not claim that the Egalitarian solution is better than the Nash/Competitive one. I simply could not find a similar justification for the market values attributed by the Nash/Competitive solution. In Example 2, why does the market value of agent I differ from that of the other three agents, and why do the values for agents II, III and IV coincide?
I now describe a procedure for obtaining the Egalitarian solution.
Egalitarian solution for the DPR utility model

i. A mediator defines:
 a. the number of levels in the ratings;
 b. the market value for each good;
 c. the rate of appreciation for each additional star in the rating (the parameter $K > 1$ in the model).
ii. Each agent independently expresses a personal appreciation of each good at its market value using the defined rating scale.
iii. The Egalitarian solution is computed.
The Egalitarian allocation for the DPR utility model can be computed by means of the following linear programming problem:
$$\begin{aligned}
\max\ & t \\
\text{s.t.}\ & \sum_{a \in G} \bar u_{ia} z_{ia} = w_i\, t && \text{for every } i \in N,\\
& \sum_{i \in N} z_{ia} = 1 && \text{for every } a \in G,\\
& z_{ia} \ge 0 && \text{for every } i \in N,\ a \in G,
\end{aligned} \tag{9}$$
where $\bar u_{ia} = u_{ia} / \sum_{b \in G} u_{ib}$ denotes agent $i$'s normalized utility for good $a$.
The formulation arises from the standard equivalence between maximizing $\min_{i \in N} \bar U_i(z)/w_i$ and maximizing $t$ subject to $\bar U_i(z) \ge w_i t$ for every $i \in N$, and from a rescaling of the inequalities in the second optimization problem. Moreover, I do not need to consider the median rating, because the utilities are normalized and, while an inequality would be expected for each of the first $n$ constraints, each of them can be replaced by an equality, since (2) holds in this context. The feasible set is non-empty because the variables $z_{ia} = w_i$, for all $a \in G$ and $i \in N$, together with $t = 1$, satisfy all the constraints. I now illustrate how the linear program is applied to one of the previous examples.
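For readers who prefer code, the following Python sketch solves (9) with scipy's linprog (the paper's computations were carried out in Mathematica; this re-implementation and all its names are mine, and the example data below are placeholders, since Table 4 is not reproduced here):

```python
import numpy as np
from scipy.optimize import linprog

def egalitarian(m, r, w, K, q=2):
    """Egalitarian allocation under the DPR utility model.
    m: market values (g,), r: ratings (n,g), w: weights (n,).
    Returns the allocation matrix z (n,g) and the common level t."""
    n, g = r.shape
    u = m * K ** (r - (q + 1))                  # DPR utilities u[i,a]
    ubar = u / u.sum(axis=1, keepdims=True)     # normalized utilities
    c = np.zeros(n * g + 1)
    c[-1] = -1.0                                # maximize t <-> minimize -t
    A_eq = np.zeros((n + g, n * g + 1))
    b_eq = np.zeros(n + g)
    for i in range(n):                          # sum_a ubar[i,a] z[i,a] = w[i] t
        A_eq[i, i * g:(i + 1) * g] = ubar[i]
        A_eq[i, -1] = -w[i]
    for a in range(g):                          # each good is fully assigned
        A_eq[n + a, a:n * g:g] = 1.0
        b_eq[n + a] = 1.0
    res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0.0, 1.0)] * (n * g) + [(0.0, None)])
    return res.x[:-1].reshape(n, g), res.x[-1]

m = np.array([300.0, 200.0, 100.0])             # placeholder market values
r = np.array([[5, 3, 1], [2, 4, 3],             # placeholder ratings
              [3, 3, 5], [1, 5, 2]])
z, t = egalitarian(m, r, np.full(4, 0.25), K=1.1)
```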
Example 1 (continuing from p. 13). The linear program for this example simplifies considerably. It can easily be verified that the solution (7), together with $t = 2K^2/(2K^2 - K + 1)$, is feasible. The solution verifies the primal-dual complementarity conditions and is therefore optimal.
Wilson [34], as reported by Sandomirskiy and Segal-Halevi [25], proved the existence of an egalitarian allocation of goods with $n - 1$ sharings (i.e. split items). The result is unpublished and, for ease of reference, I prove a statement that works for the linear program defined in (9).
Proposition 1 (Wilson [34]). An optimal basic solution for (9) identifies a solution with no more than n − 1 split items.
Remember that in a basic solution for a linear program, only a restricted group of basic variables equalling the number of constraints can be nonzero, and the simplex method moves from one basic solution to an adjacent one that differs from the first by only one variable, until the optimal allocation is reached.
Proof. Problem (9) has $ng + 1$ variables and each basic solution has no more than $n + g$ nonzero variables. In the optimal solution, the variable $t$ is among the basic variables, because its value equals the (nonzero) common utility-to-weight ratio, so no more than $n + g - 1$ variables in the optimal allocation $z^e$ can be greater than zero. Since each good $a \in G$ has to be assigned, for every $a$ there exists $i \in N$ such that $z_{ia}$ is a positive basic variable. The number of split items therefore cannot exceed the number of positive basic variables in $z$ minus the number of goods, and there cannot be more than $n - 1$ split items in the optimal solution.
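As a small numerical companion to Proposition 1, the split items of a computed allocation can be counted as follows (my helper; the tolerance guards against LP rounding noise). On basic optimal solutions of (9), the count should not exceed $n - 1$:

```python
import numpy as np

def n_split(z: np.ndarray, tol: float = 1e-9) -> int:
    """Number of goods whose column in z has more than one positive share."""
    return int(((z > tol).sum(axis=0) > 1).sum())
```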
In principle, equalizing the agents' utilities may not seem the wisest choice because it requires a comparison of interpersonal utilities, which is a highly debated and questioned principle. In the present situation, however, agents' utilities are magnifications or contractions of the goods' market value and an allocation with equal (normalized) agents' utilities yields bundles of approximately equal market values for everyone, exact equality being problematic. A more detailed analysis reveals that differences in the bundles' market values can be explained in terms of the agents' satisfaction relative to the fraction of monetary value received. Some simple definitions are needed in order to provide a formal statement of this fact.
Definition 5. Given a feasible allocation $z$, the monetary share of agent $i$ is given by the ratio $\mu_i(z)/M$, where $M = \sum_{a \in G} m_a$, while the monetary share to entitlement (MSE) ratio for the same agent is given by
$$\sigma_i(z) = \frac{\mu_i(z)}{w_i M}.$$
The MSE ratio compares the monetary share with the entitlement, and a value of 1 denotes an exact correspondence between the market value received and the entitlement in the division.

Definition 6. The average utility per monetary share, or in short, the utility-per-money (UM) index of the feasible allocation $z$ for agent $i$, $\nu_i(z)$, is defined as the ratio between the agent's normalized utility and the corresponding monetary share:
$$\nu_i(z) = \frac{\bar U_i(z)}{\mu_i(z)/M}.$$
The index indicates how much -on average -an amount of a good worth 1% of the total market value of goods contributes, as a percentage, to the normalized utility of an agent. If an agent receives all goods, their UM is 1, and this is a benchmark for evaluating any other allocation: the higher the UM of an agent, the happier they are with each percentage of monetary share received. Note that none of the indices introduced in the previous two definitions depend on the choice of currency used, since they are ratios of relative quantities.
If $z^*$ is an Egalitarian allocation, then, for any two agents $i, h \in N$,
$$\nu_i(z^*)\,\sigma_i(z^*) = \nu_h(z^*)\,\sigma_h(z^*). \tag{12}$$
The simple formula illustrates the inverse relationship between UM and MSE ratios: if $\nu_i(z^*) > \nu_h(z^*)$, then $\sigma_i(z^*) < \sigma_h(z^*)$ (and conversely). Most of all, it shows that the agents' UM indices and MSE ratios are listed in the opposite order in an Egalitarian allocation. A monotone transformation of the UM index describes the quality of the allocated goods in terms of the agents' ratings, but some preliminary work is needed. In order to make the agents' ratings comparable, they have to be centered around a reference value.
Definition 7. The central rating for agent $i \in N$, $\bar r_i$, is defined as
$$\bar r_i = \log_K\left(\frac{1}{M}\sum_{a \in G} m_a K^{r_{ia}}\right),$$
i.e. the uniform rating that, applied to all the goods, would yield the same total utility as the agent's actual ratings. The rating difference from the central rating of a good $a \in G$ for agent $i \in N$ is defined by $r_{ia} - \bar r_i$.
The difference will be a gain over the central rating, if positive - or a loss, if negative. Note that the central rating satisfies the following identities:
$$\sum_{a \in G} m_a K^{r_{ia} - \bar r_i} = M, \qquad \sum_{a \in G} u_{ia} = K^{\bar r_i - (q+1)} M. \tag{14}$$
Also note that the magnitude of $\bar r_i$ gives only an indication of the rating levels: an agent who assigns high ratings to valuable goods will have a higher central rating than that of one who assigns lower ratings, or of another who assigns high ratings to less valuable items. The difference, however, will not mean that an agent has rights to a larger or smaller share in the division procedure.
The following result gives an alternative characterization of the UM index: it is the average of the personal magnifying factors for the goods, weighted with the monetary values of the portions of goods received by the agent.
Proposition 2. For any feasible $z$, we have
$$\nu_i(z) = \frac{1}{\mu_i(z)} \sum_{a \in G} m_a z_{ia} K^{r_{ia} - \bar r_i}.$$
Proof. Considering a rating scale with $2q + 1$ steps, we have
$$\nu_i(z) = \frac{\sum_{a \in G} m_a z_{ia} K^{r_{ia}-(q+1)}}{\sum_{a \in G} m_a K^{r_{ia}-(q+1)}} \cdot \frac{M}{\mu_i(z)} = \frac{\sum_{a \in G} m_a z_{ia} K^{r_{ia}-\bar r_i}}{\sum_{a \in G} m_a K^{r_{ia}-\bar r_i}} \cdot \frac{M}{\mu_i(z)} = \frac{1}{\mu_i(z)}\sum_{a \in G} m_a z_{ia} K^{r_{ia}-\bar r_i}.$$
For the first equality, I apply the definition of $\nu_i$; for the second, I multiply and divide by $K^{q+1-\bar r_i}$; and for the third, I apply (14) and the definition of $\mu_i(z)$.
The previous result makes it possible to characterize the quality of the bundles in terms of ratings.
Definition 8. The average difference from the central rating per monetary share, or, in short, the rating difference (RD) index of the feasible allocation $z$ for agent $i$ is defined as:
$$\rho_i(z) = \log_K \nu_i(z).$$
Following Proposition 2, the RD index is an average of the goods' standardized ratings $r_{ia} - \bar r_i$, weighted with the monetary values of the portions of goods received by the agent. Recall that the RD index for the whole set of goods is 0 and, consequently, a random allocation of goods (and fractions thereof) worth a fixed amount of market value also yields a null average. The index $\rho_i(z)$ will therefore indicate the average difference between the agent's rating obtained from the allocation $z$ and that obtained from a random bundle of the same value.
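The diagnostic indices of Definitions 5-8 are straightforward to compute. The sketch below (my code, consistent with the formulas above) returns them for a given allocation; in an Egalitarian allocation, the products nu * sigma should be constant across agents, as stated in (12):

```python
import numpy as np

def division_indices(z, m, r, w, K, q=2):
    """MSE ratios, UM indices, RD indices and central ratings."""
    M = m.sum()
    mu = z @ m                                   # monetary value of each bundle
    sigma = mu / (w * M)                         # MSE ratio (Definition 5)
    u = m * K ** (r - (q + 1))                   # DPR utilities
    ubar = (u * z).sum(axis=1) / u.sum(axis=1)   # normalized utilities
    nu = ubar * M / mu                           # UM index (Definition 6)
    rbar = np.log((m * K ** r).sum(axis=1) / M) / np.log(K)  # central ratings
    rho = np.log(nu) / np.log(K)                 # RD index (Definition 8)
    return sigma, nu, rho, rbar
```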
Since the RD index is a monotone transformation of the UM index, both measures rank the players in the same order. When all the agents have equal entitlements, monetary values and MSE ratios also agree in the ranks; however, given equation (12), the two pairs of indices rank agents in reverse order. I now apply the indices just defined in the previous examples.
Example 1 (continuing from p. 13). For any $K > 1$, the Egalitarian solution is defined by (7), with the common utility obtained by the agents given by (8). Now, $\bar r_1 = \bar r_2 = 3$ and the indices that measure the quality of the Egalitarian solution for each agent are described in Table 6, which reports the general formulas together with the numerical values for $K = 1.2$. As expected, the differences in monetary values are compensated by the difference in perceived quality of the received items. When $K = 1.2$, the smaller monetary share received by agent II is compensated by a larger RD index. By receiving the least valuable bundle, the agent will get an average increase of 1.2974 stars over any random allocation worth 0.848. Conversely, agent I will receive a bundle worth 1.152 that, when compared with a random bundle of the same value, is worth 0.3801 stars less. Plotting the UM and the RD indices as functions of $K$, the curves' rank in both graphs is, as expected, identical and opposite to that of the monetary values, and the lines of agents I and II cross at $K \approx 1.2391$.
A positive RD index denotes a gain over the central rating. Example 1, however, shows that an RD index can be negative -denoting a loss over the central rating.
From a global perspective, however, the loss cannot affect all the agents.
Proposition 3. If $z^e$ is an Egalitarian allocation, then $\bar\nu(z^e) \ge 1$, where $\bar\nu(z)$ denotes the average of the agents' UM indices, weighted with the corresponding monetary shares.

Proof. According to (17) and to the fact that $\sum_{i \in N} \mu_i(z^e) = M$, I can write
$$\bar\nu(z^e) = \sum_{i \in N} \frac{\mu_i(z^e)}{M}\,\nu_i(z^e) = \sum_{i \in N} \bar U_i(z^e) \ge \sum_{i \in N} w_i = 1,$$
where the last inequality follows from the Fair Share Guarantee.

In the same spirit of the previous definitions, $\bar\rho(z) = \log_K(\bar\nu(z))$ can be considered a valid centrality index for the RD indices. By Proposition 3, $\bar\rho(z^e)$ is always non-negative.
Assessing a value for K
The implemented procedure relies on determining a value for the parameter $K$. How should it be chosen? First of all, I examine what happens when $K$ approaches the extreme values of its range. As $K$ gets arbitrarily large, each agent's utility narrows down to the goods with the highest rating. Let $G^{\max}_i$ be the set of goods that received top ratings from agent $i$. It is easy to prove the following:

Proposition 4. When $w_i = n^{-1}$ for every $i \in N$, as $K \to \infty$, the normalized utility of good $a$ for agent $i$ converges to
$$\bar u_{ia} \to \begin{cases} m_a \Big/ \sum_{b \in G^{\max}_i} m_b & \text{if } a \in G^{\max}_i, \\[4pt] 0 & \text{otherwise.} \end{cases} \tag{18}$$

Considering the asymptotic values of the agents' optimal utilities, I focus on a closed formula for the special case where agents have equal weights and there is a common set of goods, $G^{\max}_c$, which are top-rated by all the agents, and if two agents gave their top rating to a good, then all the agents did, i.e. $G^{\max}_i \cap G^{\max}_j = G^{\max}_c$ for every $i, j \in N$. Let $z^e(K)$ be an Egalitarian solution associated with a given value $K$, and let $d_i = \sum_{b \in G^{\max}_c} m_b \big/ \sum_{b \in G^{\max}_i} m_b$ be the asymptotic normalized utility of the common set $G^{\max}_c$ for agent $i$.
Proposition 5. When $w_i = n^{-1}$ for every $i \in N$, as $K \to \infty$,
$$\bar U_i(z^e(K)) \to 1 - \frac{n-1}{\sum_{j \in N} d_j^{-1}} \qquad \text{for every } i \in N. \tag{19}$$
Proof. As $K \to \infty$, the normalized utility of good $a$ for agent $i$ converges as indicated in (18). As $K$ grows, in order to have an efficient allocation with the normalized total utility of all the agents equal, goods in $G^{\max}_i \setminus G^{\max}_c$ must be assigned to agent $i$, goods outside $G^{\max}_i$ can be assigned to any agent since their contribution in utility becomes negligible, while goods in $G^{\max}_c$ must be distributed (and possibly split) among all the agents in order to keep the total utilities equal. As $K \to \infty$, the normalized utility for agent $i$ of the goods converges to $d_i$ on $G^{\max}_c$ and to $1 - d_i$ on $G^{\max}_i \setminus G^{\max}_c$. To keep equality among the utilities, the fraction $t_i$ of $G^{\max}_c$ that is given to each agent $i$ is computed as the solution of the following linear system:
$$1 - d_i + t_i d_i = 1 - d_{i+1} + t_{i+1} d_{i+1}, \quad i = 1, \ldots, n-1, \qquad \sum_{i \in N} t_i = 1. \tag{20}$$
It turns out that
$$t_i = \frac{\sum_{j \in N} D_j - (n-1) D_i}{\sum_{j \in N} D_j}, \qquad \text{where } D_i = \prod_{j \ne i} d_j. \tag{21}$$
To show that (21) is the solution of (20), we rewrite the first $n - 1$ equations in the system as $d_i(t_i - 1) = d_{i+1}(t_{i+1} - 1)$ and note that (21) makes both sides equal to $-(n-1)\prod_{j \in N} d_j \big/ \sum_{j \in N} D_j$. To verify the last equation in (20), note that the sum of all the numerators in (21) gives the denominator, and therefore $\sum_{i \in N} t_i = 1$. The common utility level $1 - d_i + t_i d_i$ yields (19).
When there is no overlap in the top ratings, i.e. $G^{\max}_c = \emptyset$, then $\bar U_i(z^e(K)) \to 1$ as $K \to \infty$. Furthermore, when $n = 2$, (19) becomes
$$\bar U_i(z^e(K)) \to 1 - \frac{d_1 d_2}{d_1 + d_2}. \tag{22}$$
For all the other cases, i.e. when some goods received a top rating from some (but not all) of the agents, or when agents have different weights, the asymptotic value can be computed by means of a system of linear equations, but providing a closed form formula is too complicated and probably useless, given the number of different cases that should be considered. When $K$ is large, only the top-rated items count, and items with lower ratings are unable to compensate for gaps in the required utility level. As $K$ grows, a paradox may occur: items can be assigned to agents with infinitesimal utility. This is particularly apparent in the case of unequal entitlements.

Example 3. Consider agents I and II with weights $w_I = 6/7$, $w_{II} = 1/7$ and 3 goods (A, B, C). The goods' monetary values and ratings are given in Table 7.
When $1 < K \le K_0 \approx 1.14727$, agent I gets item B, agent II gets item C and the two agents share item A, with the share of the first agent increasing to 1. When $K > K_0$, agent I gets items A and B, while the two agents share item C, with the share of agent I converging to 5/6 as $K \to \infty$. While agent I gets most of item C as $K$ grows, the utility of this item for the agent becomes negligible, as it is $O(K^{-4})$ as $K \to \infty$. With a value as low as $K = 2$, for instance, agent I receives a share of item C approximately equal to 0.66892, a share that will count for an approximate amount of 0.00607 of the normalized utility of the agent. The above example shows an instance where setting $K$ even less than one unit away from 1 may not be a good idea. To find out what happens at the other end of $K$'s range, the following continuity result is helpful.
Proposition 6. The optimal value of the linear program (9) is continuous with respect to the parameter $K$.

Proof. It is easy to show that both the linear programming problem (9) written in canonical form (i.e., with all the constraints expressed as inequalities together with the nonnegativity of all the variables), and its dual, satisfy the hypotheses of Proposition 8 in Wets [33]. Then, Theorem 2 from the same reference holds, and the optimal value of both the primal and the dual is continuous with respect to the parameter $K$.
If $K = 1$, every piece of information about the agents' preferences is lost, and any division of the goods in which the bundles' monetary value is proportional to the weights is optimal.

Corollary 7. When $K$ approaches 1 from the right, the agents' normalized utilities for the Egalitarian solution converge to the corresponding entitlements.
Proof. Apply Proposition 6 and note that, as $K \to 1^+$, the IPS shrinks to a line and the utility of every good coincides with its market value.
Consider a sequence $\{K_h\}$ with $K_h \to 1^+$ as $h \to \infty$. If $z^e(K_h) \to z^*$ as $h \to \infty$, then, according to Proposition 6, in the limit Egalitarian solution every good has the same utility (given by its monetary value) for every agent. If the allocation remains the same for $h$ large, the limit solution keeps the same structure.
The solution looks reasonable for the previous example, but may not distinguish between goods with high or low ratings as long as the difference in the agents' ratings remains the same. From the previous discussion and examples, it emerges that $K$ should be chosen close enough to 1, so that even goods with the lowest rating retain enough value, but not too close or exactly equal to 1, so that differences in the ratings count. I propose a simple elicitation procedure in which agents are asked to evaluate the interval of admissible utility values and compute the ratio between the interval's end points. Agents may evaluate this ratio jointly or separately and, in the latter case, the average value is computed. Denoting such a ratio as $R$ and considering a rating scale with $2q + 1$ steps, it would be natural to set $K = \sqrt[2q]{R}$. For the applications that follow, I will use a 5-star rating system, thus $q = 2$. I also assume that a top-rated good is worth 50% more than a least-rated one. Therefore, I will assume $K = \sqrt[4]{3/2} \approx 1.10668$.
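In code, the calibration is a one-liner (my notation):

```python
R, q = 3 / 2, 2          # agreed top-to-bottom utility ratio, 5-level scale
K = R ** (1 / (2 * q))   # = (3/2)**(1/4) ≈ 1.10668
```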
In the proposed procedure, agents rate each good individually, with no constraint on the total number of stars that each agent can assign. The following example illustrates why putting bounds on the total number of stars may be a bad idea.
Example 4. Consider an instance with 4 agents (I, II, III and IV) and 4 goods (A, B, C and D), each with a price tag of 100. Suppose that each of the first three agents is very interested in one good: agent I in good A, agent II in good B, agent III in good C, and they all have a mild interest in good D. Finally, suppose that agent IV is interested in goods A, B and C, but has very little interest in good D. If agents were able to assign the ratings freely, the preferences shown in Table 8a are what we would expect.
If, instead, agents were only allowed to distribute 10 stars among the goods, the profile for agent IV would change into the one described in Table 8b. Let $z^u$ be the Egalitarian allocation resulting from the market values and the rating profiles in Table 8a, together with $K = \sqrt[4]{3/2}$, and let $z^r$ be the Egalitarian allocation in which the ratings profile for agent IV has been modified as in Table 8b, due to the constraint. We get two quite different allocations. Based on the data symmetry, other solutions for the unconstrained problem are obtained by assigning good D to agent I or II, with the fractions described in (23). It would be hard to describe the outcome $z^r$ as satisfactory for agent IV. Note that if Table 8b were the true preference profile for agent IV, solution $z^r$ would be justified by the fact that the difference between the ratings of good D and the other three goods is moderate, and would explain the assignment of the least favoured item in return for a larger market value.
Model Robustness
Every model that aims at describing the utility of a group of agents using a common interpretative key relies on assumptions that allow for an elicitation process that is bearable for the agents.
The inevitable cost of this construction is a simplification and a homogeneity that may cut out some details of the rational process. A reality check is particularly important for the DPR utility model introduced here, since it relies on facilitating the revelation of preferences by the agents, with the aim of addressing the largest possible population of potential users and making them aware of the procedure's functioning.
A validation of the model's credibility must rely on a proper experimental setting. A first encounter with the economics labs can be found in [14]. The experiment refers to an early stage of the project, in which students at Luiss University expressed their preferences for a set of goods and were asked to choose between the Egalitarian and the Nash/Competitive solutions, or to reject the solution altogether and "go to court" (i.e. receive a curtailed payoff). A resumption of the laboratory work to test the validity of the DPR model is out of the scope of the present work. Instead, I rely on simulations to test the parameters' robustness. I therefore assume the DPR model to be valid, but with a misspecification of the chosen parameters. I assume the goods' market values to be assessed by an impartial expert, or to be agreed upon by the agents themselves. In both cases the agents should not, or could not, complain about the outcome. The focus is on two other issues:

1. How wrong is the model when too few rating levels are specified?
2. How wrong is the model when the magnification factor K is different from the "true" one describing the agents' utility?
Regarding the first question, I examine what happens when the agents' utility cannot be captured by the usual 5-level rating scale, because the agents are capable of calibrating their preferences over a richer discrete range. I consider a larger rating scale with 11 levels. The choice is motivated by the fact that, if the magnifying factors are chosen to guarantee the same utility for the highest and lowest ranking in both rating scales, then only 3 levels of the larger rating scale have a perfect equivalent in the smaller scale. These are levels 1, 6 and 11 in the larger scale, which correspond to levels 1, 3 and 5 in the smaller one. The other 8 levels of the larger scale do not have a perfect match in the smaller scale. Nevertheless, a discerning and rational agent who is asked to use the smaller scale will proceed by proximity, grouping the richer rating scale as {{1, 2}, {3, 4}, {5, 6, 7}, {8, 9}, {10, 11}}.
Using Mathematica 12.3, I ran a simulation where the following situation is repeated 10,000 times: 8 items, each with a random market value ranging between 100 and 1,000 (euros), are contended among 4 equally important agents, whose ratings are randomly assigned on the 11-level rating scale. Each utility is transformed into the 5-level rating scale by proximity, but is evaluated according to the original larger scale. The normalized utilities of the 4 agents are no longer equal to each other, and the allocation is no longer optimal, because one or more agents show a utility level lower than the Egalitarian value computed for the larger rating scale. The gap between the utility of the Egalitarian solution for the true model and the lowest of the agents' utilities (i.e. the value of the suboptimal allocation according to the objective (3)) is recorded. To facilitate the comparison of errors among the simulations, the gap is divided by the Egalitarian level to return a relative loss. Figure 4 shows a histogram of the loss distribution in the simulations. The data shows an average loss of 2% on the true optimal value. I turn now to the misspecification of the magnification factor $K$. Again, 8 items, each with a randomly assigned market value between 100 and 1,000 (euros), are divided among 4 equally important agents whose randomly assigned ratings are correctly described by a 5-level rating scale. The announced value is $K = \sqrt[4]{3/2}$, but the true value differs by a random gap. Allocations are computed using the announced value, but evaluated by the true value of $K$. Again, the computed allocation is no longer optimal and agents have different true normalized utilities; the relative error is computed as the difference between the (true) Egalitarian value and the lowest reported utility, divided by the Egalitarian value.
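The first of the two simulations can be roughly re-created as follows (a Python sketch, assuming the `egalitarian` helper given after (9); the proximity grouping and the magnifying factors follow the text, while the sample size and the random seed are illustrative):

```python
import numpy as np
rng = np.random.default_rng(0)

K11 = 1.5 ** (1 / 10)   # 11-level scale, same end-point utilities as...
K5 = 1.5 ** (1 / 4)     # ...the 5-level scale with K = (3/2)**(1/4)
lookup = np.array([0, 1, 1, 2, 2, 3, 3, 3, 4, 4, 5, 5])  # proximity grouping

def relative_loss():
    m = rng.uniform(100, 1000, size=8)
    r11 = rng.integers(1, 12, size=(4, 8))       # "true" 11-level ratings
    r5 = lookup[r11]                             # coarsened 5-level ratings
    w = np.full(4, 0.25)
    _, t_true = egalitarian(m, r11, w, K=K11, q=5)
    z5, _ = egalitarian(m, r5, w, K=K5, q=2)
    u = m * K11 ** (r11 - 6)                     # evaluate z5 on the true model
    ubar = (u * z5).sum(axis=1) / u.sum(axis=1)
    return (t_true - (ubar / w).min()) / t_true

losses = [relative_loss() for _ in range(1000)]  # the paper uses 10,000 runs
```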
A first batch of 20,000 random instances with a gap in the range $[-0.1, 0.1]$ was launched and the results are shown in Figure 5. Figure 5a plots the relative errors against the magnitude and the direction of the deviation. Figure 5b compares the distribution of errors from positive deviations with those from negative ones. The Kolmogorov-Smirnov test rejects the hypothesis that the two sets of data are generated by the same probability distribution. However, the descriptive statistics are very similar: the distribution of the errors from positive deviations has a mean of 0.024813 and a variance of 0.0003706, while the distribution from negative deviations has a mean of 0.026702 and a variance of 0.0004393.
A second batch of 20,000 simulations considers a positive random deviation not greater than 0.4. The data is then divided into 4 groups according to the magnitude of the deviations. The histograms of the four distributions are shown in Figure 6. The descriptive statistics for the four distributions are shown in Table 9. Both the mean and the variance grow with the gap, with a linear growth for the mean.
In the following example, I investigate what happens in the most extreme case, i.e. the one achieving the largest error out of the 20,000 instances. When $K = \sqrt[4]{3/2}$, the procedure returns an allocation, referred to below as (24). If $\sqrt[4]{3/2}$ is the correct value for $K$, agent III receives only items rated three stars - the highest ratings for those goods. If $K$ grows, but the allocation remains the same, the normalized utility of those goods decreases (because the same agent assigned 5 stars to other goods). Instead, all the other agents received items rated 4 or 5 stars, and their normalized utility increases with $K$. The utility levels, together with the value of the "true" Egalitarian allocation for the corresponding $K$, are shown in Figure 7. Clearly, allocation (24) is no longer optimal when $K$ exceeds $\sqrt[4]{3/2}$, and the sub-optimality grows with $K$.
Figure 7: The utility values for allocation (24), together with the correct Egalitarian value, as a function of K.

When the "wrong" magnifying factor is chosen, not only does the allocation become suboptimal, but other properties may fail. For instance, Figure 7 shows that, for a sufficiently large $K$, allocation (24) fails the Fair Share Guarantee property. A similar simulation study was undertaken to verify the model's robustness against inefficiencies when the parameter $K$ is misspecified. As before, the utility of the optimal allocation with respect to the announced value $K = \sqrt[4]{3/2}$ is computed under a randomly generated true value of the same parameter. As before, 8 items with random market values between 100 and 1,000 are divided among 4 agents whose ratings are randomly generated. The utility vector is then compared against a cloud of 20,000 randomly generated points in the IPS generated by the true $K$. Half of these points correspond to randomly generated integer allocations and their convex combinations, while the remaining half comes from randomly generated fractional allocations in which each $z_{ia}$ is drawn from a uniform $[0, 1]$ distribution, and then the allocation is normalized to become feasible. If dominating points are found, the size of domination, averaged over the agents, is computed and the maximum of these values is recorded. If no dominating point is found, a null value is recorded. The positive values may underestimate the domination magnitude and the simulation may report falsely undominated allocations, although the cloud of points is fairly large. Figure 8 shows the outcome of 500 simulations, with randomly generated goods' market values and agents' ratings. Dominated solutions are indicated by blue points: the x-coordinate indicates the size and direction of the deviation between the announced and the true value of $K$, while the y-coordinate indicates the estimated average domination size. Orange points on the x-axis show the undominated cases.
For positive deviations in the value of $K$, the plot shows a predictable behavior in which the size of domination grows linearly with the size of the deviation, and the undominated cases become sparse and disappear before the deviation size reaches 0.1. The negative deviations, instead, show a quite surprising pattern: with the exception of two cases in which the absolute deviation size is negligible, all the simulations show that the optimal values obtained from misspecified parameters are undominated. This behavior deserves further investigation. The simulations show that an improper choice of the rating scale may affect the optimality of the allocation, but it will only marginally distort the utility values. The choice of the magnifying factor is more critical, because the sub-optimality of the proposed allocation is proportional to the gap between the true and the proposed value of $K$. The misspecification of the parameter also affects the efficiency of the proposed solution. Therefore, great care must be employed in calibrating its value.
Legal applications: Company law
The legal workgroup of the European project described 36 different cases where the fair division of an asset among two or more parties is required by law in two different areas: family law (inheritance, divorce) and company law (liquidations, terminations of partnerships). I now show how the procedure illustrated in Section 3.2 can help solve one of these cases, regarding the liquidation of a company in which the three partners are entitled to different shares. It is important to notice that the described case took place before the procedure was set up, and the project ended before I could interact further with the legal workgroup. Therefore, I interpreted the agents' preferences based on the case description and, since the descriptions of the agents' preferences were rather succinct, I enriched them with some additional elements. The example, therefore, does not describe the actual behaviour of the agents but rather their possible actions, given the reported elements. Here is the case description provided by the legal workgroup.
I, II and III concluded a partnership contract in 2006, agreeing to contribute their work and/or property to achieve a common objective - a small carpentry factory and a store for selling goods. They had different stakes/contributions that determined their shares as joint owners. Person I was a carpenter with experience especially in kitchens and bedrooms. He contributed equipment (valued at 35,000 euros) and of course his "know-how" and experience. Person II had business premises large enough for the factory and for the store, and this was his contribution. Person III contributed 30,000 euros in cash. After the financial crisis, the business began to deteriorate, so person II proposed changing the purpose of their business to stocking and selling electronic appliances that would be directly imported from China. II still thinks that he is the only one who can decide about the purpose of the business premises. I was disappointed because they did not need him or his work anymore. III only cares about profit. The content of their common asset (joint ownership) changed during the decade. They bought new machinery, but they also had a special website for selling furniture with the possibility of online interior design as an additional service. To set up this website, they had to spend 4,500 euros, and they pay 1,200 euros monthly for software licences and website maintenance fees. They decided to dissolve the joint ownership and the first step that the court had to take was determining their shares. The court decided that I has 3/9, II has 5/9 and III has 1/9 of the business. By determining their shares, joint ownership was transformed into co-ownership. Upon the dissolution of co-ownership (in May 2016), the assets consist of all of the above-mentioned items but also include new machinery (valued at 20,000 euros), store items (valued at 30,000 euros), and a profit of 15,000 euros.
In the process of partitioning co-ownership, I wants all the machinery, but also a part of the property where the factory was located because he wants to continue running the same business by himself. II wants a part of the profits to start with his idea and all the business premises. He is also interested in the website because he wants to sell online. III is interested in money alone and proposes to sell the business as a whole.
In addition to the given description, I further assume that the three partners agree on a value of 25,000 euros for the website. Moreover, the agents agree to leave money as one of the disputed items. Table 11 gives a list of the items and their values and ratings by the partners, compatible with their statements. By setting $K = \sqrt[4]{3/2}$, the central ratings can be computed, and the indices in Table 12 characterize the division. The MSE ratio ranks the agents in the order I, II and III, indicating that agent I gets a monetary share, weighted with the corresponding entitlement, which is larger than that of agent II; this, in turn, is larger than that of agent III. The unequal treatment is compensated by a reversed order for the RD indices: the allocation provides agent III with a gain of 3.1492 over the central rating. In turn, the gain of agent II with respect to the central rating is only 0.6827, and that of agent I is even smaller. Notice that the satisfaction that III gets for receiving the most treasured good is compensated by a lower market value for the received goods.
The procedure for two agents
When the division takes place between two agents only, Brams and Taylor's Adjusted Winner (AW) procedure can be used. The procedure asks the two agents to distribute a fixed number of points - or utilities - among the contended items. Items are then ordered figuratively on a line according to the utilities' ratios. Finally, a splitting point is tentatively sought so that equality in utility is obtained by assigning the goods on either side of the point to one agent each, together with a proper allocation of the items at the point, which may require at most one split item. For more details on the procedure, we refer to Chapter 4 of Brams and Taylor [12].
In order to use AW, ratings should be converted into utilities by using the power rating formula (4), and then normalized. The resulting utility values will be expressed as positive real numbers, not integers. This preliminary work is not needed, except for the relaxation to real numbers, because items can be directly arranged on the ordered line according to the rating differences. In fact, when the power rating model holds, for any $a \in G$,
$$\frac{u_{1a}}{u_{2a}} = K^{\,r_{1a} - r_{2a}},$$
and goods keep the same order on the line, whether utilities' ratios or rating differences are considered. As in the general case, the appropriate indices (UM and RD) measure the quality of the optimal allocation in terms of monetary value and rating difference.
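Since the utility ratio for good $a$ reduces to $K^{r_{1a}-r_{2a}}$, the AW line can be built by sorting rating differences directly, with no utility computation at all. A small sketch (names and ratings are illustrative):

```python
def aw_order(r1: dict, r2: dict) -> list:
    """Sort goods from most favourable to agent 1 (largest rating
    difference) to most favourable to agent 2 (smallest)."""
    return sorted(r1, key=lambda a: r1[a] - r2[a], reverse=True)

order = aw_order({'house': 5, 'car': 2, 'art': 4},
                 {'house': 3, 'car': 4, 'art': 4})
# -> ['house', 'art', 'car']
```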
Legal Application: Divorce
I examine another instance provided by the legal workgroup of the European project. As in the previous application, the project's conclusion prevented me from interacting further with the legal workgroup. I therefore include some additional descriptions regarding the agents' likes and dislikes, and the ratings reflect my interpretation of the agents' intentions, rather than their recorded behaviour. This is the case as reported by the legal workgroup: W, wife of H, asks for the statement of cessation of the civil effects of the marriage, three years having passed since the judgment of personal separation.
The goods in common comprise eight items; item 8 is a sum of money equal to 1,500,000 euros.
The spouses are both professional financial operators in the risk capital market and are involved in several types of entrepreneurial activities. For this reason, both have an interest in retaining company holdings. The wife is also interested in the valuable furniture for herself, as part of her entrepreneurial activity involves the buying and selling of works of art. For his part, H requires the assignment of works of art and vintage cars, as he is a collector.
To better outline the preferences, I further assume that the wife is interested in the family house and has some interest in the seaside resort apartment, while the husband has agreed to live in the inherited apartment.
Money (item 8) can either be considered as an item of the division or it can be distributed in equal parts between the parties. In the previous example, I considered the first option, while here I choose the second alternative to point out the fact that the procedure returns a satisfactory division without money transfers. Based on the short description, I set the ratings of the two parties in Table 13. Figure 9 shows the assignment of the goods to groups and the utility ranges for the two agents. For each point, two ranges are computed: the utility of the goods on the left (right, resp.) for agent 1 (agent 2, resp.), excluding and including the goods located at the point. Starting from a central group (typically, the one in which the difference is zero, if it is nonempty), the procedure verifies whether the two ranges cross. If they do not, the splitting point is moved in the direction that reduces the distance between the two ranges. When the two ranges finally cross, an allocation is returned that assigns all the items to the left (right, resp.) of the splitting point to agent 1 (agent 2, resp.). The items at the point are allocated so that utilities are equalized. If necessary, one item is split between the agents. Starting from the items in group 0, I move leftward because the range for agent W is above that of agent H. At "+1" the two ranges cross, and equality is attained by assigning items 2 and 4 so as to obtain an Egalitarian allocation. Since this can be done in two ways while splitting one item, I get two optimal allocations, $z^e_1$ and $z^e_2$. The indices in Table 14 help define the quality of the division as perceived by the two agents: in both solutions, the larger market share obtained by W is offset by a slightly lower RD index. I note that the sum of money given to the two agents at the beginning of the procedure can be used to assign the only split item to one of the agents. In both solutions, it seems reasonable for W to buy the smaller share originally assigned to H.
What the Agents Should Know
Whatever the number of agents is, an important issue for implementing the procedure is the degree of knowledge that the agents should have about the division process before they participate in it. Every detail of the procedure should be public knowledge among the agents, though it is hard to imagine each of the participants being fully aware of every mathematical detail of the algorithm that returns the allocation. On a more practical level, we can think of a list of clues to be given to the agents in order to make them more aware of the rating process. These indications are not meant as a full replacement of the procedure's description, nor are they meant as axioms. The two-agent setting, with the horizontal line described in step c) of the procedure and pictured in Figure 9, helps explain why these clues work.
Indications for the agents

1. Agents should assign high ratings to the goods they wish most for themselves, and low ratings to the goods they are ready to leave to somebody else.
2. Suppose an agent has tentatively decided on a set of ratings for all the goods. Before passing those ratings to the mediator, the agent may review the ratings with the following indications in mind:

(a) If an agent decides to raise (lower, resp.) the rating of a good, while keeping all the other ratings fixed, the chances of receiving the good increase (decrease, resp.). This holds because the good changes position on the horizontal line of AW. For instance, if agent 1 increases the rating, the good moves leftward on the horizontal line, and the increase in the rating indicates the movement distance. We use the term "chance" because the allocation depends on the difference of the two ratings, not just on the single agent's rating. The statement can be made more rigorous by assuming that the rating of the other agent is generated by a fixed probability distribution. The probability of getting the good then increases with the item's rating.
(b) If an agent increases or decreases the ratings of all the goods by the same amount (provided this is allowed by the tentative ratings), nothing changes. This happens because all goods are moved in the same direction and by the same distance on the horizontal line.
3. At the same time, obtaining goods with higher ratings has a cost: if there are two Egalitarian solutions, then the solution that contains more goods (or parts of them) with higher ratings has a lower market value.
The last indication is formally justified by the following result, which holds for any number of agents.
Proposition 8. Suppose that, for one agent $i \in N$, $r_{iA} > r_{iB}$, $A$ and $B$ being two of the contended goods. Furthermore, suppose that $z'$ and $z''$ are two Egalitarian allocations with
$$z'_{iA} \ge z''_{iA}, \qquad z'_{iB} \le z''_{iB}, \tag{25}$$
with at least one of the inequalities strict. Moreover,
$$z'_{ia} = z''_{ia} \qquad \text{for every } a \ne A, B. \tag{26}$$
Then $\mu_i(z') < \mu_i(z'')$.

Proof. Since $z'$ and $z''$ are Egalitarian allocations in the same problem, their normalized utilities must coincide. Thus $\bar U_i(z') = \bar U_i(z'')$ and, by (26),
$$m_A K^{r_{iA} - \bar r_i}\,(z'_{iA} - z''_{iA}) = m_B K^{r_{iB} - \bar r_i}\,(z''_{iB} - z'_{iB}).$$
Dividing both terms by $K^{r_{iB} - \bar r_i}$ and rearranging the terms, it becomes
$$m_A K^{r_{iA} - r_{iB}}\,(z'_{iA} - z''_{iA}) = m_B\,(z''_{iB} - z'_{iB}).$$
By (25) both terms are positive and, since $r_{iA} > r_{iB}$ holds,
$$m_A\,(z'_{iA} - z''_{iA}) < m_B\,(z''_{iB} - z'_{iB}).$$
This, together with (26), implies $\mu_i(z') < \mu_i(z'')$.
In the legal application, two solutions, $z^e_1$ and $z^e_2$, are found. Comparing the two solutions, W has a larger part of good 4 and a smaller part of good 2 in $z^e_1$, and good 4 has a higher rating than good 2 for W. All the hypotheses of Proposition 8 are satisfied and, as expected, $\mu_W(z^e_1) < \mu_W(z^e_2)$. The same is true for H, but with the roles of $z^e_1$ and $z^e_2$ inverted; therefore, $\mu_H(z^e_2) < \mu_H(z^e_1)$.
Concluding Remarks and Further Work
This work illustrates a procedure for allocating divisible goods among agents in such a way that both the agents' preferences and the goods' market values contribute to defining the fairness criterion. The procedure is simple, as agents rate the goods on a simple discrete range, and computationally fast, since only linear programming techniques are involved; moreover, it keeps the number of split items to the minimum needed to achieve perfect equitability in the agents' utilities. The procedure has been used in an applied project that involves people in professional roles from many fields, many of them with little or no mathematical background. As was to be expected, though, the applied work raises many new research questions. Several open problems have been pointed out in the previous section. Other directions include:

a) The choice of the parameter K. In the present model, the addition of a star to a good's rating increases its utility by the factor K > 1. By setting K = 3/2, I obtained quite reasonable results in the examined applications. Can an optimal value for K be defined based on theoretical and/or behavioural and experimental observations?

b) Increased Model Flexibility. The DPR utility model is assumed to work with the same parameters for all the agents. Agents may have different discernment abilities. A refinement of the present model should include personalized parameters for each agent.
c) Analysis of Special Cases. Can the procedure be made easier for some instances? For instance, when all the goods have the same value, does the procedure coincide with AW?

d) Divisibility Constraints. The model assumes all goods to be divisible. We have already seen that this is not a major problem even for inherently indivisible goods, because a good's split can be mimicked by means of ownership shares or by monetary compensations among the agents.
In any case, requiring that one or more goods be allocated in their entirety to one of the agents can easily be incorporated as additional integer constraints in (9). The new solution, however, may fail to satisfy the perfect equality in the agents' utilities that the described procedure guarantees.
A remedy for this problem is to define new properties that apply to the context of mixed divisible and indivisible allocations of goods. An example of this change of perspective is provided by the recent work of Bei et al. [4], where a novel notion of envy-freeness is found that is guaranteed to exist in the hybrid situation just described.
e) Division of goods and bads. The list of items to be divided may contain liabilities ("bads" in the economic jargon) such as debts, obligations or duties to perform.
The recent works by Bogomolnaia et al. [8] and [9] reveal that introducing bads is not a simple corollary of the case for goods: surprising and unexpected results are quite common. It would be interesting to extend the procedure to the mixed case of goods and bads, taking the already known paradoxes into account.
f) Market value and the Nash/Competitive solution. Finding a relationship between the Nash/Competitive solution and the market value, using the DPR utility model or any other proposal, would combine the many strengths of that solution with the practicality of tying the agents' subjective satisfaction to money.
Discussion on control measures and the influence of engineering change
This paper discusses the main causes of engineering change and its influence on engineering cost, and sets out reasonable methods and approaches to avoid and control engineering change. By analyzing engineering change in detail and linking it closely with project cost, it aims to achieve scientific management and effective control of investment.
Introduction
Engineering change refers to any change to or optimization of the construction materials, construction technology, construction methods, structural layout, use function, component sizes, technical indices or quantities of part or all of a project, carried out in accordance with the procedures agreed in the construction contract, whether due to changes in site construction conditions, the requirements of the construction unit, optimization of the design scheme, or instructions from the supervision engineer.
Engineering changes cover a wide range and involve complex relationships, which makes their calculation and audit complex and difficult. Good control of engineering change makes it possible to grasp real-time investment information for a project, and the cost estimate of a change is one of the main bases for deciding on it. Good control of engineering change is therefore essential to the smooth progress of the whole project investment, and it facilitates investment-change estimation and the timely release of dynamic investment information. The data show that, under the traditional construction mode, different engineering changes can greatly affect the price, cost and operation of a project (Table 1). It is therefore of great significance to do this work well.

Changes caused by design problems

At present, some construction units do not pay reasonable attention to, or supervise, the design work, and a large number of engineering designs are still produced without design bidding and without selecting the best design scheme, lacking an awareness of high-quality work. To save design and consultancy costs, some construction units may have construction drawings produced informally through private design, which is likely to lead to incomplete, mismatched or erroneous drawings that differ greatly from the actual site conditions. The result is frequent modification of design drawings and hence engineering changes, seriously affecting construction progress and making project cost control difficult. In addition, some design personnel of low competence focus only on technical aspects and ignore the economic, social and environmental benefits of the project, resulting in unnecessary design changes during construction, delaying the construction period and increasing the project cost.
Changes due to improvement of construction standards
In actual projects, changes in quantities, construction materials, construction technology and so on often arise from the construction unit's unilateral requirement to raise construction standards. This can make the project cost greatly exceed the project budget and go beyond the investment limit. For example, the original design of a real estate sales center and club decoration project called for ceramic tiles on the floor and wallpaper on the walls. At the end of construction, the leader of the construction unit inspected the site and was not satisfied with the decoration effect of the lobby floor and walls, which had been completed according to the drawings. It was required that the floor tiles be chiseled off and replaced with marble, and that the walls be replaced with feature walls, resulting in engineering changes.
Changes caused by lax implementation of project construction procedures
In actual engineering construction, some construction units do not implement the necessary construction procedures and do not make scientific, feasible preparations for the project, so the project starts in a hurry. This results in design, construction and change proceeding at the same time. Once problems are encountered during construction, changes are made at will, and afterwards the necessity and rationality of the changes cannot be effectively controlled, managed or supervised. No corresponding responsibility is assigned for the large losses and waste caused by such changes. In addition, in order to maximize the probability of winning the bid, the construction unit often lowers its price, deliberately quotes below the normal market price, or uses unbalanced quotation methods. The bad result is that the project cost greatly exceeds the approved investment amount, and may even create a funding gap, which leads to various problems.
Design changes caused by imperfect management of the development organization
In actual engineering projects, some construction units, in order to obtain more profit, use various methods to cause unnecessary engineering changes. Some construction units even make changes without the consent of Party A, presenting a fait accompli that then has to be accepted. If the development organization's site managers lack a strong sense of responsibility, experience and ability, and its management is imperfect, it is easy for the construction unit to exploit all kinds of loopholes. At the same time, in the actual construction process, the construction unit sometimes unilaterally changes the design effect of the building, alters its use function, or adjusts the floor plan and decoration methods, resulting in increases or decreases in engineering quantities.
Failure of supervision to perform its duties
In actual projects, some supervisors do not control the project or its cost; they often just go through the motions. For example, project changes proposed by the construction party are generally approved, signed and sealed without being reviewed. Some changes are even signed after the project has been completed and the works concealed, with no further investigation on site. Such irresponsible, careless supervision work, and the casual attitude towards signing off changed works, is also a reason for the frequent occurrence of engineering changes.
Unreasonable administrative intervention
At present, some leaders of the competent departments in charge of construction projects, and some leaders of the construction units, often look only at the total cost in the budget and settlement. This leads to a lack of comprehensive and systematic project cost management, and a lack of whole-process, multi-faceted, real-time and dynamic management. In the project preparation and construction stages, they do not carefully organize the construction party, the supervision party and the other parties to participate in a joint review of the drawings. When problems are found during project construction or after it is completed, they then have to be discussed and solved, changes or modifications proposed, and even parts that are under construction or have been completed removed, resulting in delays to the construction period and a great waste of materials and manpower.
Strengthening the preparatory work of construction projects
Doing a good job in the preparatory work, investigation and analysis of an engineering project is one of the effective ways to reduce engineering changes during construction. In particular, before the project plan is designed, the construction project plan should be discussed several times, so that the design unit can fully understand the process requirements, functional requirements and corresponding special requirements of the building, and determine the overall design concept, design style and overall layout, so as to avoid engineering changes.
Controlling the design of drawings
Good design drawings are the foundation of a project's smooth progress and the first key to controlling engineering changes and design modifications. First, through the comparison of several schemes in engineering design bidding, the best can be selected, making the engineering design more scientific, complete, applicable and economical. Second, after the preliminary design drawings are produced, they should be reviewed to avoid design changes during construction. Third, approval, design and construction should not proceed simultaneously: before the project has been approved, the drawings have been reviewed and the relevant environmental protection procedures have been completed, the relevant government departments must resolutely prohibit construction of the project.
Strictly implementing the relevant rules of engineering change
Personnel involved in a construction project must treat engineering changes with a spirit of high responsibility and a strict, scientific attitude. An engineering change should be carefully investigated in advance, be supported by the relevant graphic data, be in line with the existing design requirements, and meet the construction requirements.
Engineering design changes must be carried out strictly under the constraints of the relevant provisions of the construction contract, and no engineering change may invalidate the contract. Project changes shall be strictly implemented in accordance with the procedures required by the contract terms and shall be submitted for examination and approval. No project change shall be constructed without authorization, that is, without the approval of the construction unit and the design unit.
Reasonable management of engineering change
From the beginning of the project to the completion of settlement, engineering changes run through the whole construction process of a project. How engineering changes are managed is therefore an important link in reducing their adverse impact on the whole project.
Standardizing the management of project bidding and construction contracts
Project bidding shall be carried out following strict procedures. After a construction unit wins the bid, the construction contract shall be signed in accordance with the provisions of the bidding documents, and provisions in conflict with the relevant provisions of the bidding documents shall be strictly avoided. Attention should be paid to strengthening the examination and management of construction contracts.
Accurate preparation of the bill of quantities
In the bidding stage of the project, the bill of quantities should be carefully prepared. The prepared bill of quantities should be reviewed by the relevant departments to check for omissions and make up deficiencies, so as to minimize deviations in the calculation of the base bid price and to reduce errors, omissions and duplications in the bill of quantities as far as possible.
Moving Beyond Resistance and Readiness: Reframing Change Reactions as Change Related Subject Positioning
ABSTRACT In this paper, line managers' experiences of, and discursive subject positioning in, a participatory work environment initiative, 'The Health Circle Project', in four nursing homes are examined. We focus on line managers' change related subject positioning by interviewing the managers of the four workplaces before and after the initiative, and conduct a comparative case study from a discursive psychology frame. The aim of this paper is to focus on change reactions from managers and move beyond a reductionistic dichotomy of change resistance/readiness. Instead, we focus our analysis on the change related subject positioning the managers engage in, and how they position both themselves and their subordinates. Hence, we examine how the line managers experienced the participatory Health Circle intervention, and how they reacted to the potential loss of power to discursively construct and define work environment problems caused by the initiative. The study exemplifies how the line managers experienced the Health Circle intervention as both confirming and challenging their subject positions as capable managerial subjects. Finally, in the light of the analysis, the potential unintended consequences of engaging in participatory work environment initiatives and similar activities are discussed. MAD statement Resistance to change is one of the most frequently used explanations for why change processes fail. The current study presents a more nuanced theoretical concept, change-related subject positioning, and explores how increasing employee participation can elicit unintended reactions from managers. This study hence contributes to our understanding of how an, in principle, positive change process of empowering employees to improve working conditions leads to a multitude of change related subject positioning from managers. Some managers embrace the change and position themselves in line with the employee participants. Others feel threatened, which in extreme cases leads to negative positioning of their subordinates.
Introduction
When change initiatives fail to meet the expected outcomes and implementation failures cannot fully explain the absence of positive findings, resistance to change or lack of change readiness are often considered as likely explanations (Dobson, 2001; Drzensky et al., 2012; Weiner, 2009). Resistance to change and readiness for change are in that sense core concepts regarding the psychological foundation underpinning organizational change success and failure. Substantial practitioner interest has been directed towards managing these two concepts in order to ensure that change processes lead to intended outcomes (Kotter, 1995; Maurer, 2010). There are numerous reasons for this interest. Studies show that on the one hand change resistance leads to both unsuccessful change processes (Lines, 2004; Pardo del Val & Martínez Fuentes, 2003) and poor wellbeing (De Jong et al., 2016), and on the other hand securing change readiness and support for novel initiatives makes success more likely (Lines, 2004).
The resistance and readiness literatures often focus on how employees hold and express change related attitudes, but the attitude of the line manager towards change initiatives is likewise of paramount importance for change success. For instance, studies of participatory initiatives to improve working conditions show how line managers' change readiness is linked to employees' change readiness, which in turn is related to the outcome of the initiatives (Nielsen & Randall, 2012). These issues become especially relevant as there are debates as to the prevalence of line manager change readiness and resistance towards the change initiatives in their organizations.
Though it may seem paradoxical that managers are not supportive of changes that they either initiate themselves or are set to implement, not being ready to accept changes in positioning and power relations is unsurprising (Edwards & Potter, 1992; Thomas et al., 2010), in particular when changes incur a redistribution of authority from managers to their subordinates. The present study contributes to nuancing the understanding of line managers' change reactions during a particular type of change process, namely participatory work environment interventions.
This paper presents a qualitative comparative case study using interviews with line managers from four Danish nursing homes. All four nursing homes took part in a work environment intervention which focussed on promoting employee involvement and empowerment. The differences in how line managers experienced and reacted to the participatory intervention make this study an interesting case for an exploration of line managers' reactions to participatory change initiatives.
Participatory Work Environment Interventions as a Particular Type of Change Process
Work environment interventions constitute a particular type of change process. While they are largely comparable to other change events, they also have unique characteristics. First, work environment interventions are often initiated and conducted as a collaborative effort of both employees and managers. Second, though they are organizational initiatives, they often build on the existing organization rather than change the organizational structure. And third, whereas organizational change often entails a risk of negative impact on the work environment and psychological well-being of employees (De Jong et al., 2016), work environment interventions obviously aim for improved well-being (Semmer, 2011).
One aspect endorsed by both practitioners (ETUC, 2004) and researchers (Abildgaard et al., 2018) as essential to achieving work environment intervention success is having employees participate in the implementation of the intervention. When it comes to methods to improve psychosocial working conditions, there are substantial arguments for why participatory methods would be ideal. Many interventions, including the one in the present paper, view the employees as experts in their own work environment; hence drawing on their expertise regarding the working conditions will likely produce action plans with a better fit to the organizational context (Nielsen & Randall, 2015). It has also been argued that being part of an initiative to improve the workplace will empower the employees, increase their collective efficacy (Abildgaard et al., 2020) and increase interest in further improving working conditions.
Line managers play a pivotal role in interventions as they are both the gatekeepers to upper level management and are managerially responsible for the employees participating in the activities. How important line managers are has been shown by studies with both positive and negative findings, i.e. studies that show the crucial role of supervisors for the successful implementation of initiatives (Lundmark et al., 2017) as well as studies that show how implementations fail when the support of line managers is missing (Aust et al., 2010; Karanika-Murray & Biron, 2015). A study by Abildgaard et al. (2020) likewise showed how the same initiative achieved different degrees of managerial support in the three participating organizations.
Although there is large agreement about the important role of line managers' support for the implementation of work environment initiatives, we still lack crucial knowledge about why some managers act more supportively than others. Why do some resist improvement initiatives that others are engagingly ready to support? In this article, we examine the change reactions of line managers more closely.
Reactions to Change: Empirical Findings and Theoretical Explanations
In this section, we will briefly review previous empirical findings and theoretical explanations for the commonly studied reactions in relation to change, change resistance and readiness, focusing in particular on line managers. Change resistance can happen for a number of reasons. Guth and Macmillan (1986) argue that line managers may resist change because they feel they lack the skills to successfully implement the suggested changes, because of doubts about its potential effectiveness, or because they perceive the goal of the change to be in conflict with their own goals.
It has further been suggested that managers can be reluctant to pass on the necessary skills and tasks to subordinate team members either because they fear that they may lose power or because they do not trust employees (Parker & Williams, 2001).
Theoretically, the phenomenon of resistance is contemporarily addressed from a variety of different perspectives. A widely employed theoretical frame is seeing resistance as intrinsic and dispositional. This entails a psychological approach in which resistance towards change is treated as a product of individual negative attitudes (Choi, 2011; Oreg, 2006), or as a dispositional trait related to personality factors (Oreg et al., 2011). Dent and Goldberg (1999) have demonstrated how pervasive it has been to explain resistance to change as being caused by individuals' fear of change or resistant 'personalities'. Change resistance has hence often been seen as an individual level phenomenon, which interferes with change implementation.
Resistance and Readiness as Subject Positions
An alternative perspective on change resistance, which we build on in this study, can be found in the social psychology inspired by the 'process' (Weick, 1979) or 'discursive' (Potter & Wetherell, 1987) turns. In this paradigm, phenomena such as attitudes, identity and personality are predominantly seen as relational and situational. Resistance and readiness are not understood as stable attitudes but instead as temporary positions. Such positions are labelled subject positions, a concept defined by Davies and Harré (1990) in the following way: 'A subject position incorporates both a conceptual repertoire and a location for persons within the structure of rights for those that use that repertoire' (p. 48).
Subject positions are hence articulated and inarticulate social positions related to, but not reducible to, formal positions and roles. Davies and Harré (1990) further emphasize that 'There can be interactive positioning in which what one person says positions another. And there can be reflexive positioning in which one positions oneself' (p. 48). This can be understood to mean that managers position both their subordinates and themselves, and vice versa.
Subject positioning is a constantly ongoing process. Such processes of positioning also take place during work environment initiatives, which entails that the participating employees and managers continuously position themselves and each other during the discussions. Subject positions likewise influence who has the informal rights to bring certain problems forward, who speaks for whom and with what authority (Wåhlin-Jacobsen, 2020).
Rather than treating statements of attitudes (e.g. expressing resistance or support towards a particular project) as direct reflections of a stable psychological trait, such claims are interpreted as discursive actions enacted to fulfil certain functions, and form the basis of subject positioning (Symon, 2005). As an example, change resistance has been argued to be partly motivated by threats to valued identities (Wåhlin-Jacobsen, 2019), suggesting that identification with the existing organizational state can be a source of resistance (Ezzamel et al., 2001). Alvesson and Willmott (2003) have likewise emphasized the interlinked nature of management and identity.
From such a discursive position, Thomas and Hardy (2011) further criticize how change resistance has often been polarized, viewing resistance to change as either pathological and undesired (as done by, for instance, Kotter, 1995) or as adaptive and praiseworthy (as done by, for instance, Ford et al., 2008). They suggest seeing resistance as a practice rather than something that needs to be demonized or celebrated (Thomas et al., 2010). In their view, the issues related to organizational change and resistance are processes of continuous negotiations of power-resistance relations in the processes of 'organizational becoming' (Thomas et al., 2010). Symon (2005) similarly criticizes the tendency to view line managers' lack of change readiness, or resistance to change, as a predictable and uniform attitude. The critique of resistance as a static concept has led to re-assessing the foundations of resistance, both in relation to employees as well as managers, viewing resistance more as a situational act of discursive subject positioning than as a stable intrinsic attitude (Symon, 2005; Thomas et al., 2010; Thomas & Hardy, 2011).
Change, Surprise and Enacted Subject Positions
In this paper, we aim to expand on the work of the discursive scholars of resistance to change and employ a wider subject positioning perspective on change reactions and behaviours. First, we emphasize that subject positioning is based on a repertoire of previous experiences, social relations, identity and positioning. Though it might seem like a banal observation, it suggests that the specific acts of positioning of oneself and others in the social fabric are done within a context and follow previous practices and events. Second, interpersonal positioning is based on a social structure of anticipation and expectation, in the sense that, as ethnomethodologists have shown (Garfinkel, 1967), unexpected situations produce challenges for our social practices. A surprise in social interaction leading to positioning conflicts is likely to cause both discomfort due to cognitive dissonance (Cooper, 2012) and escalating social conflict. In this sense, subject positioning in relation to change processes is more than merely resisting or being change ready; it is an ongoing and reactive positioning process towards the context and actions of others. Vough et al. (2017) conversely find an absence of change related conflict in a case study where a company implemented proactive employee self-initiated change, because changes were required to be discussed early on with management, hence avoiding surprise and socially conflicting situations (Vough et al., 2017).
In a broad sense, a combination of change and surprise is potentially volatile (Louis, 1980), also because there is a risk that what Schein (2010) labels the core basic assumptions in the organization will be violated, and a struggle of positioning will ensue to establish a new social order. The reason for emphasizing surprise and positioning dilemmas is that when major change events are introduced by an organization, it is seldom on the line managers' initiative, and even when it is, it is possible that the decision is motivated by (more or less direct) suggestions or decisions from higher organizational levels. Likewise, and more applicable to the present case, when changes imply the empowerment of employees, the workplace manager's privileged subject position of power might be threatened, which in turn can potentially lead managers to enact reactive and negative counter-positioning of their subordinates and themselves. The analysis in the present paper hence draws on the discursive psychology concept of subject positioning to study the processes of managerial change reactions in participatory initiatives. More specifically, our research question is: What change related subject positioning do the managers engage in during the participatory work environment initiative?
Method
The present study is a qualitative multi-site case study and the empirical material is from an intervention study in the Danish elder care sector testing the applicability of a participative method to improve the working conditions. The intervention study was carried out over a two-year period in four public sector eldercare centres.
The Participatory Workplace Intervention

The intervention was based on the German Health Circle method (Aust & Ducki, 2004). The aim of the health circle method is to identify problems in the work environment that cause work strain and might lead to negative health outcomes, to develop proposals for work environment improvements, and to implement solutions. The method aims to involve employees as much as possible through all phases of the project and is supported by an external facilitator.
As part of a research project conducted by the National Research Centre for the Working Environment (including two of the authors of this paper (JSA & BA)), the intervention was conducted in four eldercare homes from four different municipalities in Denmark. All municipalities in Denmark were invited to participate in the research project and, if they showed interest, they were asked to relay contact information for two eldercare homes that could participate. All the interested nursing home managers were invited to information meetings about the project, and participation was based on an active interest from the managers. Four municipalities in different parts of the country were interested and chosen for the project. In each municipality, the research group randomly assigned the two interested eldercare homes to either be an intervention workplace or a control workplace. However, in all eight workplaces a questionnaire survey about the working environment was conducted and a short report about the results was provided to all workplaces. Thereafter, only the four intervention workplaces received the intervention.
The Health Circle intervention was supported by a facilitator who had participated in a three-day training course about the Health Circle method organized by the research group. The results of the questionnaire survey were presented to all employees during a two-hour seminar. The results were discussed among all employees and, based on that, employees were asked to point out challenges in the working environment and how these could be solved. Also at the seminar, it was decided which 6-8 employees would be part of the Health Circle group that would work with the suggestions for improvement during four to six meetings conducted over the next six months. Employees were asked to point out colleagues who would represent the whole group of employees as comprehensively as possible, including for example younger and experienced employees, different departments, and different work shifts.
There are different versions of the Health Circle approach, especially with regard to the participation of the line manager in the health circle meetings (Aust & Ducki, 2004). In this version of the Health Circle concept, the line manager was only invited to participate in the first and the last Health Circle meeting. The idea behind this was to give employees the possibility to discuss the problems they experienced without feeling restricted. To keep employees who did not participate in the Health Circle meetings continuously informed about prioritized work environment problems and suggested solutions, a large pin board was prepared, continuously updated and shown in a staff room for everybody to see.
Case Descriptions
Here we briefly describe the four cases, i.e. the four nursing homes in which the health circle project was conducted. The descriptions are based on interviews with the manager and the shop steward before and after the intervention and were part of the preparatory visits aimed at gaining more knowledge about each workplace. The workplace and manager names have been pseudonymized.
Case 1: Sunflower House is a nursing home in a rural community with a long history of collaborative efforts to improve working conditions. Margaret, the manager of Sunflower House, was positive about participating in the project, to the point that she took on the role of the Health Circle facilitator after the project ended. During the Health Circle project the procedure at Sunflower House was altered so that Margaret received a quick briefing at the end of every Health Circle meeting.
Case 2: Seaview Residential is an eldercare centre in a rural community. The manager, Ellen, had long tenure at Seaview Residential, as had many employees. Prior to engaging in the Health Circle project, Ellen had been engaged in an appreciative inquiry based project (Cooperrider & Srivastva, 1987). The name of this former project was 'The Good Story' and the project aimed at making eldercare workers share positive experiences about their work. Several topic days about appreciative inquiry had been held for all employees at Seaview Residential to equip staff with knowledge that in turn was expected to make their everyday work more aligned with the ideas of appreciative inquiry.
Case 3: Pinewood is a relatively newly founded nursing home in a larger city; hence both its manager, Lee Ann, and the employees had a short tenure at this facility. The new workplace did to some extent lack the organizational structure (routine meetings, efficient and flexible shift planning practices) that some of the older eldercare centres had established. At Pinewood Nursing Home the Health Circle procedure was altered so that the line manager received a quick briefing at the end of every health circle meeting.
Case 4: Meadow Lodge is an eldercare centre in a smaller town that went through a number of organizational changes during the time of the Health Circle project. In the year prior to the project the nursing home was relocated. During the project the manager, Ivy, left the organization and the manager of another elder care institution in the municipality, Gabby, took over. Especially under Ivy's management there were conflicts between different groups of employees and some were dissatisfied with the management.
The line managers at the four nursing homes were all relatively experienced care centre managers and were all nurses by education, as is common in Danish nursing homes. Also with regard to employees, the four cases represent typical Danish nursing homes: the employees were mainly semi-skilled and unskilled nursing aides and almost all were women.
Data Sources
The analysis presented in this paper is based on interviews with nursing home managers after the series of health circle meetings had ended and after the health circle participants - including the nursing home manager - had conducted an evaluation workshop at the eldercare centre. The interviews were carried out by an experienced researcher and a research assistant who took notes during the interview. The purpose of the interviews was both to obtain factual information about contextual changes that had occurred during the health circle project and to ask the managers about their assessment of the approach and the changes it might have caused. Interviews followed a semi-structured format (Brinkmann & Kvale, 2015) and lasted approximately one hour each. The interviews were conducted by two members of the research group; the second and third author each participated in half of the interviews. As a source of validation of the analysis, it should be mentioned that either the second or the third author participated in most Health Circle meetings and steering committee meetings, providing substantial knowledge of the managers, their workplace and the Health Circle process. The observations also provided background knowledge for conducting the interviews, allowing the interviewers at times to challenge the interviewees if their accounts did not match the experiences of the researchers. The nursing home shop stewards and managers were also interviewed at the onset of the project, though these interviews have, in the present study, only been used for context and to write the case descriptions listed above.
Data Analysis Strategy
The interviews were transcribed verbatim and coded. All excerpts in this article are translated by the authors from Danish. First, to identify relevant pieces of the interviews for the study of change reactions, we used a bottom-up exploratory descriptive coding strategy (Saldaña, 2015). Codes were developed that addressed the following themes: (a) perception of the intervention process, (b) its outcome, (c) perceived impact of the health circle activities on different levels of the organization during the project, (d) the perception of management obligations, (e) management identity, (f) the perception of the employees as a group, as well as (g) the relationship with them. We recoded the material for these themes but, in addition, deliberately held the analysis open to extra information and perspectives, by having an open code for all the passages that were particularly interesting for the topic of managerial reactions to the Health Circle process but were not covered by the other codes. By reading, re-reading and discussing the coded material we arrived at the theoretical position of subject positioning theory and the enactment of change reactions. This led to a theory-based code development where we specifically developed a coding scheme focusing on (a) the enacted discursive subject positioning carried out by the interviewees as well as (b) their enacted discursive subject positioning of their subordinates.
Taking its empirical onset in the descriptively coded material, a second-order analysis (Saldaña, 2015) was conducted based on the codes developed from the theoretical framework of this study, that is, focusing on discursive subject positioning. To start with, themes that were represented in several cases were analysed. The iterative analytical process included reviewing the above-mentioned themes, defining and naming the themes, and selecting examples. The first author conducted the initial coding and analysis, and then discussed and refined the analysis together with the second author in an iterative process. This process produced five main themes: (a) Managers' perceptions of empowering employees, (b) Illegitimizing: neutralizing employee complaints, (c) Clash of perceptions - confrontation with employees' problem constructs, (d) Changes in manager-employee relations and subject positions, and (e) Experience of exclusion and betrayal. These themes are presented one at a time in the analysis, where we, in a narrative fashion, present and juxtapose the experiences of the managers in relation to the themes to provide a nuanced account of the complexity of subject positioning during change processes.
The analysis is inspired by the theoretical approaches which explain attitudes towards change as a result of subject positions being affected (Davies & Harré, 1990; Potter & Wetherell, 1987). For instance, the line managers' claims of specific and sometimes contradictory attitudes towards the health circle intervention were treated with interest rather than dismissed as incongruent. During the course of the analysis we became puzzled by, and interested in, the role surprise (Louis, 1980) plays in the analysis, especially with regard to the informants Ellen and Lee Ann.
Managers' Perceptions of Empowering Employees
The Health Circle method's focus on raising awareness of work environment issues among employees and its systematic, empowering way of dealing with problems was perceived differently by the four line managers, with perceptions ranging from very positive to more negative.
In general terms, Margaret welcomed the changes that occurred during the Health Circle project and credited the method for equipping not only the participating employees - but also herself - with a new 'way of thinking' about problems, which she found to be both meaningful and efficient. This collective awareness of the problems' true nature was presented by Margaret as the foundation for change and ultimately for the solving of work environment problems.
Another aspect Margaret articulated as an effective part of the change initiative was that of verbalization: the act of actively giving voice to experiences - specifically to what has been conceived as a problem. She provided a concrete example of this by sharing the story of how cooperation problems among teams were solved by addressing the issue within all work units:

Margaret: It has improved a lot. They have become really good at it [cooperating].
Interviewer: How?

Margaret: Well, we throw it out in the open! [We] said: "This doesn't work!". Then we incidentally called out a few people by name, but that's fine (…) But then I addressed it [the cooperation problem] again at our team development meetings and I asked them [the involved employees]: "How good are you when it comes to looking above your own "dup" [pointing at her nose]? Because someone's saying that you are never willing to offer a helping hand. Is that really how it is?" and then something happened.

This example shows how Margaret frames the verbalizing of problems, as practiced in the health circle meetings, as a powerful tool for changing behaviour and ultimately solving those work-related problems that contain a relational component.
Contrary to Margaret's subject position towards the Health Circle intervention as something she readily accepted and used to further her management agenda, Ellen, the manager at Seaview Residential, expressed negative sentiments towards the Health Circle project, arguing that bringing problems up for conversation would not help in solving them but rather make them 'grow'. In her opinion, the Health Circle method had too strong a focus on identifying problems, which she believed was one of the reasons why the project did not lead to any positive development at Seaview Residential. For her, the ongoing municipal initiative called The Good Story was 'drowned' by the Health Circle intervention. She explained how the Health Circle method's focus on identifying and solving problems had made her a firmer proponent of the Appreciative Inquiry method:

(…) it [the appreciative inquiry method] should be "the working method". Well, we actually have agreed in the work council that [appreciative inquiry] should be the method of working at Seaview. That's what we stick to. This way we have something when "grouch, grouch, grouch" comes at us. (…) We can use it to help each other to formulate things more constructively. (Ellen)

In the case of Seaview Residential, Ellen elaborates on her positioning of Appreciative Inquiry and the Health Circle as opposite poles. She explains how she wants to:

(…) look into resources. Because I have really come to experience that when focusing on problems [i.e. the Health Circle method], they'll grow. Really this is what we got a hold of with "the good story". Taking a look at the dreams that we have. What do we already have that is good? And what is it that we want? (Ellen)

Through this polarized framing, Ellen positions Appreciative Inquiry as a method where verbalization of complaints and concerns should be avoided and one should instead focus exclusively on the positive aspects of work, with the Health Circle method counter-positioned. In the interview, Ellen's descriptions consistently divide the two methods (Appreciative Inquiry and the Health Circle method) into the good one (which focuses on what we want and dream about) versus the bad one (which focuses on, and hence causes, problems); one that improved the workplace and one that did not. This positioning of herself, her workplace, and the two methods, in the retrospective interview, legitimizes her current choice of one method over the other, which in this case means a complete rejection of the Health Circle method. Such rejection in turn implies a rejection of the work done by the Health Circle group, i.e. a devaluation of the subject position of the employee participants in the health circle and their work on assessments of work environment problems and complaints. The three other managers positioned themselves on the continuum between Ellen and Margaret, with Lee Ann being somewhat sceptical, Gabby being positive and Ivy being highly critical towards the Health Circle project.
Illegitimizing: A Way to Use Positioning to Neutralize Employee Complaints
Regarding the work environment issues identified by employees, Margaret expressed how the Health Circle method in principle offered the opportunity to treat the issues brought up at the Health Circle meetings systematically and efficiently. She further explained that she decided to keep the Health Circle suggestion box permanently placed in the dining room, in which all employees were able to put a written note about any work environment problem they might have encountered:

… at this point we have really encouraged everybody to, you know, "if there's anything - put it in the box!". Don't worry if it's a proper issue for us. We'll deal with that judgement. 'Cause we don't want to have all that nagging about this and that. This way it [the work environment issues] gets systematized, assessed, "Poof!". (Margaret)

By offering this opportunity of getting problems assessed, it is legitimized that she and the rest of the Health Circle members simply encourage employees to 'put it in the box!' whenever they express concerns about work environment related issues. Simultaneously, 'putting a note in the box' became a potential gateway mechanism for having problems treated as legitimate work environment issues, addressed by the Health Circle, and not, in contrast, as illegitimate nagging.
The participatory element of the Health Circle method provided the employees with discursive power to affect what is considered a legitimate work environment issue: that is, if they put a note into the box, the Health Circle will assess their problem and hopefully eventually solve it. The interview also sheds light on a weak link in the process between 'putting a note in the box' and the complexity involved in getting a problem legitimized and solved, namely the Health Circle members' recognition and understanding of the problem. Margaret, for instance, explains how it was handled when a work environment screening survey, in conjunction with Health Circle workshops, revealed that four employees at Sunflower House had experienced being bullied at work. When they tried to investigate the problem further, they did not get a more concrete description of this problem and therefore decided not to use more time on it:

… We brought it up in all the fora that we had decided on. And then, one has to say "okay, this does not exist at Sunflower House anymore". Because nobody had come forward, right? So we won't do any more about it. Shelve. So whenever a problem is brought up [for discussion] and is found to be, "what is this?, there's not really anything in it", then we say "throw it out!". Then we don't want to waste any more time on it. (Margaret)

The quote illustrates the discursive nature of workplace reality: if a problem is not recognized, for instance by the Health Circle members, it risks being classified and enacted as non-existing. Consequently, there are no longer any legitimate excuses for complaining, or legitimate subject positions claiming the reported problem to exist. If problems are not written down and placed in the box, they risk not being considered as real problems. In a sense, Margaret's alignment of her own subject position with the Health Circle method and the involvement of employees in the process also had discursively repressive effects on the perceptions and positions of employees who were not part of the Health Circle group.
Clash of Perceptions - Confrontation with Employees' Problem Constructs
A source of challenges related to subject positioning in the Health Circle project was that the employees in the Health Circle were to define what was to be considered a work environment problem. This is, for example, expressed in the following quote from Lee Ann, the manager of Pinewood:

It's obvious that one cannot solve a problem if one does not know it exists, right? This might be some of what I have come to experience. (…) Maybe there were things that weren't that important, I don't know. For instance, I have been asking questions about that communication thing several times: "just exactly, what information is it that you've talked about that you do not receive?" That one is still unsolved. (Lee Ann)

As Lee Ann explains, when she was informed of some of the problems that the Health Circle had identified, she did not completely comprehend what the problems were, which hampered her participation in the problem-solving process. From the statement above it does not seem like this lack of shared understanding was experienced by her as particularly threatening. This is, however, in contrast with the impression Lee Ann gives when sharing her recollection of an early Health Circle meeting. Specifically, when the meeting was at its end and she was invited in for a quick briefing:

I think that one of the first meetings was very unpleasant. It felt like it was kind of an ambush. That people had talked about all the aspects that were not working and then you just enter and 'get shot at'. That, I think, was very unpleasant. (…) But it was also the way you were called in by the end of the meeting and you just got..!! And well, that's not … that's not collaboration, I am tempted to say. (…) I wasn't the least prepared for what it was and maybe I thought the negative attitude had been allowed to rule the meeting before I entered. (Lee Ann)

Lee Ann here provides a very vivid account of how she experienced the discursive constructions of work-related problems made and presented by the Health Circle participants. In this particular situation the facilitator noticed that Lee Ann felt overwhelmed by being called in to the Health Circle meeting and being confronted with her subordinates' opinions. Therefore, the facilitator consulted her afterwards, assuring her that this scenario had not been intended and taking on the responsibility that this situation would not be repeated. This is a dramatic example of how a new process and method, the Health Circle method, that allows for novel social processes has the potential to cause surprise and breaches of pre-existing implicit social contracts. Specifically, the candid expression of employee subject positions regarding problems in the workplace fuelled a clash of subject positioning with Lee Ann, both in relation to the project and to the employees in question. The example demonstrates the challenge of maintaining congruent subject positions during a change process.
In the case of Seaview Residential, the manager Ellen not only framed the Health Circle method as a problem-stimulating process, but also expressed a dismissive attitude towards the process facilitator. In the end, she came to blame both the facilitator and the Health Circle method itself for her negative experiences during the Health Circle intervention at Seaview Residential.
The clearest example of Ellen's negative response to the Health Circle intervention was when Ellen burst into tears in front of her employees. That happened after she read, on the Health Circle pin board for everyone to see, what the Health Circle members had discussed during their meetings. The information on the pin board covered a number of different aspects of the work environment that needed improvement, among them 'lack of influence' and a statement saying 'pseudo-democracy. Employees are not heard by management'. Ellen expresses how the note made her feel both personally responsible for the work environment issue and shut out of the discussion about work environment problems that took place at the Health Circle meetings:

… I felt really bad about it … Not because I think that employees weren't able to handle this [discussing problems and finding solutions] - they definitely can, and I don't need to be involved in everything. But I felt that in this way it was legalized that "now we talk about how problematic she is without telling her". That, I felt bad about. … And I said from the start of the intervention that I think it is very wrong that I can't participate [in the Health Circle meetings]. I am a part of this. I've felt bad, right? About not taking part. I think it has made … it has made me feel at least … there has been a kind of … there has been a polarization in some domains. That's it: that it is okay to talk about me and what problems I cause, instead of telling me. (Ellen)

From this statement, it becomes clearer why Ellen felt uncomfortable being left out of the Health Circle meetings and enacted a subject position in opposition to the Health Circle intervention. It is not just a question of losing the power to negotiate which work environment problems to prioritize. The discussion that takes place in her absence - which she strongly opposes - is a negotiation both of the nature of the work environment at Seaview Residential and of a subject positioning of her way of leading the nursing home. At that point, whether she was positioned as a capable manager or not was out of her reach.
Based on how Ellen felt the Health Circle group had defined her as the problem, it is understandable why she opposed and challenged the whole idea of letting employees define problems. In contrast, Margaret, who did not express the experience of being framed as the problem, enacted a far more congruent subject positioning in relation to the Health Circle project. This was perhaps due to the Health Circle project at Sunflower House contributing to the subject positioning of her as a competent, dedicated and successful manager. It is also worth mentioning that Sunflower House, during the Health Circle project, won an award for its efforts to improve working conditions and was presented as a success story in the eldercare sector.
Throughout the interviews, both Margaret and Ellen did, without any invitation from the interviewer, engage in a very direct positioning of their own managerial identity. Margaret did this by telling how everyone at Sunflower House cared about each other and showed mutual interest in each other's lives. She also stressed several times how she was one of those managers with a knack for organizational development, in contrast to those more comfortable with numbers. In addition, she was by far the respondent most frequently using the personal pronoun 'we' during the interview - a clear example of positioning herself and others as a collective, sometimes using 'we' to refer to the entire group of staff at Sunflower House, sometimes to herself and the members of the Health Circle group, and sometimes to the bigger group of municipal managers she felt affiliated with. In contrast, Ellen engaged in incongruent subject positioning, at one time expressing how she might be perceived as a very strong individual, while a few minutes later tearing this construction down with statements such as this one:

Ellen: But really, I'm becoming an old lady!

Interviewer: No! (laughing)

Ellen: Yes! Way too old. I can feel that …

Interviewer: (still laughing) That's the way it's going for some of us.
Ellen: One doesn't need to become a grumpy old lady. I think that that's what I'm becoming.
Interviewer: No, one doesn't. That's the way it is … It's not a good thing if that's the way you feel, anyway.
Ellen: Well at the time being it is … it is (laughing).
In summary, the potential in the project for employees to assess what constitutes a problem clearly has implications for the subject positions of their managers. Though the last quote might be extreme in its self-deprecating labelling as 'a grumpy old lady', it demonstrates vividly how employees, in concert with a participatory work environment change initiative such as the Health Circle project, could affect the subject positions of the manager and form new positions that are either collective or conflicting.
Changes in Manager-employee Relations and Subject Positions
Just as the Health Circle project intertwined with the positioning of the managers themselves, so too was it related to the managers' positioning of their subordinate employees. This goes both for the employees who participated in the Health Circle meetings and for those who did not.
Just as Ellen experienced being positioned in a less favourable way due to the employees' construction of problems, she herself conversely positioned her subordinates negatively in the interview, noting them as 'below averagely' motivated to participate in the Health Circle project. In comparison, she recalls her own attitude prior to the project as 'highly motivated'. Here she compares her remaining staff with a group of employees who left Seaview Residential at the beginning of the Health Circle project due to a municipal reorganization of the eldercare sector.
(…) the home-care group has been better than the in-house group at telling the good story, I suppose. And I have always thought managing the in-house group has been significantly more challenging (…) I think it comes down to individuals and culture. (…) When we launched the theme day [about] the good story, the home-care group said "Yes!", the in-house group said "Hmph!". (Ellen)

Ellen is positioning her remaining employees as demotivated and sees them as comfortable with the status quo, in contrast to the home-care group who left Seaview Residential due to restructuring. Ivy, the manager of Meadow Lodge at the beginning of the Health Circle project, did the same when she compared the employees at Meadow Lodge to the staff at her new eldercare centre: 'I thought of the girls that were at Meadow Lodge as leaden-footed and I had the same opinion back when I was still there', in contrast to the employees at her new workplace, whom she generally described as progressive and engaged.
Both Lee Ann and Ivy also perform a rather classic positioning of employees as good or bad (mirroring McGregor's (1957) motivational theories, Theory X and Theory Y), in the sense of positioning employees as either intrinsically motivated by making a difference (Theory Y) or as generally lazy and motivated by individual goals, needing management to closely control and supervise them (Theory X). Lee Ann clearly positions herself as adhering to Theory Y, and her subordinates to Theory X, during the interview. Despite Lee Ann's polarized positioning of strong versus weak individuals, she reports how she actually experienced that those of her employees participating in the Health Circle - after a period of frustration that reduced the group to what she labels the elite - became happier and took more initiative in general. They were taking on more responsibility and tried to solve tasks themselves that they would have passed on to her in the past, and she admits feeling more comfortable seeing her subordinates happy and less stressed.
It is tempting to evaluate this shift towards a positive work climate as a result of the participatory work in the Health Circle project. Nevertheless, as Lee Ann explains, a major reason might be the change in staff that occurred during the intervention:

I do also think that there have been some colleagues here and there who were just negative about everything new (…) you maybe had to get some employees out who didn't really want to work the way we do. Where one is close to them [the elderly citizens living in the care home] all the time, right? (…) By replacing some [employees], more strong employees have come. (Lee Ann)

The ultimate consequence of employees being positioned as weak or less strong, motivated or demotivated, seemed to be the risk of individuals positioned as 'weak' or 'demotivated' either leaving or losing their jobs. In this sense, the change-related positioning seems to have had a substantial impact on the workplace.
Experience of Exclusion and Betrayal
It is noteworthy that the managers of all four nursing homes report positive changes in the relationships among the employees: they have become a more tight-knit group, more caring, etc. Even though it appears as if Margaret also felt left out, she does not articulate any discomfort with the way the employees defined problems. Rather, it seems as if she is curious and open-minded towards the problems that might be revealed in the investigation.
If I didn't do it [read the summaries from the Health Circle meetings], it was my own fault (I'm the one to blame). I did feel left out, but still it wasn't like it was kept secret. I only had to take a walk down to the Health Circle board, right, and so I did a couple of times! [Then I thought] "Oh, have they discovered [such and such problems]?", right? (Margaret)

Ellen surprisingly reports the same effect, saying the employee group became more cohesive. But, as the only line manager, she questions this development. Ellen takes the stance that the Health Circle project has been 'very problematic', causing a closer connection, and more cohesive subject positions, among the employees, but at the same time weakening her relationship with them. When asked whether she has experienced any development in the employees' work relations that may be caused by the Health Circle project, Ellen explains:

… They are closer. In this way, you might say that the dialogue [between the employees] has gotten closer as well. (…) You see, I think that we have been very close in our work relation in the past and that we have had a positive tight co-operation. Like, we have been present in each other's work. That's how I have felt. (…) We made a promise - we did that several years ago actually - that we do not say stuff about one another that we can't tell straight to each other's face. But that's a thing that has to be worked on on a continuous basis, I suppose. (Ellen)

The experience of being left out or excluded from a strengthened work community seems not only to be linked to assuming a resistive position but also to be connected to a feeling of betrayal in Ellen's case. Though she does not use this word herself, she emphasizes more than once that she and the rest of the staff at Seaview Residential had an agreement - a contract - about not talking about one another behind each other's backs, implying that her employees broke this contract while engaging in the activities that took place at the Health Circle meetings. This subject positioning as 'the betrayed' can be seen as the vocalization of an extreme form of subject positioning of both the betrayed manager and the betraying employees. While Margaret and Lee Ann also mention how they felt left out during the discussions at the Health Circle meetings, Ellen is the only one who seems to perceive this as a painful disconnection from relations with her employees, who at the same time established closer relations between themselves.
Discussion
The aim of this paper was to extend the knowledge about how line managers enact subject positioning in relation to change during the implementation of a participatory work environment intervention. We examined how line managers experienced the Health Circle intervention, how they reacted to the employee-empowering intervention and how these reactions intertwined with the subject positioning of the line manager. Finally, the study shed light on a related phenomenon: how the line managers reciprocally engaged in the positioning of their subordinates, and questioned some of the possible negative consequences that might have followed from conducting this particular participatory work environment intervention.
Resistance to Change as a Reaction to Losing Power?
It has previously been suggested that line managers resist changes because they feel they lack the skills to successfully implement the change process, have doubts about the potential effectiveness of the changes or perceive the goal of the strategy to be in conflict with their own personal goals (Giangreco & Peccei, 2005; Oreg, 2006). The fear of losing power and the lack of trust in employees' competences have also been seen as explanations for line managers' attitudes and actions, passively or actively obstructing the implementation of change (Giangreco & Peccei, 2005; Thomas & Hardy, 2011).
The analyses presented in this paper offer an alternative perspective on the issues of resistance, readiness and support, by suggesting the use of the concept of subject positioning from post-structuralist inspired discursive psychology (Harré & Gillett, 1994; Potter & Wetherell, 1987). From this processual and discursive perspective, resistance, support or readiness should be seen as momentary social positions that are highly dependent on the specific situation and context (Thomas et al., 2010). This is demonstrated in all four cases, where attitudes towards the Health Circle project were situated and tightly connected to the line managers' direct confrontation with employees' discursive construction of problems and the managers' enactment of change reactions throughout the process.
The retrospective interviews constituting the main empirical source in this analysis are obviously no exception. Here, social positioning is also being done by both line managers and researchers. In this specific context, the interview is framed as an evaluation of a change initiative brought to the organization by the very same researchers that carried out the interviews (or at least they represent the same institution). A negative evaluation could thereby easily contribute to an indirect negative positioning of the researchers. Likewise, the line managers were also being indirectly evaluated and positioned by the researchers, who inevitably also position themselves in this discursive negotiation.
The Line Managers' Role in Implementation
Researchers have suggested that line managers play a central role in participatory initiatives as change agents by exposing employees to the initiative or by obstructing such exposure (Nielsen et al., 2006). However, when dealing with the implementation of complex projects relying on employee participation, the chain of command is not clearly established, which in turn makes the role of the line managers negotiable. The authority to initiate strategic initiatives might reside with management, but local decisions about how changes are implemented are made by employees at the workplace level. Depending on the specific design and approach, this potentially leaves the line manager with little control over the process. Concurrently, line managers often remain responsible for the performance of the workplace. In other words, they may be asked to manage the implementation of a top-down decision to encourage bottom-up change (Kieselbach et al., 2009). Resistant reactions to reduced influence might hence be directed both upwards and downwards. If the line managers' resistance is furthermore to be considered a discursive phenomenon, a reaction to the experience of being positioned as an incapable manager, resistance might not only manifest as explicit obstruction of specific action plans, but also in the negative positioning of employees (maybe even those already most vulnerable). We saw this in varying degrees and forms in the cases of Ivy, Lee Ann and Ellen, and argue that the varying forms of positioning of employees as incompetent or change-resistant should be considered when addressing change reactions from a perspective of enacted subject positioning.
Implications: Further Research and Practice
Within organizational psychology, employees' and line managers' motivation is often addressed as a necessary precondition for participatory initiatives (von Thiele Schwarz et al., 2016). Even though this might provide the best outset for change, there is no guarantee that the initial expressions of motivation and interest persist during the entire course of a project. As this study shows, by taking on the perspective of enacted subject positioning it becomes evident that the motivation to change and to engage in implementation may shift when individuals experience various aspects of the intervention. Consequently, we suggest developing practices that make it possible for both employees and line managers to negotiate change-positive subject positions throughout the intervention, and hence avoid surprises, dissonance, conflict and betrayal. If the line manager is not participating in the main intervention activities, one should closely evaluate whether this position of being on the outside in any way leads the manager to position themselves counter to the project and, if so, initiate actions that allow and invite the manager into change-congruent positioning to prevent conflict. For future research, recording and analysing longitudinal data, for instance discussions during change initiatives, would enable analyses of positioning-in-the-making. This would be a logical next step in the study of managerial change-related subject positioning.
Conclusion
Through an in-depth analysis of interview material, we have demonstrated how four different line managers reacted to the empowering of employees within a participatory work environment initiative. The results suggest that line managers' change reactions should be seen as situated and tightly connected to the direct confrontation with employees' discursive construction of problems and the changes in potential subject positions that were related to the Health Circle intervention. Our study suggests that the commonly used constructs of 'resistance to change' and 'readiness to change' paint an overly static and simplified picture. We would like to encourage organizational members, practitioners and researchers to view their own context, and the subject positioning taking place, in a more reflexive manner and thereby provide a better chance of successfully conducting planned change, without surprises and perceived ambushes. We hope that in highlighting some of the possible pitfalls of change projects in general, and participatory work environment intervention projects in particular, change agents are slightly better equipped to identify, and address, potential problems arising from discursive struggles during change.
Preparation, characterization and application of MgFe2O4/Cu nanocomposite as a new magnetic catalyst for one-pot regioselective synthesis of β-thiol-1,4-disubstituted-1,2,3-triazoles
Magnesium ferrite magnetic nanoparticles were synthesized by a solid-state reaction of magnesium nitrate, hydrated iron(III) nitrate, NaOH and NaCl salts and then calcined at high temperatures. In order to prevent oxidation and aggregation of the magnesium ferrite particles, and also to prepare a new catalyst of copper supported on the magnetic surface, the MgFe2O4 was covered by copper nanoparticles in alkaline medium. Magnetic nanoparticles of MgFe2O4/Cu were successfully obtained. The structure of the synthesized magnetic nanoparticles was identified using XRD, TEM, EDS, FT-IR, FESEM and VSM techniques. The prepared catalyst was used in the three-component one-pot regioselective synthesis of 1,2,3-triazoles in water. The various thiiranes bearing alkyl, allyl and aryl groups, together with terminal alkynes and sodium azide in the presence of the MgFe2O4/Cu nanocatalyst, were converted to the corresponding β-thiolo/benzyl-1,2,3-triazoles as new triazole derivatives. The effects of different factors such as time, temperature, solvent, and catalyst amount were investigated, and performing the reaction using 0.02 g of catalyst in water at 60 °C was chosen as the optimum conditions. The recovered catalyst was used several times without any significant change in catalytic activity or magnetic property.
Multi-component reactions (MCRs) are reactions in which three or more reactants react to generate only one product. MCRs present a convenient synthetic procedure for producing complex molecules with structural variety and molecular intricacy.32 These kinds of reactions provide major benefits like environmental compatibility, high efficiency, quick and simple operation, reduced reaction times and energy savings. Compared to conventional methods, these reactions require fewer steps to achieve the final product and can be performed in one pot. Therefore, MCRs play significant roles in different research fields such as biomedical research, synthetic organic chemistry, the generation of libraries of bioactive compounds, pharmaceutical and drug discovery research, industrial chemistry, etc.33-36 An ideal multicomponent reaction permits the concurrent addition of all reactants, reagents and catalysts under the same reaction conditions. One-pot reactions represent an efficient strategy in modern synthetic chemistry.37 Minimizing the number of synthetic steps in obtaining products from starting reactants is highly favorable in organic synthesis. The perfect regioselectivity and high purity of the desired products, and excellent yields, are among the other remarkable advantages of multicomponent one-pot reactions.
Ferrite nanoparticles, due to their magnetic properties, are easily separable. Recently, they have received great attention in biomedicine38-40 and organic synthesis.41-44 Nevertheless, nano-ferrites have hydrophobic surfaces with a large surface-to-volume ratio and strong magnetic dipole-dipole attractions, and they always suffer from adsorption problems because of their strong tendency toward self-aggregation and the low quantity of functional groups.45,46 To prevent agglomeration of magnetic nanoparticles (MNPs) and improve their efficiency, surface coating of the MNPs is required.47 Aqueous MNP dispersions can be achieved by surface coating with copper nanoparticles.
In continuation of pioneering works on nano-ferrites,48-54 herein we wish to report an efficient, three-component click reaction protocol for the synthesis of β-thiol-1,4-disubstituted-1,2,3-triazoles as new triazole derivatives from sodium azide, thiiranes, and terminal alkynes in the presence of MgFe2O4/Cu magnetic nanoparticles as a novel and environmentally friendly heterogeneous catalyst in water (Scheme 1).
Instruments and materials
All materials were purchased from the Merck and Aldrich Chemical Companies with the best quality and were used without further purification. IR and 1H/13C NMR spectra were recorded on Thermo Nicolet Nexus 670 FT-IR and 500 MHz Bruker Avance spectrometers, respectively. Melting points were measured on an Electrothermal IA9100 microscopic digital melting point apparatus. The synthesized nanocatalyst was characterized by XRD on a Bruker D8-Advanced diffractometer with graphite-monochromatized Cu Kα radiation (λ = 1.54056 Å) at room temperature. The TEM image was recorded using an EM10C-100 kV series microscope from the Zeiss Company, Germany. FESEM images were obtained using a FESEM-TESCAN instrument. The energy dispersive X-ray spectrometry (EDS) analysis was taken on a MIRA3 FE-SEM microscope (TESCAN, Czech Republic) equipped with an EDS detector (Oxford Instruments, UK). The magnetic property of the synthesized nanocatalyst was measured using a VSM (Meghnatis Daghigh Kavir Co., Kashan Kavir, Iran) at room temperature. HRMS analyses were carried out in the electron impact mode (EI) at 70 eV. The Cu content of the catalyst was determined with a Perkin Elmer Optima 7300DV ICP-OES analyzer.
Synthesis of MgFe2O4 nanoparticles
MgFe2O4 nanoparticles were synthesized by a solid-state procedure according to our reported investigation.48 Briefly, in a mortar, Mg(NO3)2·6H2O (0.512 g, 2 mmol), Fe(NO3)3·9H2O (1.61 g, 4 mmol), NaOH (0.64 g, 16 mmol), and NaCl (0.232 g, 4 mmol) were mixed in a molar ratio of 1 : 2 : 8 : 2 and ground together for 55 min. The reaction proceeded with the release of heat. After 5 minutes of grinding, the mixture became pasty and its color changed to dark brown. To remove the additional salts, the obtained mixture was washed with double-distilled water several times. The produced mixture was dried at 80 °C for 2 h and then calcined at 900 °C for 2 h to obtain the MgFe2O4 nanoparticles as a dark brown powder.
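As a quick arithmetic check of the recipe (not part of the original protocol), the quoted reagent masses can be reproduced from standard molar masses. The following minimal Python sketch assumes the usual literature molar masses for the four reagents:

```python
# Sanity check of the solid-state recipe: convert the stated mmol amounts
# to grams using standard molar masses (g/mol) and compare with the paper.
MOLAR_MASS = {
    "Mg(NO3)2.6H2O": 256.41,
    "Fe(NO3)3.9H2O": 404.00,
    "NaOH": 40.00,
    "NaCl": 58.44,
}
RECIPE_MMOL = {"Mg(NO3)2.6H2O": 2, "Fe(NO3)3.9H2O": 4, "NaOH": 16, "NaCl": 4}

for reagent, mmol in RECIPE_MMOL.items():
    grams = MOLAR_MASS[reagent] * mmol / 1000.0
    print(f"{reagent:>14}: {mmol:2d} mmol -> {grams:.3f} g")
# Prints 0.513, 1.616, 0.640 and 0.234 g, matching the quoted
# 0.512, 1.61, 0.64 and 0.232 g within rounding.
```

The agreement confirms that the gram quantities and the stated 1 : 2 : 8 : 2 molar ratio are mutually consistent.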
Preparation of MgFe2O4/Cu nanocomposite
In a round-bottom flask, a solution of CuCl2·2H2O (0.68 g, 4 mmol) in distilled water (50 mL) was prepared and MgFe2O4 (1 g) was then added. The mixture was stirred vigorously for 30 min, followed by the gradual addition of KBH4 powder (0.1 g) in order to reduce the Cu2+ cations to copper nanoparticles. Stirring of the mixture was continued at room temperature for 1 h. The black MgFe2O4/Cu nanocomposite was separated using a magnet, washed with distilled water and then dried under an air atmosphere.
Solvent-free synthesis of thiiranes from epoxides: general procedure
The various thiiranes were prepared using a solvent-free method reported in our previous research.55 Briefly, a mixture of epoxide (1 mmol) and alumina-immobilized thiourea (0.752 g, 25% w/w) was ground in a mortar for an appropriate time at room temperature. The progress of the reaction was monitored by TLC using n-hexane : EtOAc (5 : 2) as the eluent. After completion of the reaction, the mixture was washed with EtOAc (3 × 5 mL). The combined washing solvents were evaporated under reduced pressure to give the crude thiirane for further purification by short-column chromatography over silica gel.
2.5. One-pot synthesis of β-thiol-1,4-disubstituted-1,2,3-triazoles from thiiranes catalyzed by MgFe2O4/Cu in water: a general procedure

In a round-bottomed flask equipped with a magnetic stirrer and condenser, a solution of the thiirane (1 mmol), alkyne (1 mmol) and sodium azide (0.078 g, 1.2 mmol) in H2O (5 mL) was prepared. MgFe2O4/Cu nanocomposite (0.02 g) was then added to the solution and the resulting mixture was stirred magnetically for 2-4 h at 60 °C. The progress of the reaction was monitored by TLC using n-hexane : EtOAc (10 : 2) as the eluent. After completion of the reaction, the magnetic nanocatalyst was separated using an external magnet and collected for the next run. The reaction mixture was extracted with ethyl acetate and then dried over anhydrous Na2SO4. After evaporating the organic solvent, the crude β-thiol-1,4-disubstituted-1,2,3-triazoles were obtained. Removal of the solvent under vacuum, followed by recrystallization from EtOH/H2O (1 : 1), afforded the pure β-thiol-1,4-disubstituted-1,2,3-triazole derivatives in 80-96% yield (Table 2). All products are new compounds and were characterized by HRMS (EI), FT-IR, 1H NMR and 13C NMR spectra. The spectra of the products are given in the ESI.†
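For completeness, the percent-yield arithmetic behind the reported 80-96% figures is sketched below; the helper is generic, and the molecular weight and isolated mass in the example are hypothetical illustrations, not data taken from the paper:

```python
def percent_yield(isolated_mass_g: float, product_mw: float,
                  limiting_mmol: float = 1.0) -> float:
    """Percent yield from isolated mass; here the thiirane and alkyne
    (1 mmol each) are limiting, so the theoretical yield is 1 mmol."""
    theoretical_g = product_mw * limiting_mmol / 1000.0
    return 100.0 * isolated_mass_g / theoretical_g

# Hypothetical example: 0.254 g of a product with MW 281.4 g/mol
# isolated from a 1 mmol run corresponds to roughly 90% yield.
print(f"{percent_yield(0.254, 281.4):.0f}%")
```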
Synthesis and characterization of MgFe2O4/Cu nanocatalyst
Although MgFe2O4 has a large surface-to-volume ratio and therefore possesses high catalytic capability due to its wide contact surface, it tends to aggregate so as to minimize its surface energy. Moreover, the naked magnesium ferrite nanoparticles have high chemical activity and are easily oxidized in air, generally resulting in loss of magnetic properties and dispersibility. Therefore, it is important to provide an appropriate surface coating to maintain the stability of the MgFe2O4 particles.
Coating with an inorganic layer, such as silica, a metal or non-metal elementary substance or a metal oxide, is important because the protecting shell not only reduces the aggregation of the nanoparticles in solution and stabilizes the magnetic nano-ferrite, but can also be used for further functionalization and improves the efficiency of the catalyst.56 Ferrites are highly valuable catalyst supports because they take advantage of […]. This is owing to the effect of the copper shell coating, where each ferrite particle is separated from its neighbors by the coated layer, diminishing the magnetostatic coupling between the particles. The samples exhibit typical ferromagnetic behavior at room temperature. The narrow cycles and the hysteresis loops show the behavior of soft magnetic materials with low coercivity.
3.1.2. Fourier transform infra-red (FT-IR) spectrum. […]

TEM images of the nanocomposite are shown in Fig. 4. As can be seen from the images, two sizes of particles are clearly distinguishable, with differences in their colour and morphology. The larger grey spots with cubic shape were attributed to the MgFe2O4 particles, which were coated with the small black segments of copper nanoparticles. Fig. 5 shows FESEM images of the MgFe2O4/Cu nanocomposite that confirm the presence of nanoparticles with diameters ranging from 29 to 43 nm. The obtained results are in good agreement with the TEM and XRD data.
The chemical composition of the MgFe2O4/Cu nanocomposite was confirmed with EDS data. In this analysis, Cu, Mg, Fe, and O signals are observable (Fig. 6). Additionally, the exact concentrations of Mg, Fe and Cu were determined by ICP-OES, and the obtained values were 10.2, 33.35 and 31.68 wt%, respectively, which are in good agreement with the EDS data.
Catalytic activity of MgFe2O4/Cu for the synthesis of β-thiol-1,4-disubstituted-1,2,3-triazoles
In order to optimize the reaction conditions, we investigated the one-pot click synthesis of 2-phenyl-2-(4-phenyl-1H-1,2,3-triazol-1-yl)ethane-1-thiol from styrene episulfide, sodium azide and phenylacetylene under various reaction conditions. Initially, temperature, solvent, reaction time and the amounts of catalyst and reactants were studied as experimental factors, and the results are summarized in Table 1. The favorable outcome was obtained using styrene episulfide (1 mmol), sodium azide (1.2 mmol) and phenylacetylene (1 mmol) in the presence of nano-MgFe2O4/Cu (0.02 g) as catalyst in water at 60 °C (Table 1, entry 4). It is noteworthy that the presence of the catalyst was necessary for the reaction to take place, and in the absence of the nanocomposite the reaction did not proceed even after 11 h (entry 1). The quantity of catalyst was optimized using various amounts of nano-MgFe2O4/Cu (0.005, 0.01, 0.02 and 0.03 g), and the best result was obtained with 0.02 g of catalyst.
The catalyst concentration plays a significant role in the optimization of the product yield. An increase in the amount of catalyst from 0.01 to 0.02 g not only increased the triazole yield but also accelerated the rate of the reaction (entries 2-4). Using larger amounts of the nanocatalyst did not improve the product yield (entry 5).
In order to study the solvent effect, the cyclization reaction was tested in various solvents. The results showed that polar solvents such as water, acetonitrile, ethanol, methanol, ethyl acetate and dimethylformamide were effective and usable, whereas non-polar solvents were not suitable for this purpose (entries 6-13). The reaction was carried out successfully in H2O, which was selected as the best option because the product yields were lower in all other solvents and because water is a green and eco-friendly solvent (entry 4).
The effect of temperature was also investigated, and the reaction was tested at different temperatures (25, 45 and 60 °C). The product yield was not satisfactory at room temperature (25 °C) after 10 h (entry 14). Increasing the temperature simultaneously increased the reaction rate and product yield, and the desired triazole was synthesized in 70% yield after 6 h at 45 °C (entry 15). A further increase of temperature up to 60 °C gave the product in excellent yield within a short reaction time (entry 4). The reaction was also tested in the presence of bare MgFe2O4 and Cu nanoparticles separately under the optimized conditions, and the results showed that although the magnesium ferrite nanoparticles improve and enhance the catalytic activity of the nanocomposite, the copper particles play the essential role in the progress of the reaction and their presence is vital for the triazole cyclization (entries 16 and 17).
The generality of the presented procedure was established by the reaction of various thiiranes bearing either electron-donating or electron-withdrawing substituents, as well as cyclic thiiranes, with phenylacetylene and sodium azide in the presence of the MgFe2O4/Cu nanocomposite under the optimized conditions. The results are summarized in Table 2. In addition, the reaction of other alkynes, such as aliphatic terminal alkynes and 4-methoxyphenyl acetylene, with styrene episulfide was also examined under the mentioned conditions (entries 9-11). All reactions were carried out successfully within 2-4 h to give the triazoles in 80-96% yields.
Recycling of nano-MgFe2O4/Cu
The recycling of the green nanocatalyst was investigated under the optimized reaction conditions (Table 2, entry 1). The nanoparticles were easily collected by applying an external magnetic field, washed with ethyl acetate and distilled water and, after drying, reused several times without any significant loss of activity (Fig. 7). The structure of the recovered catalyst was confirmed using VSM, FESEM, XRD and TEM analyses after five runs (Fig. 8).
The extent of Mg, Fe and Cu leaching during the catalytic reaction was studied by ICP-OES analysis of the supernatant liquid after removal of the catalyst, and the result showed no presence of Mg, Fe or Cu in the supernatant liquid.
Mercury poisoning and hot filtration tests
In order to confirm the heterogeneity of the catalyst, both hot filtration and mercury poisoning tests were performed. Accordingly, filtration of the catalyst was carried out after 30 min at 100 °C and the filtrate was allowed to react for an additional 2 hours, but due to the absence of copper the reaction did not take place and no cyclization occurred.
The Hg poisoning test was conducted for the model reaction under the optimum conditions as follows: the one-pot reaction of styrene episulfide (1 mmol), phenylacetylene (1 mmol), sodium azide (1.2 mmol) and MgFe2O4/Cu nanocatalyst (0.02 g) was carried out in the presence of excess Hg(0) (1.89 g, 9.43 mmol, 277 equiv.) under intense stirring at 60 °C for 3 h in water. The suppression of catalysis by mercury is evidence for a heterogeneous catalyst.58 The added Hg(0) poisoned and inactivated the MgFe2O4/Cu heterogeneous catalyst by amalgamating the metal catalyst or adsorbing on its surface, and no product was obtained after 3 h.
3.5. The proposed mechanism for the synthesis of β-thiol-1,4-disubstituted-1,2,3-triazoles catalyzed by MgFe2O4/Cu

The proposed mechanism for the synthesis of β-thiol-1,4-disubstituted-1,2,3-triazoles may comprise two possible pathways (A and B). MgFe2O4/Cu nanoparticles catalyze both the cleavage of the thiirane ring and the 1,3-dipolar cycloaddition leading to the formation of triazoles.27,28 First, as a result of a noncovalent interaction, a bond is formed between the metal and the azide, followed by activation of the thiirane ring by the MgFe2O4/Cu catalyst. Then, ring opening of the thiirane is accomplished through azide transfer from the catalyst, which leads to the formation of 2-azido-2-arylethanethiol (pathway A). Thiirane rings carrying aryl groups, owing to the stability of the benzylic carbocation, prefer to be opened from the more hindered position via an SN1 type of mechanism (α-cleavage); nevertheless, the regioselective ring opening of thiiranes bearing alkyl and allyl substituents by azide is strongly preferred from the less hindered carbon of the thiirane via an SN2 type of mechanism (β-cleavage). In order to verify the catalytic role of MgFe2O4/Cu in pathway A, styrene episulfide and sodium azide were reacted in the absence of catalyst, and it was found that only a trace amount of 2-azido-2-arylethanethiol was generated. For pathway A, the consumption of styrene episulfide and sodium azide and also the generation of the 2-azido-2-phenylethanethiol intermediate were monitored by gas chromatography (GC) analysis and thin layer chromatography (TLC) runs of the reaction mixture, and we found that 2-azido-2-arylethanethiol forms easily (within the first 30 min of the reaction) and that the rate-determining step (RDS) is the 1,3-dipolar cycloaddition step. 2-Azido-2-arylethanethiol was characterized by its FT-IR spectrum and the stretching frequency of 2097 cm−1 related to the azide (the FT-IR spectrum of 2-azido-2-phenylethanethiol is provided in the ESI Section†).
Pathway B shows the insertion of copper into the C-H bond of phenylacetylene and the generation of intermediate (I), which accelerates the [3+2] cycloaddition between the azide and the carbon-carbon triple bond of the in situ produced intermediate (II) to afford the Cu-C-triazole (IV). The consumption of phenylacetylene and the disappearance of the 2-azido-2-arylethanethiol intermediate were monitored by GC analysis and TLC runs of the reaction mixture. Eventually, protonolysis of the Cu-C bond of intermediate (IV) by the aqueous medium gives the corresponding β-thiol-1,4-disubstituted-1,2,3-triazole (V) (Scheme 4).
In order to evaluate the accuracy of the proposed RDS, 2-azido-2-phenylethanethiol was separately reacted with phenylacetylene in the presence of the MgFe2O4/Cu nanocatalyst. The formation of the corresponding 1,2,3-triazole was monitored via GC analysis and TLC runs, and it was observed that the reaction was complete within 2 h. This result demonstrates that pathway B determines the reaction rate.
To confirm the formation of the acetylide intermediate (I), phenylacetylene and the MgFe2O4/Cu nanocatalyst were mixed in a separate experiment in aqueous medium, and the pH of the water used as solvent was monitored. A 0.6-unit decrease in pH after 20 min was detected, indicative of terminal proton release to the water due to the initial coordination of phenylacetylene to copper to form the acetylide intermediate (I).
Conclusions
In summary, in this research, the magnetic nanocomposite MgFe2O4/Cu has been easily prepared through a solid-state procedure and was then characterized by different techniques such as VSM, FESEM, TEM, XRD, EDS and FT-IR. This novel composite has been utilized as an efficient catalyst for the one-pot synthesis of β-thiol-1,4-disubstituted-1,2,3-triazoles as new products via the three-component reaction of sodium azide, terminal alkynes and various thiiranes in water. The method reported is completely new due to the novelty of both the catalyst and the triazole products. Furthermore, perfect regioselectivity, a simple process, high product yields, short reaction times, the use of an eco-friendly solvent, and easy separation and recycling of the catalyst are significant advantages of this proposed procedure.
Conflicts of interest
There are no conflicts to declare.
Comprehensive identification and analysis of human accelerated regulatory DNA
It has long been hypothesized that changes in gene regulation have played an important role in human evolution, but regulatory DNA has been much more difficult to study compared with protein-coding regions. Recent large-scale studies have created genome-scale catalogs of DNase I hypersensitive sites (DHSs), which demark potentially functional regulatory DNA. To better define regulatory DNA that has been subject to human-specific adaptive evolution, we performed comprehensive evolutionary and population genetics analyses on over 18 million DHSs discovered in 130 cell types. We identified 524 DHSs that are conserved in nonhuman primates but accelerated in the human lineage (haDHS), and estimate that 70% of substitutions in haDHSs are attributable to positive selection. Through extensive computational and experimental analyses, we demonstrate that haDHSs are often active in brain or neuronal cell types; play an important role in regulating the expression of developmentally important genes, including many transcription factors such as SOX6, POU3F2, and HOX genes; and identify striking examples of adaptive regulatory evolution that may have contributed to human-specific phenotypes. More generally, our results reveal new insights into conserved and adaptive regulatory DNA in humans and refine the set of genomic substrates that distinguish humans from their closest living primate relatives.
[Supplemental material is available for this article.] A number of traits distinguish humans from our closest primate relatives, including bipedalism, increased cognition, and complex language and social systems (for review, see O'Bleness et al. 2012). To date, the genetic basis of human-specific phenotypes remains largely unknown, complicated by the difficulties in distinguishing between phenotypically significant and benign variation. Thus, evolutionary changes in protein-coding sequences have received considerable attention, as the phenotypic consequences of these mutations have historically been easier to interpret (Clark et al. 2003; Stedman et al. 2004; Chimpanzee Sequencing and Analysis Consortium 2005; Nielsen et al. 2005; Arbiza et al. 2006; Dennis et al. 2012; Sudmant et al. 2013). Although protein-coding evolution has clearly played a role in human evolution, proteins account for only ∼1.5% of the human genome, most of which exhibit high sequence similarity between humans and chimpanzees (Chimpanzee Sequencing and Analysis Consortium 2005). However, between ∼2.5% and 15% of the human genome is estimated to be functionally constrained (Chinwalla et al. 2002; Lunter et al. 2006; Asthana et al. 2007; Meader et al. 2010; Ponting and Hardison 2011). Thus, the mutational target size of noncoding DNA is considerably larger than that of protein-coding sequences, suggesting that regulatory DNA is also an important substrate of evolutionary change, as originally proposed four decades ago (Britten and Davidson 1969; King and Wilson 1975). In some cases, detailed studies of individual genes have revealed human-specific regulatory evolution, such as in FOXP2, which is thought to have influenced traits related to speech and language in humans (Enard et al. 2002).
Nonetheless, interpreting patterns of interspecific divergence and intraspecific polymorphism in noncoding DNA has been considerably more challenging compared with protein-coding sequences. An elegant and powerful way to identify evolutionary changes in noncoding DNA of potential significance, originally described by Pollard et al. (2006b) and extensively used thereafter (Pollard et al. 2006a,b; Prabhakar et al. 2006; Kim and Pritchard 2007; Bush and Lahn 2008; McLean et al. 2010; Lindblad-Toh et al. 2011; Pertea et al. 2011), focuses on the discovery of sequences that are rapidly evolving or lost on the human lineage but that are otherwise phylogenetically conserved and thus likely functional. This approach has led to the discovery of several regions with species-specific enhancer activity (Prabhakar et al. 2008; Capra et al. 2013; Kamm et al. 2013), as well as human-specific deletion of regulatory DNA (McLean et al. 2011).
However, phylogenetic conservation is an imperfect proxy for function, particularly for noncoding regulatory sequences, which can exhibit significantly high rates of turnover (Dermitzakis and Clark 2002; Wray et al. 2003; Villar et al. 2014). To more directly identify regulatory DNA, recent studies such as the ENCODE (The ENCODE Project Consortium 2012) and Roadmap Epigenomics Projects (Bernstein et al. 2010) have created genome-scale maps of DNase I hypersensitive sites (DHSs) in a large number of cell types. DNase I preferentially cleaves regions of open and active DNA, making it a powerful assay to identify regulatory elements, regardless of their specific function (Galas and Schmitz 1978; Dorschner et al. 2004). Although high-resolution maps of DHSs now exist, not all experimentally defined regulatory elements are expected to be functionally or phenotypically significant (Eddy 2012; Doolittle 2013; Graur et al. 2013; Niu and Jiang 2013).
Thus, we hypothesized that the synergistic combination of comparative and functional genomics would facilitate the high-resolution identification of conserved and human accelerated regulatory sequences. Here we describe the genome-wide architecture and characteristics of 113,577 DHSs that are conserved in primates and 524 DHSs that exhibit significantly accelerated rates of evolution in the human lineage (haDHSs). We estimate that ∼70% of substitutions within haDHSs are attributable to positive selection; we experimentally validated a large number of elements; and we performed extensive bioinformatics analyses that integrate information across multiple functional genomics data sets to better understand the functional and biological characteristics of haDHSs.
Results
Framework for identifying conserved and human accelerated regulatory DNA

To identify human accelerated regulatory DNA, we leveraged experimentally defined maps of DHSs from 130 cell types identified in the ENCODE and Roadmap Epigenomics Projects (Supplemental Table 1). After merging DHSs across cell types into 2,093,197 distinct loci (median size = 290 bp, SD = 159 bp), we used a whole-genome alignment of six primates from the EPO pipeline (Paten et al. 2008) to obtain separate alignments for each DHS, using strict filtering criteria for alignment quality. We performed two likelihood ratio tests to distinguish between DHSs that are evolving neutrally, are conserved among primates, or are conserved among primates but accelerated in the human lineage (Fig. 1). Specifically, we used a maximum likelihood test (Pollard et al. 2010) to first identify 113,577 DHSs that exhibit significant evolutionary constraint across primates, which manifests as regions of low sequence divergence compared with carefully defined, putatively neutral flanking sequence (FDR = 0.01) (Fig. 1). Next, for DHSs that are conserved in primates, we performed a second likelihood ratio test (Pollard et al. 2010) and identified 524 regulatory sequences that have experienced a significant acceleration of evolution in the human lineage and therefore exhibit an excess of human-specific substitutions (FDR = 0.05) (Fig. 1; Supplemental Table 2). Importantly, to avoid biasing ourselves against identifying human acceleration, we excluded the human sequence in the first test for conservation.
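To make the two-step testing scheme concrete, the sketch below shows the generic machinery of a nested likelihood ratio test followed by Benjamini-Hochberg FDR control. It is a schematic illustration under simplifying assumptions (a chi-square null with one extra rate parameter), not the phyloP code used in the study, and the log-likelihoods are assumed to come from an external phylogenetic model fit:

```python
import numpy as np
from scipy.stats import chi2

def lrt_pvalue(loglik_null: float, loglik_alt: float, df: int = 1) -> float:
    """P-value for nested models; `alt` adds `df` extra rate parameters."""
    stat = 2.0 * (loglik_alt - loglik_null)
    return float(chi2.sf(max(stat, 0.0), df))

def benjamini_hochberg(pvals: np.ndarray, fdr: float) -> np.ndarray:
    """Boolean mask of discoveries at the given false discovery rate."""
    order = np.argsort(pvals)
    m = len(pvals)
    passed = pvals[order] <= fdr * np.arange(1, m + 1) / m
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

# Test 1 (FDR = 0.01): neutral model vs. primate conservation, with the
# human sequence excluded. Test 2 (FDR = 0.05), on conserved DHSs only:
# conservation vs. conservation plus an extra human-branch rate parameter.
```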
Characteristics of primate conserved regulatory DNA
We first characterized the set of DHSs conserved across primates. Approximately 93% of conserved DHSs overlap a phastCons conserved element, but many also contain short segments of less conserved sequence, making them overall less conserved than those identified by phastCons (Fig. 2A). We hypothesize that these less conserved sequences interspersed within DHSs may facilitate the rapid acquisition of novel transcription factor binding sites, as these regions are already actionable (i.e., accessible to proteins) and poised to evolve new functions compared with nonconserved sequences outside of DHSs.
Patterns of conservation varied significantly across cell type categories (Kruskal-Wallis test; P = 5.08 × 10−8; Methods) (Fig. 2B), ranging from 5.0% of DHSs in chronic lymphocytic leukemia cells to 20.4% in fetal brain cells. DHSs active in fetal cell types showed the highest levels of conservation, consistent with the observation that gene regulation in developmental pathways is highly conserved (Lowe et al. 2011) […] activation of chromatin (Vernot et al. 2012). These patterns are also observed in cell-type-specific DHSs (Supplemental Fig. 1a).
Genomic landscape of human accelerated regulatory DNA
We next investigated the set of haDHSs. Overall, these elements have evolved at approximately four times the neutral rate in the human lineage, while other primate branches have evolved at less than half of the neutral rate (Fig. 3A). In total, 70 haDHSs overlap previously identified human accelerated elements (HAEs) (Pollard et al. 2006b; Prabhakar et al. 2006; Bush and Lahn 2008; Lindblad-Toh et al. 2011), which is highly significant (permutation P < 1 × 10−5) (Fig. 3B). Thus, by focusing on experimentally defined regulatory DNA, we identify 454 novel loci that show accelerated rates of evolution in the human lineage, increasing the set of 1621 merged HAEs by 28%. The number of cell types each haDHS was active in varied substantially (Supplemental Fig. 2). Notably, 64% (337) of haDHSs were identified in at least one brain or neural cell type, and 88.5% (464) were active in at least one developing fetal tissue. In comparison to conserved nonaccelerated DHSs, haDHSs are significantly enriched in noncoding regions (P = 1.16 × 10−7, hypergeometric test) (Fig. 3C). These data are consistent with the hypothesis that noncoding regions are more free to evolve and acquire new functions. Furthermore, we observed eight regions where four or more haDHSs were clustered within a 1-Mb window, suggesting coordinated changes in multiple regulatory elements (Fig. 3D). For instance, TENM3, which is required for establishing neuronal connections in vertebrate retinal ganglion cells (Antinucci et al. 2013; Merlin et al. 2013), is the nearest gene to five haDHSs, four of which are active in retinal pigment epithelial cells (Fig. 3D, inset).
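The permutation P-value quoted above for the HAE overlap can be computed with a standard randomization scheme. The sketch below is a simplified version that places equal-width elements uniformly at random along a single chromosome; the study's actual null model may be more constrained:

```python
import numpy as np

def overlap_count(starts_a, ends_a, intervals_b):
    """Number of A intervals overlapping any B interval (simple scan)."""
    return sum(
        any(s < eb and sb < e for sb, eb in intervals_b)
        for s, e in zip(starts_a, ends_a)
    )

def permutation_pvalue(observed, widths, chrom_len, intervals_b,
                       n_perm=10_000, seed=0):
    """P(overlap >= observed) under random placement of the elements."""
    rng = np.random.default_rng(seed)
    widths = np.asarray(widths)
    hits = 0
    for _ in range(n_perm):
        starts = rng.integers(0, chrom_len - widths)  # one start per element
        if overlap_count(starts, starts + widths, intervals_b) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one correction avoids P = 0
```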
Adaptive evolution is the primary determinant of rate acceleration in haDHSs
Human acceleration can result from both adaptive and nonadaptive forces (Haygood et al. 2007; Taylor et al. 2008; Kostka et al. 2012). We therefore performed a number of analyses to better understand the mechanisms governing rate acceleration of haDHSs. First, to distinguish between relaxation of constraint and true rate acceleration on the human lineage, we applied a novel permutation test (Supplemental Text) and found that 91.8% of haDHSs were evolving faster than their surrounding neutral sequence, suggesting that most haDHSs are not the consequence of relaxed functional constraint. In contrast, it has been estimated that only 55% of HAEs exceed the neutral rate (Kostka et al. 2012). Second, we investigated the contribution of GC-biased gene conversion (GC-BGC) to our data, which influences rate acceleration of HAEs (Pollard et al. 2006a; Galtier and Duret 2007; Duret and Galtier 2009; Kostka et al. 2012), and found that 9.7% (51 haDHSs) show significant evidence of GC-BGC (Supplemental Text; Supplemental Fig. 3a). Finally, we investigated patterns of human-macaque divergence around haDHSs and found that local increases in mutation rate cannot explain rate acceleration in haDHSs, although mutation rate heterogeneity has influenced previous inferences of HAEs (Supplemental Text; Supplemental Fig. 3b; Pollard et al. 2006b; Prabhakar et al. 2006; Bush and Lahn 2008; Lindblad-Toh et al. 2011).
To more directly quantify the proportion of substitutions in haDHSs that can be attributed to positive selection, we used the McDonald-Kreitman framework and compared levels of polymorphism and divergence at haDHSs. Specifically, we used polymorphism data from the 1000 Genomes Project (The 1000 Genomes Project Consortium 2012) and calculated the statistic α, an estimate of the proportion of substitutions fixed by adaptive evolution. As a control, we first estimated α in conserved, nonaccelerated DHSs, which as expected was zero (95% CI −0.02 to 0.007) (Fig. 4A; Supplemental Fig. 4a). We estimate that 70.1% (95% CI 65.8%-73.7%) of substitutions can be attributed to positive selection in haDHSs (Fig. 4A), and this number is robust to mutation rate heterogeneity in the presence of complex demographic history (Supplemental Text; Supplemental Fig. 4b). To evaluate the sensitivity of α to GC-BGC, we removed all weak to strong substitutions in haDHSs and repeated the analysis. Although estimates of α decreased for haDHSs subject to GC-BGC, α increased slightly for other haDHSs, and thus the overall estimate remained almost identical (69.9%, 95% CI 64.2%-75.2%) (Fig. 4A). Of the remaining 29.9% of substitutions in haDHSs not accounted for by positive selection, we estimate 9.0% are expected without human-specific rate acceleration.
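The α statistic itself is simple to compute from the four McDonald-Kreitman counts (divergence and polymorphism in the haDHS class and in the flanking neutral class). The sketch below uses hypothetical counts chosen only to land near the reported 70%, and a simplified Poisson resampling for the confidence interval rather than the study's exact bootstrap procedure:

```python
import numpy as np

def alpha_mk(d_test, p_test, d_neutral, p_neutral):
    """Proportion of substitutions in the test class fixed by positive
    selection, using flanking neutral sites as the reference class."""
    return 1.0 - (d_neutral * p_test) / (d_test * p_neutral)

def alpha_ci(counts, n_boot=10_000, seed=0):
    """Approximate percentile 95% CI by Poisson-resampling the counts."""
    rng = np.random.default_rng(seed)
    draws = rng.poisson(np.asarray(counts), size=(n_boot, 4)).astype(float)
    draws = draws[(draws[:, 0] > 0) & (draws[:, 3] > 0)]  # avoid divide-by-zero
    a = alpha_mk(draws[:, 0], draws[:, 1], draws[:, 2], draws[:, 3])
    return np.percentile(a, [2.5, 97.5])

counts = (500, 60, 400, 160)  # hypothetical (d_test, p_test, d_neutral, p_neutral)
print(alpha_mk(*counts))      # 0.70
print(alpha_ci(counts))       # approximate 95% CI around that estimate
```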
haDHSs are developmental enhancers that exhibit lineage-specific activity

We performed extensive experimental studies to better understand the functional significance and potential regulatory roles of haDHSs. We found that nine of our haDHSs had previously been tested for in vivo enhancer activity using a transgenic mouse assay (Visel et al. 2007), and we tested nine additional loci. Overall, 13 out of 18 haDHSs were positive for enhancer activity in one or more tissues at the single time point assayed (e11.5) (Supplemental Table 3). These 13 haDHSs were active in a wide range of tissues (Fig. 5A), with the midbrain (n = 7), forebrain (n = 4), branchial arch (n = 4), and limb (n = 4) as the most frequent tissues showing enhancer activity. Patterns of enhancer activity varied from very broad to very tissue specific (Fig. 5A). One interesting example is located on 11p15 and is only active in the branchial arch (Fig. 5A). This haDHS is located in an intron of SOX6, and as we describe below, we find evidence that it contacts the SOX6 promoter. SOX6 is a developmental transcription factor involved in brain, bone, and cartilage development (Lefebvre et al. 1998). Notably, the branchial arch develops into several structures, including the jaw and larynx (Graham 2003), making this haDHS an intriguing candidate that potentially influences traits such as facial morphology and speech.

We also performed luciferase assays to functionally test haDHSs in a more high-throughput manner. Specifically, we experimentally tested 37 haDHSs in SK-N-MC cells (derived from a neuroepithelioma) and 20 haDHSs in IMR90 cells (fetal lung fibroblasts) by assaying for differences in regulatory activity of the human and chimpanzee orthologs using luciferase reporters. We chose SK-N-MC cells as a proxy for other neural cell types, and we chose IMR90 cells because many haDHSs were active in this cell type. Of the 37 pairs of haDHSs tested in SK-N-MC, 14 showed significant enhancer activity (P < 0.05) […] one (20%) of which exhibited significant differences in expression between the human and chimpanzee haplotypes. Human substitutions resulted in lower expression in four of the six haDHSs with significant differences in reporter activity between human and chimpanzee sequences (Fig. 5B,C). The haDHS with the largest difference in regulatory activity between humans and chimpanzees (2.32-fold increase in chimpanzees; P = 0.004) had five human-specific substitutions that overlapped several transcription factor binding motifs, and was located 186 base pairs upstream of RNF145, a zinc finger gene that is associated with variation in hematological traits (Fig. 5D; Soranzo et al. 2009). Although this haDHS is likely part of the promoter for RNF145, as described below, it may target several other genes, including IL12B and CLINT1.
Leveraging chromatin contact data to infer putative regulatory targets of haDHSs

Delineating the set of target genes that haDHSs regulate is key to determining their biological consequences and role in human evolution. However, identifying the targets of regulatory sequences poses a significant challenge. Enhancers often regulate distal genes, and in some cases these may not be the closest genes to the enhancer (van Arensbergen et al. 2014). Chromatin conformation technologies such as Hi-C (Lieberman-Aiden et al. 2009) identify physical contacts between distinct segments of DNA and have been shown to identify long-range interactions between promoters and enhancers (Sanyal et al. 2012). We leveraged high-coverage Hi-C data from human IMR90 fibroblast cells to identify putative regulatory targets of haDHSs using a rigorous statistical method (Ay et al. 2014). We identified 9000 significant contacts for the 524 haDHSs at 40-kb resolution (FDR = 0.01) (Fig. 6A). On average, haDHSs overlap transcription start sites for 3.5 genes, highlighting the potential benefit of using more sophisticated strategies than simply identifying the nearest gene when inferring regulatory targets. We also found that haDHSs contact fewer genes on average than conserved DHSs (permutation P = 0.004), suggesting adaptive regulation is more likely to occur when pleiotropic effects are minimized. Furthermore, 119 haDHSs contact one or more transcription factors, and in total 132 distinct transcription factors are contacted by haDHSs. These include SOX6 (see Fig. 5A), RUNX2, and multiple HOX genes, all of which play important roles in development.
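Once significant contacts have been called at 40-kb resolution, linking each haDHS to candidate target genes reduces to a lookup of gene transcription start sites in the contacted bins. The sketch below illustrates this join; the element name and genomic coordinates are hypothetical placeholders:

```python
from collections import defaultdict

BIN = 40_000  # Hi-C bin size used for contact calling

def bin_of(chrom: str, pos: int) -> tuple:
    return (chrom, pos // BIN)

def genes_contacted(contacts, tss_index):
    """contacts: haDHS name -> set of (chrom, bin) it significantly contacts.
    tss_index: (chrom, bin) -> set of genes with a TSS in that bin.
    Returns haDHS name -> set of putative target genes."""
    targets = defaultdict(set)
    for dhs, bins in contacts.items():
        for key in bins:
            targets[dhs] |= tss_index.get(key, set())
    return dict(targets)

# Hypothetical example: an intronic haDHS contacting a bin that holds
# the SOX6 transcription start site.
tss_index = {bin_of("chr11", 16_000_000): {"SOX6"}}
contacts = {"haDHS_example": {bin_of("chr11", 16_000_000),
                              bin_of("chr11", 16_400_000)}}
print(genes_contacted(contacts, tss_index))  # {'haDHS_example': {'SOX6'}}
```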
We performed a Gene Ontology (GO) enrichment analysis on the set of genes whose transcriptional start sites are contacted by haDHSs. Because haDHSs are a subset of conserved DHSs, we first performed the analysis on conserved DHS contact regions compared with the genomic background. We found that conserved DHS contacts are highly enriched for developmental genes, including those involved in neuron development (Supplemental Table 5), consistent with previous observations about conserved noncoding sequence (Lowe et al. 2011). Next, we tested for GO enrichments in haDHS contact genes using conserved DHS contact genes as the background and found a significant enrichment for developmental terms, including brain and neuron development (corrected P < 0.05) (Supplemental Table 5). These results show that haDHS target genes are enriched for developmentally and neuronally important genes relative to conserved DHSs, which themselves are already highly enriched for these categories.
Three examples of haDHSs and their putative target regions are shown in Figure 6, B through D. All contain transcription factor motifs that are dramatically strengthened or weakened by human-specific substitutions. These haDHSs are likely targets of adaptive evolution as they show no evidence of GC-BGC and are evolving faster than surrounding neutral sequence. Moreover, all three are also active in only a small number of neuronal cell types, such as fetal brain and fetal spinal cord, indicating a potential role in human-specific cognitive phenotypes. Of particular interest is an haDHS on Chromosome 6 that lies in a gene desert 300 kb from POU3F2, a transcription factor that regulates FOXP2 in a human-specific manner (Fig. 6C; Maricic et al. 2013). Two of the substitutions in this haDHS strengthen a putative YY1 transcription factor binding site (Fig. 6C), which is known to mediate long-distance DNA interactions (Atchison 2014).
Discussion
Advances in DNA sequencing technology have led to a vast catalog of the variation in the genomes and epigenomes across many primates. However, interpreting the evolutionary, functional, and phenotypic significance of these differences and identifying the precise genetic changes that are causally related to human-specific traits remain a formidable challenge. Here, we have leveraged extensive maps of experimentally defined regulatory DNA and comprehensive comparative and population genomics analyses to identify and delimit the characteristics of conserved and human accelerated regulatory DNA. In total, we discovered 113,577 DHSs conserved in primates, 524 of which exhibit significant rates of acceleration in the human lineage.
We found marked heterogeneity in the distribution of conserved DHSs across cell types (Fig. 2B), with fetal cell types showing the largest amount of constraint. Conversely, DHSs in malignant cell types exhibited the lowest levels of conservation, an observation that may provide insight into cancer biology. For example, chromatin remodeling is disrupted in many cancers (Morin et al. 2010; Jiao et al. 2011). Previous work has shown that DHSs in malignant cell types are more likely to be cell-type specific and have levels of nucleotide diversity consistent with neutral evolution (Vernot et al. 2012). Thus, these observations combined with our results that DHSs in malignant cell types have low levels of evolutionary conservation suggest that many malignant DHSs may reflect ectopic chromatin activation.

Figure 5. (A) Rows in the table correspond to each embryonic region, and numbers in parentheses indicate how many of the haDHSs were positive in the region indicated. Columns represent the 13 haDHSs that showed enhancer activity, and gray boxes indicate what tissues the haDHS was active in. Three examples of positive assays are shown above, along with a schematic depicting their location relative to nearby genes. The haDHS tested is shown in red, and other haDHSs in the region are shown in black. (B,C) Results from luciferase assays for haDHSs that showed significant enhancer activity in SK-N-MC and IMR90 cells, respectively. Dotted lines indicate the mean relative expression from the negative controls, and the gray box indicates haDHS human and chimpanzee sequences that showed significantly different activity (P < 0.05). Bars, SE. Asterisks below each plot indicate haDHSs that were active in SK-N-MC or IMR90 (other haDHSs were active in similar cell types, such as fetal brain or NHLF). (D) A schematic of the region surrounding haDHS12, which had the largest difference in enhancer activity. The haDHS is located just upstream of the alternatively spliced gene RNF145. Red substitutions are weak to strong, and all other substitutions are colored in blue. PhyloP scores are also shown across the region. This DHS was partitioned prior to statistical testing into two distinct DHSs. The red portion is human accelerated, and the black portion is not.
Our results also provide new insights into human-specific adaptive regulatory evolution. Of the 524 haDHSs that we identified, 454 (87%) are novel and were not detected in previous studies of HAEs (Pollard et al. 2006b; Prabhakar et al. 2006; Bush and Lahn 2008; Lindblad-Toh et al. 2011). The haDHSs that we discovered are significantly less affected by GC-biased gene conversion and relaxation of functional constraint and have a higher proportion of substitutions that are estimated to be due to positive selection compared with previous catalogs of HAEs (Supplemental Fig. 3). We hypothesize these differences are largely the consequence of our study design that synergistically integrated experimentally defined regulatory sequences with phylogenetic conservation, which both focused our analyses to a subset of the genome enriched for functionally important sequence and limited the influence of confounding evolutionary forces. To support this hypothesis, we find that a higher proportion of haDHSs overlap human-specific enhancer marks in the cortex (Reilly et al. 2015) than HAEs (P = 7.62 × 10^−5; Fisher's exact test). Large catalogs of experimentally defined regulatory DNA did not exist when HAEs were initially discovered, and we anticipate that the continued development of functional genomics technology will enable even more refined evolutionary analyses than described here.
To help interpret the functional and potential phenotypic significance of haDHSs, we performed extensive bioinformatics analyses and experimental validations. We found that haDHSs were significantly enriched in noncoding regions; a large proportion of experimentally tested elements showed enhancer activity; and many were active in brain or neural cell types and during fetal development. We also used Hi-C data to inform inferences of putative target genes that are regulated by haDHSs. These analyses revealed that haDHSs contact the transcriptional start sites of 132 transcription factors, suggesting that fine-tuning regulatory networks by tinkering with the sequences that govern the expression of regulatory proteins has been an important target of positive selection during human evolution. A number of transcription factors contacted by haDHSs are strong candidates for influencing hominin- or human-specific traits. For example, RUNX2 has been hypothesized to influence differential bone morphology in humans and Neanderthals (Green et al. 2010), and HOX genes play myriad roles in development. Another intriguing transcription factor contacted by an haDHS is POU3F2, which has recently been shown to regulate FOXP2 in a human-specific manner (Maricic et al. 2013). FOXP2 itself is a transcription factor that has previously been hypothesized to play a role in speech and language in humans (Enard et al. 2002). Our findings suggest that there may be additional levels of human-specific FOXP2 regulation via differential expression of POU3F2. Furthermore, in addition to transcription factors, we identified other genes that are of significant biological interest. For instance, PEX2 is contacted by an haDHS with two substitutions that create a SMAD4 motif (Fig. 6B). Mutations in PEX2 can lead to Zellweger syndrome, characterized by a constellation of features, including impaired brain development and craniofacial abnormalities (Steinberg et al. 2006).

Figure 6. Hi-C chromatin conformation data identify putative regulatory targets of haDHSs. (A) Contacts are shown for all haDHSs, and each row indicates the contacts for one haDHS, which is in the center. Black boxes indicate one 40-kb contact region. The schematic above illustrates how chromatin conformation information gets translated into the Hi-C contact data. Blue dots represent contact regions; the red dot, an haDHS. (B-D) Three example haDHSs are shown with their surrounding genes and a predicted transcription factor binding site that is affected by a human-specific mutation(s). Genes that contact the haDHSs in Hi-C data are highlighted in blue, with arrows pointing to their transcription start sites. Examples B and C depict substitutions that create transcription factor binding sites, while D is a binding site that is predicted to be lost in humans. Human-specific substitutions that go from a weak to a strong base are shown in red, while all other substitutions are shown in blue. Bar plots, FIMO (Grant et al. 2011) log likelihood ratios of motif calls in each species.
Our study has a number of important limitations. For example, the DHSs we used were ascertained only in human tissues. Although experimentally defined regulatory DNA has been generated in a limited number of nonhuman primates for a limited number of tissues (Shibata et al. 2012; Cotney et al. 2013), a more systematic and comprehensive effort would be of considerable value in understanding the evolution of regulatory sequences. Furthermore, we did not consider additional types of genetic variation, such as structural variation, that may influence human-specific phenotypes (Dennis et al. 2012; Sudmant et al. 2013). In addition, although there is evidence that chromatin conformation is relatively stable across cell types (Dixon et al. 2012), it would be of considerable interest to generate Hi-C or related data for a more comprehensive panel of cell types. These data, combined with gene expression profiles from the same tissue types, would provide further insights into the target genes regulated by haDHSs. Finally, the transgenic mouse and luciferase assays that we performed are only a first step in the experimental characterization of these and other elements that potentially contribute to human-specific phenotypes. Because the activity of a regulatory element may be highly cell-type- and developmental time point-specific, and may depend on the coordination of additional regulatory elements, more extensive in vivo experiments would be fruitful. Nonetheless, associating particular haDHSs with specific phenotypes is complicated by the fact that the putative causal alleles are fixed in humans and thus refractory to traditional genetic mapping methods. However, if mutations at these sites are not lethal, given the current global population size of humans, such mutations are expected to exist, and their discovery could provide valuable phenotypic insights.
In short, our data provide substantial new insights into sequences that have experienced human-specific adaptive regulatory evolution, narrow the set of genetic changes that may influence uniquely human phenotype, and facilitate more detailed experimental and animal models of the most promising human-specific substitutions.
Ultimately, delineating the suite of genetic changes that have causally influenced human-specific phenotypes will provide insight into the evolutionary and molecular mechanisms that shaped our species' evolutionary trajectory.
Methods

DNase I hypersensitivity sites
We used DNase I hypersensitivity peaks previously published as part of the ENCODE (The ENCODE Project Consortium 2012; Maurano et al. 2012) and Roadmap Epigenomics (Bernstein et al. 2010) Projects. A list of cell types is available in Supplemental Table 1. All peaks were called using the hotspot algorithm (John et al. 2011) and represent the 150-bp region of maximal DNase I signal. We merged DHSs across cell types using the BEDOPS package. Many DHSs were very long after merging (>2000 bp), probably because they consist of distinct regulatory elements located in close succession along the genome. To avoid analyzing distinct, potentially independently evolving regulatory elements as a single unit, we segmented merged DHSs according to the number of cell types each region was active in (Supplemental Text).
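As an illustration of the merging step (the study itself used BEDOPS rather than custom code), a minimal Python sketch, assuming peaks are (chrom, start, end) tuples in 0-based half-open coordinates:

from itertools import groupby

def merge_peaks(peaks):
    # Merge overlapping or book-ended peaks within each chromosome.
    merged = []
    for chrom, group in groupby(sorted(peaks), key=lambda p: p[0]):
        cur_start = cur_end = None
        for _, start, end in group:
            if cur_start is None:
                cur_start, cur_end = start, end
            elif start <= cur_end:        # overlapping or directly adjacent
                cur_end = max(cur_end, end)
            else:
                merged.append((chrom, cur_start, cur_end))
                cur_start, cur_end = start, end
        merged.append((chrom, cur_start, cur_end))
    return merged

print(merge_peaks([("chr1", 100, 250), ("chr1", 200, 400),
                   ("chr1", 1000, 1150)]))
# [('chr1', 100, 400), ('chr1', 1000, 1150)]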
Primate alignments
We downloaded the six-primate EPO alignment from Ensembl version 70. Using this alignment, we obtained an alignment for each DHS and the surrounding 50 kb of sequence. We masked all sites that were polymorphic in the 1000 Genomes Project (The 1000 Genomes Project Consortium 2012) integrated phase 1 data (March 2012) at <95% allele frequency, all repeat-masked bases (lowercase markup in the EPO alignment), and all sites that were part of a CpG in any species in the alignment. In the surrounding 50 kb, we additionally masked all segmental duplications (UCSC Table Browser), coding exons (UCSC RefSeq genes) padded by 10 bp in order to remove splice sites, promoters (500 bp upstream of transcription start sites), other DHSs, and phastCons Eutherian mammal and primate conserved elements (UCSC phyloP46way). This helped ensure that the 50-kb surrounding region was a more appropriate approximation of the neutral evolutionary model for each DHS. We filtered out any DHS in which (1) <90% of the bases remained unmasked in the DHS or (2) <15 kb remained unmasked in any of the six primates in the neutral region. Note that the EPO alignment is based on GRCh37 (hg19), and all subsequent analyses were done using GRCh37 coordinates. Given that we focus on conserved elements, which are by definition located in regions of the genome that are well resolved and alignable, we do not anticipate that realigning to GRCh38 would significantly affect our results.
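The two filtering thresholds lend themselves to a simple check; the function below is illustrative only, with hypothetical bookkeeping of unmasked base counts:

def passes_filters(dhs_unmasked, dhs_length, neutral_unmasked_by_species,
                   min_dhs_frac=0.90, min_neutral_bp=15_000):
    # Rule (1): at least 90% of DHS bases must remain unmasked.
    if dhs_unmasked / dhs_length < min_dhs_frac:
        return False
    # Rule (2): at least 15 kb of unmasked neutral flank in every primate.
    return all(bp >= min_neutral_bp
               for bp in neutral_unmasked_by_species.values())

print(passes_filters(145, 150, {"human": 32_000, "chimp": 14_500}))  # False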
Identifying conserved and accelerated DHSs
DHSs that passed filtering were tested for overall conservation along the primate lineage with software from the PHAST package (Pollard et al. 2010; Hubisz et al. 2011). For each DHS, we first ran phyloFit on the neutral alignment of the surrounding 50 kb with the parameters -nrates 4 -subst-mod SSREV -EM. We used the Newick tree provided with the six-primate alignment in Ensembl. The resulting file was used as the neutral model while running phyloP; phyloP was run with the parameters -method LRT -mode CON after removing human sequence from the alignment. DHSs that were conserved at an FDR of 1%, as determined with the Q-value package (http://github.com/jdstorey/qvalue) for R (R Core Team 2014), were then tested for human acceleration. For this test we used the same neutral model of evolution, this time using the parameters -method LRT -mode ACC -subtree homo_sapiens. DHSs significant for human acceleration at an FDR of 5% were considered in further analyses. We evaluated the accuracy of the FDR using a sampling approach (Supplemental Text).
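A schematic driver for these two PHAST calls might look as follows; the flags are exactly those reported above, but the file names, output handling, and the Benjamini-Hochberg stand-in for the R qvalue package are simplifications and placeholders:

import subprocess

def fit_neutral_model(neutral_aln, tree_file, out_root):
    # phyloFit with the parameters reported in the text.
    subprocess.run(
        ["phyloFit", "-nrates", "4", "-subst-mod", "SSREV", "-EM",
         "-tree", tree_file, "-out-root", out_root, neutral_aln],
        check=True)
    return out_root + ".mod"

def test_dhs(dhs_aln, neutral_mod, accel=False):
    # Conservation test (-mode CON) or human-acceleration test
    # (-mode ACC -subtree homo_sapiens), as in the text.
    cmd = ["phyloP", "-method", "LRT", "-mode", "ACC" if accel else "CON"]
    if accel:
        cmd += ["-subtree", "homo_sapiens"]
    cmd += [neutral_mod, dhs_aln]
    return subprocess.run(cmd, check=True, capture_output=True,
                          text=True).stdout

def bh_fdr(pvalues):
    # Benjamini-Hochberg adjusted p-values (stand-in for R's qvalue).
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    adjusted, running_min = [0.0] * m, 1.0
    for rank, i in reversed(list(enumerate(order, start=1))):
        running_min = min(running_min, pvalues[i] * m / rank)
        adjusted[i] = running_min
    return adjusted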
To determine the overall rate of evolution in the neutral regions compared with haDHSs, we first concatenated sequence from both sets of regions and then conducted the same set of tests on the regions as a whole. To determine how much faster the human branch in the haDHSs was evolving compared with the expected rate, we multiplied the estimated neutral human branch length by the estimated conservation scale factor and divided the actual haDHS human branch length by this expected number.
Distribution of DHSs across cell types and genomic location
To determine how conserved and accelerated DHSs were distributed across cell types, we used the bedmap program from the BEDOPS suite to map DHSs from individual cell types onto the set of merged DHSs. We then calculated the proportion of DHSs in each cell type that were called as conserved and the proportion of conserved DHSs that were also called as accelerated (Fig. 2B; Supplemental Fig. 1a-c).
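A minimal sketch of this tally, with the interval mapping that bedmap performed reduced to set membership over merged-DHS identifiers (a simplification, not the original code):

def conservation_by_cell_type(cell_type_dhs, conserved_ids, accelerated_ids):
    # cell_type_dhs: {cell type -> set of merged-DHS ids active in it}
    stats = {}
    for cell_type, ids in cell_type_dhs.items():
        conserved = ids & conserved_ids
        frac_conserved = len(conserved) / len(ids) if ids else 0.0
        frac_accelerated = (len(conserved & accelerated_ids) / len(conserved)
                            if conserved else 0.0)
        stats[cell_type] = (frac_conserved, frac_accelerated)
    return stats

print(conservation_by_cell_type(
    {"fetal_brain": {1, 2, 3, 4}, "malignant": {3, 4, 5, 6}},
    conserved_ids={1, 2, 3}, accelerated_ids={2}))
# {'fetal_brain': (0.75, 0.333...), 'malignant': (0.25, 0.0)}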
Distribution of DHSs and haDHSs across the genome was assessed using UCSC Known Gene annotations from the UCSC Genome Browser, downloaded on May 14, 2013. Annotations were filtered to contain only "canonical" transcripts from the knownCanonical table. Promoters were defined as the 500 bp upstream of a transcription start site. To identify physical clusters of haDHSs, we expanded each haDHS by 500 kb on either side and then used the bedmap --count command from the BEDOPS suite to count the number of haDHSs and conserved DHSs within each 1-Mb region.
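The 1-Mb window count can be illustrated as below; midpoints and bisection stand in for the BED interval arithmetic that bedmap handled in practice:

from bisect import bisect_left, bisect_right
from collections import defaultdict

def cluster_counts(hadhs, flank=500_000):
    # Count haDHSs falling within each haDHS's 1-Mb window
    # (element expanded by 500 kb per side), per chromosome.
    midpoints = defaultdict(list)
    for chrom, start, end in hadhs:
        midpoints[chrom].append((start + end) // 2)
    for chrom in midpoints:
        midpoints[chrom].sort()
    counts = []
    for chrom, start, end in hadhs:
        mids = midpoints[chrom]
        lo = bisect_left(mids, start - flank)
        hi = bisect_right(mids, end + flank)
        counts.append(hi - lo)   # includes the element itself
    return counts

print(cluster_counts([("chr2", 10_000, 10_150), ("chr2", 200_000, 200_150),
                      ("chr2", 900_000, 900_150)]))  # [2, 2, 1]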
Other HAEs
We obtained previously identified HAEs (Pollard et al. 2006a,b; Prabhakar et al. 2006; Bush and Lahn 2008; Lindblad-Toh et al. 2011) and assessed overlap using the bedmap program from the BEDOPS package. When comparing our haDHSs to these other HAEs, we merged all HAEs, again using BEDOPS. It was useful for us to compare haDHSs to DHSs that were conserved but not accelerated. In order to do similar analyses using the HAEs, we merged phastCons Eutherian mammal and primate elements (UCSC Genome Browser) and considered any element that was >100 bp.
To determine if the amount of overlap between haDHSs and other HAEs was significant, we created an empirical null distribution by randomly sampling 524 conserved DHSs 10^4 times and determining overlap with HAEs for each sample.
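A sketch of this permutation scheme, again with interval overlap reduced to set membership for brevity:

import random

def permutation_p(observed_overlap, conserved_ids, hae_ids,
                  n_draw=524, n_perm=10_000, seed=1):
    # Draw 524 conserved DHSs per permutation and record HAE overlap.
    rng = random.Random(seed)
    pool = list(conserved_ids)
    null = []
    for _ in range(n_perm):
        sample = rng.sample(pool, n_draw)
        null.append(sum(1 for dhs in sample if dhs in hae_ids))
    # One-sided empirical p-value with the standard +1 correction.
    return (1 + sum(1 for x in null if x >= observed_overlap)) / (n_perm + 1)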
Population genetics analyses
We downloaded the phase 1 integrated release data from the 1000 Genomes Project (The 1000 Genomes Project Consortium 2012) and filtered sites according to several criteria (Supplemental Text). We calculated α as described previously (Charlesworth 1994), using the equation α = 1 − (P_s F_n / P_n F_s), where P = number of polymorphic sites, F = number of human-specific substitutions (fixed differences), and the subscripts s and n denote selected and neutral sites, respectively. We considered bases within haDHSs to be putatively selected and bases in the surrounding 4-kb region to be putatively neutral.
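The estimator itself is a one-liner; the bootstrap confidence interval shown here is an illustrative resampling over per-locus counts, not necessarily the paper's exact CI procedure:

import random

def alpha(Ps, Fn, Pn, Fs):
    # alpha = 1 - (P_s * F_n) / (P_n * F_s)
    return 1.0 - (Ps * Fn) / (Pn * Fs)

def bootstrap_ci(loci, n_boot=1000, seed=7):
    # loci: list of per-region (Ps, Fn, Pn, Fs) count tuples.
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_boot):
        resample = [rng.choice(loci) for _ in loci]
        Ps, Fn, Pn, Fs = map(sum, zip(*resample))
        if Pn > 0 and Fs > 0:
            estimates.append(alpha(Ps, Fn, Pn, Fs))
    estimates.sort()
    return (estimates[int(0.025 * len(estimates))],
            estimates[int(0.975 * len(estimates))])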
Hi-C analyses
We obtained raw paired-end Hi-C libraries for two IMR90 fibroblast cell lines (Dixon et al. 2012). Although Hi-C data were also available from human embryonic stem cells, we chose not to include this cell type as it may have a more permissive chromatin landscape that is not representative of promoter/enhancer interactions (Dixon et al. 2012). We processed the Hi-C data for each cell line at 40-kb resolution as previously described (Ay et al. 2014). Briefly, we mapped reads to the hg19 (GRCh37) reference sequence, paired mapped read ends, filtered duplicates, binned at 40-kb resolution, normalized raw contact maps (Imakaev et al. 2012), and assigned statistical confidences for each contact bin pair using Fit-Hi-C with a refined null (Ay et al. 2014). We used a significance threshold of q-value <0.01 to determine regions that are contacted by haDHS-containing 40-kb windows. We omitted contacts within the same window and between adjacent windows and only focused on intrachromosomal contacts within 5 Mb of haDHSs. Note that the binning at a coarse resolution and the omission of interchromosomal contacts were done to identify only high-confidence contacts with enough sequencing coverage. We used RefSeq gene annotations to obtain a list of transcription start sites that overlap contact regions and used these to perform GO analyses using the WebGestalt server (Wang et al. 2013), with the multiple testing method set to BH and the minimum number of genes per category set to 10.
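The post-Fit-Hi-C filtering rules can be expressed compactly; the record fields below are assumed for illustration rather than taken from the Fit-Hi-C output specification:

BIN = 40_000  # bin size used throughout

def significant_contacts(pairs, q_max=0.01, max_dist=5_000_000):
    # pairs: iterable of (chrom1, bin1_start, chrom2, bin2_start, q_value)
    kept = []
    for chrom1, bin1, chrom2, bin2, q_value in pairs:
        if chrom1 != chrom2:              # intrachromosomal contacts only
            continue
        if abs(bin1 - bin2) <= BIN:       # same or adjacent 40-kb windows
            continue
        if abs(bin1 - bin2) > max_dist:   # restrict to within 5 Mb
            continue
        if q_value < q_max:
            kept.append((chrom1, bin1, bin2, q_value))
    return kept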
Transgenic mouse assays
Transgenic mouse assays were performed as previously described (Visel et al. 2007). Note that one of the previously tested assays was performed with the mouse ortholog (see Supplemental Table 3). Images of all the mouse assay replicates are available on the VISTA Enhancer Browser (Visel et al. 2007).
Luciferase assays
We considered several factors when selecting which haDHSs to experimentally study. First, because the luciferase assays detect enhancers, we prioritized haDHSs showing evidence of enhancer activity. To this end, we identified a second set of haDHSs that were within 500 bp of an enhancer histone modification (H3K4me1, H3K27ac) signal identified in the same cell type. Histone modifications for this set of haDHSs were downloaded from the UCSC Genome Browser or the Roadmap Epigenomics website. We included only DHSs from the 20 cell types for which histone modification data were available (for the additional set of haDHSs and the cell types used, see Supplemental Table 6). Supplemental Tables 2 and 6 include a column identifying which haDHSs were used in the luciferase assays. Second, we prioritized haDHSs that were active in IMR90, SK-N-MC, or other similar cell types. Both cell types represent time points that are potentially interesting for studying human evolution: SK-N-MC is a brain cell type, and IMR90 is a fetal tissue. Finally, we prioritized haDHSs that showed the greatest evidence for human acceleration.
We used standard techniques for cloning, transfection, and performing luciferase assays. Details are provided in the Supplement. For the luciferase assays, each allele and control had three to eight replicates. The positive control for each plate was cells transfected with the pGL3 control plasmid containing a minimal promoter with strong SV40 enhancer, while the negative control for each plate was cells transfected with the empty pGL3 plasmid with minimal promoter but no additional sequence cloned in.
To increase power to detect enhancer activity, negative control replicates were normalized by plate so that they could be directly comparable and combined. To accomplish this, we used the lm() function in R (R Core Team 2014) to create a linear model where the ratio of firefly to Renilla for all negative control replicates was a function of plate number. Then the coefficient for each plate was subtracted from all data points for that plate. Enhancer activity was determined using a one-sided t-test, and haDHSs were considered enhancers if the chimp allele, the human allele, or both showed greater luciferase activity than the negative controls. We then tested enhancers for allelic differences with a two-sided t-test between the human and chimp alleles.
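A sketch of the normalization and testing logic, with SciPy standing in for R; estimating the plate effect as each plate's mean negative-control ratio is our simplification of the lm() fit:

from collections import defaultdict
from scipy import stats

def plate_effects(neg_controls):
    # neg_controls: list of (plate_id, firefly/Renilla ratio) pairs.
    by_plate = defaultdict(list)
    for plate, ratio in neg_controls:
        by_plate[plate].append(ratio)
    return {plate: sum(r) / len(r) for plate, r in by_plate.items()}

def normalize(measurements, effects):
    # Subtract each plate's coefficient from its data points.
    return [(plate, ratio - effects[plate]) for plate, ratio in measurements]

def is_enhancer(allele_ratios, neg_ratios, threshold=0.05):
    # One-sided Welch t-test: allele activity greater than negatives.
    t, p = stats.ttest_ind(allele_ratios, neg_ratios, equal_var=False)
    return t > 0 and p / 2 < threshold

def allelic_difference(human_ratios, chimp_ratios):
    # Two-sided Welch t-test between human and chimpanzee alleles.
    return stats.ttest_ind(human_ratios, chimp_ratios, equal_var=False)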
Deep learning for automatic brain tumour segmentation on MRI: evaluation of recommended reporting criteria via a reproduction and replication study
Objectives To determine the reproducibility and replicability of studies that develop and validate segmentation methods for brain tumours on MRI and that follow established reproducibility criteria; and to evaluate whether the reporting guidelines are sufficient.

Methods Two eligible validation studies of distinct deep learning (DL) methods were identified. We implemented the methods using published information and retraced the reported validation steps. We evaluated to what extent the description of the methods enabled reproduction of the results. We further attempted to replicate reported findings on a clinical set of images acquired at our institute consisting of high-grade and low-grade glioma (HGG, LGG) and meningioma (MNG) cases.

Results We successfully reproduced one of the two tumour segmentation methods. Insufficient description of the preprocessing pipeline and our inability to replicate the pipeline resulted in failure to reproduce the second method. The replication of the first method showed promising results in terms of Dice similarity coefficient (DSC) and sensitivity (Sen) on HGG cases (DSC=0.77, Sen=0.88) and LGG cases (DSC=0.73, Sen=0.83); however, poorer performance was observed for MNG cases (DSC=0.61, Sen=0.71). Preprocessing errors were identified that contributed to low quantitative scores in some cases.

Conclusions Established reproducibility criteria do not sufficiently emphasise description of the preprocessing pipeline. Discrepancies in preprocessing as a result of insufficient reporting are likely to influence segmentation outcomes and hinder clinical utilisation. A detailed description of the whole processing chain, including preprocessing, is thus necessary to obtain stronger evidence of the generalisability of DL-based brain tumour segmentation methods and to facilitate translation of the methods into clinical practice.
GENERAL COMMENTS
This manuscript describes an attempt to reproduce/replicate two previously published works on brain MRI segmentation (following recommended reporting criteria and author correspondence if necessary), as well as an evaluation on an external (private) dataset for the successfully replicated model, and a proposal for expanded reporting criteria for improved replication. Reproducibility is an important consideration, especially in the medical domain. Some comments follow:

1. In the Overview subsection, it is stated that "The study design is based on the assumption that the reproducibility items proposed by Renard et al. are necessary and sufficient for reproduction and replication". The claim on "necessary" might be further justified, since it is not clear that all the items are strictly necessary (i.e. replication might have been successful omitting one/some of them).

2. In the Reproducibility analysis subsection, it is stated that "In Table 1 these algorithms are described... together with libraries and computational parameters we used in our implementations". It might be clarified as to whether these "libraries and computational parameters" were as specified by the checklists in the original works, or determined by the authors.

3. In Table 1, the "CV strategy + number of folds" item does not appear fully specified. For example, for DeepMedic, 5-fold CV on the training set is listed, but this does not appear to specify the assignment to each fold. The "1 subject in both HGG and LGG" specification for Pereira et al. also appears not fully clear.

4. In Table 1, both "Hyperparameters" and "Hyperparameter selection strategy" are listed. It might be clarified whether the selection strategy was actually attempted to reproduce the reported hyperparameters (and if different optimal hyperparameters were found, whether they were used instead), or if the reported hyperparameters were used in training the models regardless.
5. In the Pereira et al. subsection, it is stated that after the initial failure to replicate the model (with the corresponding results as reported in Table 2), "the author generously provided information on the bias field correction as well as image histogram normalization parameters", and this "plausibly explained the failure to reproduce". However, no reported attempt using this additional information appears to have been made: "...we decided not to pursue further efforts to reproduce the study". Was there any reason why some attempt with the additional (partial) information was not tried (especially since the algorithm, if not the specific implementation, is known), if only to see if the performance would be closer to the reported values?
6. For the proposed updates in Table 5, the actual updated items might be highlighted, and discussed point-by-point.
REVIEWER
Ghassemi, Navid, Ferdowsi University of Mashhad, Computer Engineering Department
GENERAL COMMENTS
Authors have investigated one of the main issues with papers published in the field of deep learning for medical diagnosis, specifically, the reproduction of results. In my opinion, the idea is excellent, and the manuscript can contribute to the body of research in the field of study. It is good to see a manuscript that aims to solve actual problems than justifying minor improvements to previous works.
However, there are a few issues with how the study is designed. In my opinion, "MATERIAL AND METHODS" and "RESULTS" need a significant change. Mainly, the reasoning behind choosing references [10] and [11] needs to be clarified. I personally think that you can see different types of issues in papers when you try to reproduce their results, and these cannot be all demonstrated by reviewing two papers. This is somewhat acknowledged by the authors, too, as they have included different aspects in their final checklist, but I think to some extent, it should be demonstrated what different (at least five distinct) papers lack, and not merely rely on one flawed work to show this. Authors have mentioned that in a prior finding, it was stated that "three out of twenty-nine studies" had sufficient information to reproduce results. This may be accurate, but it is itself misleading. Some of the questions raised after hearing this info are: "what did those other twenty-six lack?", "Is it the same between all different papers, or does each one have its own flaws?", "How much of the necessary info is missing?" My suggestion is to reformat these two sections of your paper by doing these steps:
- Add more than merely two references. Two cannot be representative of a whole.
- Select these papers to form a comprehensive representation of works done by different groups of researchers. Usually, the problems in research papers published by computer scientists are different from the ones done by physicians. I suggest selecting a few (2-3) papers from top-tier computer science conferences, such as NIPS and ICML (highly cited ones), and a few other (2-3) from prestigious journals in biomedical and healthcare (such as Computers in Biology and Medicine).
- Now, after reviewing these papers and covering different aspects and views into the problem at hand, you may justify your selection of two methods for the rest of these two sections.
- After the formation of the checklist, write about what was missing from the papers you've reviewed before, in a table.
The rest of the results section is well-formatted and detailed in my opinion, but it might need a few changes after adding new references. Also, the checklist might change too.
Lastly, I suggest adding the following item to the checklist under "Model evaluation":
- Demonstration of a few samples for which the model has performed poorly.
Reviewer: 1 Dr. Tariq Bashir, COMSATS University Islamabad
Comments to the Author: R1C1: The novelty of the paper is not clear with respect to the proposed work. The majority of the work in the experimental result and discussion section is the comparison of the different algorithms.
Authors' answer to R1C1: Our reproduction and replication study might of course be seen as lacking novelty; however, we feel that the study addresses a knowledge gap we previously identified in a scoping study of the field (cited as [5] in the manuscript). In that study, we noted an abundance of published algorithms, contrasted by a scarcity of studies that independently confirm validity of existing algorithms. The latter type of work is required to achieve progress towards clinical application of advanced image analysis algorithms. We are aware that many scientific journals prefer to publish novel discoveries, or, in our field, new algorithms. The choice to submit to BMJ Open was in part based on the journal's policy to deemphasise novelty and to explicitly invite specialist studies.
R1C2: Statistical analysis is missing and seems no comparison with the proposed work.
Authors' answer to R1C2: We apologize for not explicitly describing our statistical analysis. We now have added a Statistical analysis subsection in the Materials and Methods where we explain that we used descriptive statistics of commonly used segmentation evaluation metrics: "We provide descriptive statistics (means and, when possible, standard deviations) of segmentation evaluation metrics. The metrics we used are: Dice similarity coefficient -DSC, positive predictive value -PPV, and sensitivity." We further explain in the Discussion why no advanced statistical analysis was conducted in our study: "Our claims of reproducibility / non-reproducibility could not be supported with advanced statistical analysis; the online evaluation system[16] (used to evaluate the segmentations in the original validation papers and our study) provides arithmetic means of the evaluation metrics without measures of dispersion. The small sample size of the in-house data, along with the difference in tumour components segmented as a reference for HGG (tumour core) and LGG (whole tumour) further precludes a meaningful analysis of the statistical difference between the results obtained in the reproducibility and replicability analysis."
Reviewer: 2 Prof. Bu Gao, Shijiazhuang First Hospital, Hebei Medical University
Comments to the Author: In this study, the authors studied deep learning for automatic brain tumor segmentation on MRI.
R2C1:
Abstract: Please give the full phrase for DL.
Authors' answer to R2C1: We appreciate the thorough revision of our manuscript. The abbreviation is now explained.
R2C2:
The second sentence in the Methods section needs revision.
Authors' answer to R2C2: We agree that the sentence was unclear. For the revised manuscript, we split it into two sentences which now read: "We used the definitions of reproduction and replication from the National Academies of Sciences, Engineering and Medicine [13], which were also used by Pineau et al. [6]. Renard et al. identified two methods for brain lesion segmentation [10, 11] as adequately reported [3], and we chose these two for the present study." We hope this addresses the reviewer's criticism.
R2C3: Introduction: In this section, please clearly describe the two segmentation methods and their advantages and disadvantages in clinical study. The background of these two methods is poorly described and we do not know what you are going to do with these two methods. More detailed description is needed.

Authors' answer to R2C3: We revised the introduction section according to the reviewer's suggestion. We provide a description of DeepMedic and Pereira et al.'s method (referred to as the 3D dual-path CNN and the 2D single-path CNN respectively, according to the suggestion from R2C4), and clarify how and why we used them in our study. We further highlight that the papers where the methods were originally proposed provide validation at a technical level, whereas our study evaluates eligibility of the two methods for clinical studies.
R2C4:
The method described by Pereira et al. should be given in a more scientific way. For example, the DeepMedic method can be termed the dual-path 3D CNN, and the method described by Pereira et al. can be indicated as a 2D CNN. All similar terms should be changed accordingly. Do not use "Pereira et al."
Authors' answer to R2C4: We agree with the reviewer and made changes accordingly, now referring to the algorithms as "3D dual-path CNN" and "2D single-path CNN".
R2C5: Some abbreviations used in the text should be given the full phrase at the first time of use.
Authors' answer to R2C5: Thank you for pointing out the omissions. We revised the manuscript according to the comment.
R2C6:
Discussion: In the first paragraph, the major findings should be briefly described.
Authors' answer to R2C6: Thank you for the suggestion. We agree that major findings should be summarized early on in the Discussion and we have revised the first paragraph accordingly. It now reads: "Reproducibility and replicability of scientific results are the foundation of evidence-based medicine. In this work we show that current guidelines for publishing validation studies on deep-learning algorithms are incomplete. While attempting to reproduce the two studies on MR brain lesion segmentation that were identified as meeting current reproducibility recommendations, [3] we found that only one of them was reproducible based on the published information. Remarkably, even after consultation with the authors of the second method, we were not able to obtain satisfactory segmentation results with their method."
Reviewer: 3 Dr. Gilbert Lim, National University of Singapore
Comments to the Author: This manuscript describes an attempt to reproduce/replicate two previously-published works on Brain MRI segmentation (following recommended reporting criteria & author correspondence if necessary), as well as an evaluation on an external (private) dataset for the successfully-replicated model, and a proposal for an expanded reporting criteria for improved replication. Reproducibility is an important consideration, especially in the medical domain.
Authors' answer: Thank you for this summary and favourable assessment.
Some comments follow:
R3C1:
In the Overview subsection, it is stated that "The study design is based on the assumption that the reproducibility items proposed by Renard et al. are necessary and sufficient for reproduction and replication". The claim on "necessary" might be further justified, since it is not clear that all the items are strictly necessary (i.e. replication might have been successful omitting one/some of them).
Authors' answer to R3C1: Agreed: we only evaluate sufficiency of these items, not whether they are necessary. We revised the sentence accordingly and it now reads: "The study design is based on the assumption that the reproducibility items proposed by Renard et al. are sufficient for reproduction and replication."
R3C2:
In the Reproducibility analysis subsection, it is stated that "In Table 1 these algorithms are described... together with libraries and computational parameters we used in our implementations". It might be clarified as to whether these "libraries and computational parameters" were as specified by the checklists in the original works, or determined by the authors.
Authors' answer to R3C2: We agree and have now clarified the description of Table 1 to indicate which parameters were specified by the authors of the original works and which were used in our implementation: "All the parameters and versions found in the first part of the table were specified in the original works. In the part "Our implementation middleware", we specify the Python version and libraries used for our implementations." We also added the libraries and versions specified in the original articles in Table 1.

R3C3: In Table 1, the "CV strategy + number of folds" item does not appear fully specified. For example, for DeepMedic, 5-fold CV on the training set is listed, but this does not appear to specify the assignment to each fold. The "1 subject in both HGG and LGG" specification for Pereira et al. also appears not fully clear.
Authors' answer to R3C3: Thank you for noting this omission. We have now added the size of the training set in Table 1 so that these parameters are meaningful. The updated row reads:

CV strategy + number of folds
- 3D dual-path CNN: 5-fold CV on training set (n=274); assignment of images to folds not specified
- 2D single-path CNN: validation using 1 subject in both HGG (n=220) and LGG (n=54)
R3C4:
In Table 1, both "Hyperparameters" and "Hyperparameter selection strategy" are listed. It might be clarified whether the selection strategy was actually attempted to reproduce the reported hyperparameters (and if different optimal hyperparameters were found, whether they were used instead), or if the reported hyperparameters were used in training the models regardless.
Authors' answer to R3C4: We agree with the suggestion. We have added a clarification to the Materials and Methods that reads: "For our implementation, we used hyperparameters reported in the original articles."
R3C5:
In the Pereira et al. subsection, it is stated that after the initial failure to replicate the model (with the corresponding results as reported in Table 2), "the author generously provided information on the bias field correction as well as image histogram normalization parameters", and this "plausibly explained the failure to reproduce". However, no reported attempt using this additional information appears to have been made "...we decided not to pursue further efforts to reproduce the study". Was there any reason why some attempt with the additional (partial) information was not tried (especially since the algorithm, if not the specific implementation, is known), if only to see if the performance would be closer to the reported values?
Authors' answer to R3C5: We omitted the description of these further attempts from the Results by mistake and only mentioned them in the Discussion. We are grateful to the Reviewer for pointing out this flaw. We have updated the Results section with the following: "To compensate, we extracted the mean and standard deviation from the training images by collecting intensity information of patches sampled from various brain regions to ensure class balance. We imposed a condition that for a given class, a certain percentage of patch pixels are labelled as that class. The values of mean and standard deviation depended on the percentage value, and we did not succeed at finding a value that would improve the segmentation results. Instead, we resorted to normalizing whole testing images to have zero mean and unit variance, but the dominance of the intensities of healthy tissue skewed the estimated parameters and did not result in satisfactory results."
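For concreteness, the two normalization variants discussed in this answer can be sketched as follows (illustrative NumPy only; the patch size, class-fraction threshold and sampling scheme are arbitrary assumptions, not the authors' parameters):

import numpy as np

def zscore_whole_image(img):
    # Zero mean / unit variance over the full volume; healthy tissue
    # dominates the intensity distribution and skews the estimates.
    return (img - img.mean()) / img.std()

def patch_statistics(img, labels, target_class, patch=16, min_frac=0.3,
                     n_patches=200, seed=0):
    # Estimate mean/std from patches in which at least `min_frac` of the
    # voxels carry `target_class`, approximating class balance.
    rng = np.random.default_rng(seed)
    values = []
    for _ in range(n_patches):
        z, y, x = (rng.integers(0, s - patch + 1) for s in img.shape)
        sl = (slice(z, z + patch), slice(y, y + patch), slice(x, x + patch))
        if (labels[sl] == target_class).mean() >= min_frac:
            values.append(img[sl].ravel())
    values = np.concatenate(values) if values else img.ravel()
    return values.mean(), values.std()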
R3C6:
For the proposed updates in Table 5, the actual updated items might be highlighted, and discussed point-by-point.
Authors' answer to R3C6: We followed the Reviewer's suggestion to indicate what constituted the updates in Table 5. We added to the caption of Table 5: "The update from the established checklists[3,8] includes a new category Data set preprocessing, and a new item in Model evaluation category: Failed cases: number and reasons. We also regrouped the items into categories that provide a clearer structure for reporting in particular of reproducibility and replicability studies." We also reiterated and included more detailed reasoning for the regrouping of the items in the Discussion: "First, we add what we conclude to be a necessary and sufficient description of the preprocessing. Second, we regroup the items to provide a clearer distinction between the various elements and aspects that are involved in the algorithm development vs. the validation of the medical image segmentation tool: such a structure for providing a more transparent and easily implemented way of reporting is specifically designed to help those who seek to reproduce and replicate."
Authors' answer to R3C7: We apologize for these errors. We have thoroughly re-reviewed the manuscript for typographical errors. The ones pointed out have, of course, been changed according to the suggestions.
Reviewer: 4 Dr. Navid Ghassemi, Ferdowsi University of Mashhad
Comments to the Author:
R4C1:
Authors have investigated one of the main issues with papers published in the field of deep learning for medical diagnosis, specifically, the reproduction of results. In my opinion, the idea is excellent, and the manuscript can contribute to the body of research in the field of study. It is good to see a manuscript that aims to solve actual problems than justifying minor improvements to previous works.
Authors' answer R4C1: We greatly appreciate the encouraging comment.
R4C2.1: However, there are a few issues with how the study is designed. In my opinion, "MATERIAL AND METHODS" and "RESULTS" need a significant change. Mainly, the reasoning behind choosing references [10] and [11] needs to be clarified.
Authors' answer to R4C2.1: We apologize that we failed to convey what the scope and aim of our study were. Our mistake logically led to a misunderstanding of what we intended to do followed by a reasonable suggestion to improve the methodology of the study. To correct our mistake, we have now added a paragraph in the introduction that explains the choice of the two methods for this study (edited parts highlighted in yellow): "Critics have also pointed out that scientific reporting of study designs has often been insufficient, and that the analysis of results tends to be biased towards authors' desired outcomes. [4,6,7] These issues present critical challenges to realizing the potential of artificial intelligence (AI) and translating promising scientific algorithms into reliable and trusted clinical decision support tools.
In our previous work[5], we systematically explored the literature to identify whether prevalent brain lesion segmentation methods are a suitable basis for developing a tool that supports radiological brain tumour status assessment. Our findings corroborated the issues with reporting that may affect reproducibility.[5] In particular, reporting of the preprocessing steps is inadequate in many instances." "Furthermore, Renard et al.[3] only identified three out of twenty-nine studies included in their review to be sufficiently described according to their reproducibility recommendations. Two [10,11] of the three were algorithms for brain tumour segmentation on magnetic resonance images (MRI). To continue our pursuit of a technically validated DL brain tumour segmentation algorithm that is suitable for clinical validation, we attempted to re-implement the two methods [10,11]."

R4C2.2: I personally think that you can see different types of issues in papers when you try to reproduce their results, and these cannot be all demonstrated by reviewing two papers. This is somewhat acknowledged by the authors, too, as they have included different aspects in their final checklist, but I think to some extent, it should be demonstrated what different (at least five distinct) papers lack, and not merely rely on one flawed work to show this. Authors have mentioned that in a prior finding, it was stated that "three out of twenty-nine studies" had sufficient information to reproduce results. This may be accurate, but it is itself misleading. Some of the questions raised after hearing this info are: "what did those other twenty-six lack?", "Is it the same between all different papers, or does each one have its own flaws?", "How much of the necessary info is missing?" My suggestion is to reformat these two sections of your paper by doing these steps:
- Add more than merely two references. Two cannot be representative of a whole.
- Select these papers to form a comprehensive representation of works done by different groups of researchers. Usually, the problems in research papers published by computer scientists are different from the ones done by physicians. I suggest selecting a few (2-3) papers from top-tier computer science conferences, such as NIPS and ICML (highly cited ones), and a few other (2-3) from prestigious journals in biomedical and healthcare (such as Computers in Biology and Medicine).
- Now, after reviewing these papers and covering different aspects and views into the problem at hand, you may justify your selection of two methods for the rest of these two sections.
- After the formation of the checklist, write about what was missing from the papers you've reviewed before, in a table.

The rest of the results section is well-formatted and detailed in my opinion, but it might need a few changes after adding new references. Also, the checklist might change too.
Reviewer 3, comment 1: We thank the authors for addressing almost all our previous concerns. The minor remaining concern rests on the cross-validation set used. Were the exact training splits (i.e. which image belongs to which training/test set in each cross-validation fold) known, or was this independently randomized? This might be briefly clarified.
Our response: We addressed the concern in the description of Table 1, which now reads: "Table 1: Description of the two algorithms implemented in the reproducibility analysis, 3D dual-path CNN[9] and 2D single-path CNN,[10] according to the reproducibility categories proposed by Renard et al.[3] All the parameters and versions found in the first part of the table were specified in the original articles. The selection strategy of images to respective cross-validation folds was not specified. In the part "Our implementation middleware", we specify the Python version and libraries used for our implementations. CNN - convolutional neural network, CRF - conditional random field, CV - cross-validation, DSC - Dice similarity coefficient, FC - fully connected, HGG - high-grade glioma, LGG - low-grade glioma."

Reviewer 4, comment 1: The authors have answered all of my comments, and I recommend it for publication in its current form.
Our response: We thank the reviewer for this recommendation.
VERSION 3 - REVIEW

REVIEWER
Lim, Gilbert, National University of Singapore, School of Computing

REVIEW RETURNED
24-Jun-2022
GENERAL COMMENTS
We thank the authors for addressing our remaining concern.
Resistance or appropriation?: Uptake of exercise after a nurse-led intervention to promote self-management for osteoarthritis
The philosophical underpinning of trials of complex interventions is critiqued for not taking into account causal mechanisms that influence potential outcomes. In this article, we draw from in-depth interviews (with practice nurses and patients) and observations of practice meetings and consultations to investigate the outcomes of a complex intervention to promote self-management (in particular exercise) for osteoarthritis in primary care settings. We argue that nurses interpreted the intervention as underpinned by the need to educate rather than work with patients, and, drawing from Habermasian theory, we argue that expert medicalised knowledge (system) clashed with lay ‘lifeworld’ prerogatives in an uneven communicative arena (the consultation). In turn, the advice and instructions given to patients were not always commensurate with their ‘lifeworld’. Consequently, patients struggled to embed exercise routines into their daily lives for reasons of unsuitable locality, sense-making that ‘home’ was an inappropriate place to exercise and using embodied knowledge to test the efficacy of exercise on pain. We conclude by arguing that using Habermasian theory helped to understand reasons why the trial failed to increase exercise levels. Our findings suggest that communication styles influence the outcomes of self-management interventions, reinforce the utility of theoretically informed qualitative research embedded within trials to improve conduct and outcomes and indicate incorporating perspectives from human geography can enhance Habermas-informed research and theorising.
Keywords
Chronic illness and disability, patient-physician relationship, primary care, illness behaviour

Providing self-management support as a way of meeting the challenge of the predicted rise in long-term conditions (LTCs) has been a UK policy agenda for over a decade (Ong et al., 2014b) and continues to underpin National Health Service (NHS) supported patient-led care for those with LTCs (NHS England, 2014). Osteoarthritis (OA) is one such LTC which is reported to lead to discomfort and disruption to individuals as well as costing state and society in terms of lost productivity and healthcare costs (Arthritis Care, 2012; Dziedzic et al., 2018). Supported self-management is recommended as a way of ameliorating these problems and helping patients live with OA (National Institute for Health and Clinical Excellence (NICE) 2014). In response, a trial intervention featuring primarily nurse-led supported self-management in general practice settings was devised and implemented (Dziedzic et al., 2018), which we detail further below. The efficacy of primary care self-management interventions has been questioned due to clinical trials demonstrating little to no effect (Sun and Guyatt, 2013). A long-standing hierarchy of evidence positions trial findings as the 'gold standard' (Barton, 2000) and, in part, underpins this debate. The philosophical standpoint of trial methodology emphasises identifying linear causality within a closed system between intervention and outcome (Marchal et al., 2013). It is recognised that embedding self-management support in everyday practice is not straightforward because it requires change at different levels and places additional pressures on practitioners and patients. For professionals, it can conflict with external drivers, the existing organisation of care and individual ways of working (Kennedy et al., 2013; Ong et al., 2014a). Practitioners often experience difficulties reconciling their professional identities and relinquishing responsibilities for patients (Blakeman et al., 2006; McDonald et al., 2008). Accordingly, practitioners may worry about disrupting professional-patient relationships by altering the ceremonial order of clinical interactions (Blakeman et al., 2010). For patients, managing chronic illness involves managing disruptions to social relationships as well as the demands of medical regimens (May et al., 2014). Thus, a number of factors (or mechanisms) can influence the delivery of supported self-management in routine practice.
Embedded qualitative studies offer the opportunity to explore the processes and mechanisms which can explain trial intervention outcomes and efficacy (Blackwood et al., 2010; Marchal et al., 2013; Ong et al., 2014a). In this article, we report on a nested qualitative study embedded within a trial to implement the National Institute for Health and Care Excellence (NICE) OA guidelines in general practice. We detail factors that influenced the outcome of the trial and use Habermasian theory to explain the findings. We now turn to discussing our use of Habermasian thought which we later draw on to situate the findings.
Habermas, consultations and health
Jurgen Habermas' work has been widely used to explore the dynamic of consultations and how people engage with medical advice. Scambler (2002) has usefully highlighted the potential for using Habermasian theory to critically explicate tensions and negotiations between broader societal systems, structures, power relations and individual integrity of action, particularly within consultations and beyond. Our iterative analytical approach (see the 'Methods' section for more detail) featured consideration of our findings and comparison with existing literature and theories. It became apparent that Habermasian thought offered a powerful explanatory nexus to account for the different domains of knowledge, interaction and experience evident within a complex multi-source dataset, which in turn influenced the outcome of the trial. First, it facilitates understanding of interactions and of issues of compliance within consultations. Second, it allows incorporation of phenomenological and interactionist elements of patient experience and knowledge. Finally, it affords a way to situate how different logics and agendas are negotiated by actors with different social positions (and, therefore, power) within consultations and beyond (i.e. in life away from the consultation). We now turn to summarising previous work in this tradition.
Habermas drew on a wide range of disciplinary perspectives, and sociological theories such as Mead's theory about the interactional constitution of mind and self (Mead, 1934), and Parsons' action theory (1970) that focussed on those societal functions that are necessary for stable social life. Habermas' project sought to explicate the 'colonizing' consequences of modern instrumental rationality, economic imperatives and state bureaucracy (or the 'system'). He argued that the 'system' undermines ethics, personal preferences and people's everyday concerns (the 'lifeworld') because of its focus on a goal-oriented form of reasoning. He was concerned with the potential for individuals to retain freedom and integrity of action via the use of 'practical' reason (which is grounded in life world values, ethics and locally situated relationships) to counteract the system's goal-oriented instrumental reason. In particular, Habermas (1984) focussed on the potential of 'communicative action', or the ability for agents to communicate and negotiate to reach understanding and agreement free of 'distorted' coercive communication shaped by 'system' goal-oriented reasoning.
Habermasian theory has been used to analyse how, in some instances, health care (biomedical technical expertise constituting a 'system') can 'colonize' people's values, modes of existence, sovereignty and preferences for action (or 'lifeworld'; Edwards, 2012). This arises when healthcare professionals (HCPs) and patients fail to engage in open communication and HCPs implicitly or explicitly deploy instrumental goal-oriented communication which seeks to direct patients to a particular outcome (Barry et al., 2001; Edwards, 2012; Greenhalgh et al., 2006; Mishler, 1984; Scambler, 2002).
Barry and colleagues highlighted four ways in which communication was enacted in doctor-patient consultations: 'strictly medicine', in which both patient and doctor used medicalised language; 'lifeworld ignored', where doctors avoided or disengaged from discussion of patients' concerns; 'lifeworld blocked', where doctors channelled efforts into framing consultations in biomedical terms; and 'mutual lifeworld', when both parties situated the discussion within the patient's agenda (Barry et al., 2001). Similarly, Greenhalgh et al. (2006) noted that consultations mediated by an interpreter were conducted with an implicit or explicit solution-focussed approach promoting a medical agenda. They too have argued that open communication would lead to better outcomes (Greenhalgh et al., 2006).
Outside of the consultation room, 'resistance' to the voice of biomedicine and its intrusion into the lifeworld occurs when patients draw on experiential lay knowledge which challenges or diverges from medical thinking (Edwards, 2012; Jackson and Scambler, 2007; Williams and Popay, 2001). Germond and Cochrane (2010) contend that the lifeworld (or 'healthworld' in their reformulation) is embodied, experiential and socially situated, extending Habermas' focus on the cognitions and meanings which underpin the individual and collective values and preferences that constitute the 'lifeworld'. Finally, Bissell et al. (2018) consider the democratising potential of Habermas' theory of communicative action, based on open dialogue and weighing up the validity of arguments to support egalitarian decision-making.
The Managing OSteoArthritis In ConsultationS Study
As indicated in the 'Introduction' section, self-management support is a core recommendation for the treatment of OA in the NICE (2008) guidelines (subsequently updated in 2014). OA (the most common type of arthritis) is a leading cause of disability worldwide and affects approximately 8 million people in the United Kingdom (Arthritis Care, 2012). The Managing OSteoArthritis In ConsultationS (MOSAICS) trial was devised in the wake of research indicating that people with OA were not self-managing their condition in accordance with clinical recommendations, that clinicians may not be advising patients on self-management in accordance with guidelines (Porcheret et al., 2007) and that patients desire more information and self-management support from practitioners (Mann and Gooberman-Hill, 2011). Full details of the trial intervention can be obtained from the study protocol (Dziedzic et al., 2014), but for context, we provide a brief overview below.
The trial intervention aimed to enhance the supported self-management provided to patients and promote the uptake of the core treatments recommended in the NICE (2008) OA guidance. The intervention consisted of a semi-structured general practitioner (GP) consultation, use of an OA Guidebook and referral to a nurse-led OA clinic once the GP had diagnosed OA. The intervention centred heavily on practice nurses because of their potentially key role in offering supported self-management for OA (Dziedzic et al., 2014).
Extensive training was delivered to intervention practices as a whole, and to the GPs and practice nurses as professional groups, to implement the intervention. Nurse training focussed on the anatomy and disease process of OA, the core treatments for OA (information and advice, exercise and weight loss) and discussing the use of pain medications. Behaviour change theories also underpinned the intervention and were reflected within practitioners' training, with an emphasis placed on incorporating patients' concerns, existing strategies and knowledge into a holistic approach (Main et al., 2010; Rollnick et al., 2005) to encourage and motivate participants to undertake self-management based on their situation and needs, including muscle strengthening exercises or increased levels of aerobic activity. To emphasise the importance of patient experiences and perspectives, training sessions utilising incremental case studies drawing from the findings of preceding qualitative research (Grime et al., 2010; Morden et al., 2011) were incorporated into the training. This was supplemented by the use of the Guidebook, which was developed to incorporate lay and medical knowledge and to be used as an aid for practitioners and patients (Grime and Dudley, 2014). This was in line with the WISE (Whole System Informing Self-Management Engagement) approach which underpinned the intervention design (Kennedy et al., 2007). Key to utilising the WISE approach was an emphasis on being flexible in consultations to engage with patient concerns and agendas and to appropriately utilise the behaviour change methods incorporated in the trial design. While an emphasis on understanding patients' lived experience was incorporated into the intervention and training, Habermasian theory was not drawn upon. Integration of theory into the study findings is discussed further in the analysis section. Eight practices in the West Midlands and North West of England were recruited to take part in the study: four control practices and four intervention practices. The trial did not demonstrate any statistically significant changes in physical functioning between the control and intervention arms. However, it did show improvements in patient enablement and uptake of core OA treatments (Dziedzic et al., 2018), and possible reasons will be explored below.
Qualitative study design and methods. The MOSAICS study features a collection of sub-studies using different methodological approaches to evaluate the outcome of the trial (Dziedzic et al., 2014). Qualitative methodology was utilised in this sub-study. To be able to obtain a variety of perspectives on the same phenomena (Mays and Pope, 2000), two types of data collection strategies were used. First, we used observation methods. Observation as a qualitative research method involves the researcher 'going into the field' and describing and analysing what has been seen, what people do and what people say, therefore illuminating behaviour and interactions in natural settings (Walshe et al., 2011), and aims to identify meaning to people in that setting (Sharkey and Aggergaard Larson, 2005). We used observation methods in two ways: first, by observing nurse-led clinics delivered as part of the intervention; second, by observing nurses' discussions of their experiences and impressions of the intervention at post-study feedback meetings at practices. We also interviewed nurses and patients who participated in the trial. We used in-depth interviews because they can yield rich sources of data on people's experiences, opinions, aspirations and feelings (May, 2002). They enable respondents to tell their own stories in their own words, and the meaning that people attach to events can be revealed (Bowling, 2001).
Sample selection, recruitment and data collection. Research ethics approval for all sub-studies (including qualitative studies) comprising the trial was obtained from the local research ethics committee (ref: 10/H1017/76). Data collection occurred in four stages. First, we approached all of the nurses from the intervention practices (n = 7) and asked for permission to observe and audio record their clinics. All nurses agreed to be observed, but two declined to have their consultations audio recorded. A suitable date to attend a whole morning or afternoon of clinics was agreed. Written consent to participate was gained from the nurse. Researchers asked patients who attended clinics if they minded having their consultation observed and audio recorded. Written consent was obtained before and after each appointment. A total of 27 patient consultations were observed by AM and BNO. We were not able to match patient interviews with clinics because observed patients declined to be interviewed or could not be contacted.
Second, we used a convenience sample and recruited patients who had consulted at intervention practices. Consulting patients were issued baseline and 3-month 'consultation questionnaires' as part of the broader study evaluation. From questionnaires, we identified patients who indicated they had seen the GP and nurse. Potential participants were sent an invitation letter and information sheet offering them the opportunity to take part in the qualitative study. A total of 29 patients volunteered to take part in interviews. All patient interviews were undertaken in participants' homes by AM. Participants provided informed written consent prior to interviews commencing and they were undertaken between May 2012 and May 2013. Patients were asked about their expectations of attending the nurse consultation(s); how they thought the visits went; what they discussed with the nurse (including exploration of exercise advice) and invited to reflect on what they thought was helpful and what they would have changed (if anything). Patients were also asked to reflect on their response and subsequent actions in relation to the consultation advice (particularly, in relation to exercise).
Third, all nurses from intervention practices (n = 7) were invited to take part in semi-structured interviews once the intervention had been completed. At the time of the interviews there was an increased amount of organisational pressure on primary care services, and only four nurses participated because the others could not spare the time. The nurse interview schedule featured topics such as: how they found delivering the new consultation; how the intervention compared to routine practice; how they thought patients responded to consultations (including to exercise advice); what they thought worked well in consultations; and their thoughts on what worked less well or could be improved (if anything). Finally, AM and BNO attended and observed post-intervention feedback meetings (n = 4) at intervention practices, where participating HCPs' (GPs and nurses) experiences of and thoughts about the intervention were discussed.
Data analysis. All interviews and 21 of the 27 clinical observations were audio recorded and professionally transcribed verbatim (two nurses did not give consent for audio recording of their consultations). Field notes from clinical observations and study meetings were typed up into a standard format. Thematic analysis was undertaken using some of the principles laid out by grounded theory, in particular, focussing on identifying emergent codes, developing themes and constantly comparing data and coding (Charmaz, 2006). Data analysis took place in several stages. First, members of the research team independently read and closely coded transcripts and field notes. Independent coding was compared and broader themes agreed. Memos were used during analysis to record developments in coding and make connections between themes and supporting comparisons with existing literature (Charmaz, 2006). During analysis, 'deviant cases' in the data were searched for to act as 'disconfirming' checks and balances (Green and Thorogood, 2004). As analysis progressed the utility of Habermasian thought to situate findings was increasingly considered. As a final step, the data were deductively recoded using the Habermasian concepts to allow clear comparison with the open coding and ensure that there was strong conceptual and data 'fit' (Macfarlane and O'Reilly-de Brun, 2012) or alternatively, enough scope to credibly extend or expand theory (e.g. we discuss space and place in the context of the lifeworld). Another example of where Habermasian thought does not directly explain findings relates to the nurses' interpretation of the intervention's purpose and 'fit' with existing work (normalisation process theory is more relevant), however, their interpretation informs how they engaged with patients in the consultation (see also the 'Results' section).
Results
We begin by presenting data from observations of study meetings and postintervention interviews which reveal nurses' interpretations of the intervention, their role within it and how well they thought it went. We then turn to observational data from consultations and describe how nurses enacted discussions about exercise. Finally, we report data from patient interviews regarding their responses to exercise advice and attempts to fit exercise into their 'lifeworld'.
Nurse interviews and post-study meeting observations

'Fit' with existing practice. Our analysis revealed that the practice nurses involved in delivering the trial interpreted the intervention purpose, aligned it within their existing frame of reference and their usual working practices, and rationalised what the core deliverables would be, indicating that nurses were attempting to 'normalise' the intervention within their immediate working context (Macfarlane and O'Reilly-de Brun, 2012). Within this framework, participants considered that the intervention significantly focussed on promoting lifestyle management to patients to help them manage pain:

It really does make you think about promoting lifestyle, it does, and a positive effect that pain control have. (Nurse 3)

Importantly, an emphasis on discussing lifestyle and imparting the correct information to patients was seen to 'fit' well with normal routine practice. In particular, it was interpreted as an 'extension' of what nurses did in other chronic condition clinics:

The practice nurse linked what she was taught in the MOSAICS training to what she has to deliver in other chronic disease clinics, just with more of an OA specific focus. (Practice 4 meeting observation, 13 November 2013)

Thus, nurses' interpretation of the intervention was about promoting lifestyle management, which was deemed to be easily situated alongside their usual practice. Accordingly, nurses saw their role as being one to 'motivate' or 'educate' patients:

One lady had stopped going swimming because she thought it was making the joints worse, and as soon as I, sort of, explained, 'No', you know, 'that exercise is actually helping' . . . They were a bit fearful that they were causing harm, you know, and exercise was the one thing, I suppose, I did concentrate on a lot with them, you know. (Nurse 1)

As this interview extract demonstrates, challenging patient perceptions of harm and correcting patients' fears and worries was seen as central to the nurse's role in the intervention and important in encouraging people to maintain or take up exercise. Seemingly, little attempt was made to understand the social construction of patients' perceptions. Even though nurses received training to take this into account, the structure of the consultation was shaped by the OA guidelines.
Thus, for those nurses whose consultation style was more didactic they could more easily stay within their preferred style. The nurses who were more open to change, and/or patient-centred would adopt a dialogue-based style using the OA guidelines as guidelines rather than as rules.
'Good' patients versus 'bad' non-motivated patients. A key theme that emerged from interviews and observations with nurses centred on the depiction of patients who were either compliant and a 'success story' or deemed recalcitrant, uncooperative or beyond assistance. The latter patients were subsequently disowned.
First, nurses described 'good patients' or their 'best patients', who had been receptive to the messages that were conveyed. These patients were perceived to have made changes and benefitted from the intervention:

So, he was my best patient, because he came with his wife, and they were both encouraged by knowing if he lost weight, increased his exercise, then the benefits would help him. (Nurse 4)

Notably, nurses took 'ownership' of such patients and emphasised how they were 'their' patient who reflected successful practice. The above example additionally illustrated that involving partners helped to reinforce the health messages.
Conversely, nurses described patients who were not 'success' stories. Observations revealed that they described characters who were problematic and non-adherent to advice:

. . . but they appear to have forgotten most of what the PN had told them, and when they felt 'well' have 'relapsed' into putting on weight or not doing exercise. The nurse sounded somewhat judgemental about patients not 'adhering'. (Practice 1 meeting observation, 09 October 2013)

Accordingly, nurses framed such patients as needing additional 'education' and as difficult to 'motivate' or take ownership of:

But patients can have the same attitude whether they're diabetic or whether they've got OA. It's to try and, you know, motivate them rather than tell them, 'Yes, you will do this, because this and this will happen', because it doesn't always work with some patients. You have to motivate them that it was their idea and for them to work with you. That's the thing, you know. They have to work with us and we work with them. (Nurse 3)

Nurses depicted themselves as patient-centred and intending to collaborate with patients, demonstrating some appreciation of the lifeworld by stating that patients needed to feel that it 'was their idea'. However, this was not carried through in their overall perceptions, where they positioned 'failure' as the responsibility of the patient who had not 'worked' with them. Nurses distanced themselves from such patients and critiqued them because they consulted 'only because they wanted an operation and they were blinkered and wouldn't listen'. Thus, 'success' and 'failure' were defined as the fault of individual patients' intrinsic motivations and dispositions (Kennedy et al., 2013). We now detail how discussions about exercise were enacted in consultations.
Structuring the consultation around exercise. Our analysis developed a theme which describes the process of the consultation. This theme centres on the tendency for practice nurses to (a) utilise a particular structure during the consultation and (b) focus discussions on exercise and lifestyle.
Observations revealed that nurses heavily focussed on promoting exercise during clinics, in tandem with explaining why exercise was beneficial. This was particularly evident during first encounters with patients. Nurses followed a set format which arguably restricted mutual discussion and recognition of the patient's agenda:

The nurse appeared to use a fairly rigid structure in the consultation . . . the approach she used shifted between quite a didactic one to being open to the patient's concerns. However, she always focused upon the core NICE guidelines and promoting weight loss and exercise with the patient. Very little exploration of the patient's context was engaged in. (notes from clinical observation, 18 June 2012)

This note points to the inherent tension in the intervention between asking the nurses to follow NICE guidelines (system world) while, at the same time, being sensitive to patients' perspectives (lifeworld). In the training, emphasis was placed on flexibility in being responsive to the patient, which is unusual in a trial, where interventions tend to be tightly protocolised. In other words, the nurses had a degree of autonomy as to how they engaged with patients, and thus they could either experiment with a patient-centred consultation or revert to their own preferred style.
Nurses usually asked patients whether the GP had discussed their understanding of and concerns about pain. They offered an explanation about the disease process of OA before explaining why patients needed to exercise, discussing how an osteoarthritic joint needs strengthening via exercise to 'repair' itself:

The nurse then brings in the importance of exercise and strengthening the ligaments and muscles to the patient. I.e. the muscles and ligaments support the knee joint, taking the strain which helps it repair itself. The patient has been generally agreeing with this explanation, saying 'Okay'. (Clinical observation notes, 18 June 2012)

Consultations were often guided further towards the topic of exercise by the nurse asking the patient how active they were, or if they engaged in a particular form of activity. This was a common approach by the nurses, delving into the patient's 'lifeworld' with the purpose of framing the discussion around exercise and activity. Of note, one patient raised concerns about safety (slipping) which were not addressed; instead, the nurse directed the conversation back to exercise and activity.
Maintaining the exercise agenda. In tandem with the previous theme, analysis revealed that, as well as consultations having a particular structure and initial emphasis, content was habitually steered towards a continual focus on an 'exercise agenda'. Exploring patients' existing activity was an entry route from which the nurses could begin to promote the benefits of exercise. They either promulgated the advantages of exercise for joint pain or would reiterate previous explanations, frequently not engaging with patients' concerns that were seemingly considered tangential to the purpose of the consultation:

The patient attempts to initiate another discussion about eating fruit and OA pain. The patient leads on this conversation and the nurse listens smiling, but does not deal with the patient's queries. Instead she changes the topic of the consultation to exercise again, reiterating that doing exercise does not necessarily make joint pain worse. (notes from clinical observation, 01 August 2012)

Often patients would respond by either 'proving' they were active people, outlining that they struggled because of pain, or highlighting 'real world' reasons (finances, time issues) as to why they could not engage with exercise. Nurses responded by offering encouragement to 'good patients' who were thought to agree with their agenda. If patients questioned the rationale for doing exercise or described difficulties, the nurses would often reiterate the reasons for doing exercise or try to 'problem solve':

Nurse: Do you do anything like swimming or anything like that?

Patient: I would love to do swimming but I've got warts all over my torso and I've been to the nurse several times over the years to have them sort of removed round the back where a swimming costume would be, I just feel very self conscious about it . . .

Nurse: You know you can get . . . the Australian surfers use them and I must admit I do when it's really hot, it's got like a little polo neck there and it's like a t-shirt and it's pretty close fitting and it's all stretchy lycra type stuff and you can get them all different colours. They were selling them in TK Max at one time, if you look in the kids' section, the bigger kids, they're really stretchy, it's like a t-shirt so that would cover up quite a bit and that sort of help like that . . . and they're specifically for going in water.
The example highlights how the nurse engaged with the patient's worries about finding a suitable swimming costume, but did not discuss what alternative activities may be enjoyed or what may suit the patient. Thus, nurses tended to prioritise or repeat the OA guideline agenda, or try to solve patients' problems rather than enter into a dialogue about the patient's thinking and what might be an acceptable course of action.
Follow-up consultations featured a pattern of nurses checking up on what patients had been doing and exploring any problems that they had encountered. The agenda of promoting exercise and its benefits remained at the forefront of these consultations. Patients who demonstrated that they wanted to engage with exercise were praised for doing the 'correct things'. Where patients reported problems, nurses reminded them why it was important to exercise and their concerns were not discussed explicitly or negotiated, with nurses returning to a didactic or problem-solving approach.
In summary, nurses led consultations, focussed on exercise and tried to ensure that patients were compliant with the treatments emphasised by the NICE OA guidelines, which was ultimately what they were trained to do. Consequently, nurses were more pre-disposed to following a model of communication which limited engagement in patients' 'lifeworld' in a mutually beneficial way. Thus, while nurses may have operated a mode of communication that did not necessarily 'block' or 'ignore' the lifeworld (Barry et al., 2001), interactions were conducted with an underlying agenda which arguably did not serve to fully access the person's agenda and facilitate open communication (Greenhalgh et al., 2006). This communication style did not necessarily prevent patients from engaging with the advice provided. We turn to patient interview data to explore what happened after receiving the intervention.
Reactions to being advised to exercise: situating advice against interests, concerns and needs. Analysis of patient interviews unearthed a theme which indicated that patients responded to exercise and lifestyle advice in a number of ways, but which were underpinned by a common theme, namely, how they related it to their existing lifestyle, experiences, concerns or personal agenda (or the 'lifeworld').
In the interviews, a number of participants (n = 4) explained that being given exercise was not uncharted territory because 'a lot of it, I already knew', especially if they were recreational walkers or cyclists. Thus, they interpreted the advice as useful because it reinforced existing knowledge, but not as novel. In the main, participants were receptive to the rationale for doing exercises, because the exercises integrated well with a broader concern or priority in terms of maintaining health and avoiding the disruptive qualities of ill health, well exemplified by this quote:

And I do think that the explanation about keeping the certain muscles, you know, above the knee and behind the knee, and keeping those strong, actually helps, you know, the knee, and keeps that mobility. And, obviously, from my point of view, I want to keep my mobility for as long as possible. I've had both knees cleaned out - one twice and one once - and I'd prefer not to have an operation on them for, you know, the foreseeable future. So it was helpful to have that information from the nurse. (Participant 15)

In short, patients appreciated being given a technical explanation which underpinned the reason for undertaking exercise and allayed future fears of disability. This participant, in common with others, compared the actions of the nurse favourably with previous experiences of visiting physiotherapists for musculoskeletal complaints:

I think that the, you know, the fact that she doesn't just give you the piece of paper with it on saying do these. She explained them properly and the benefits of doing them. That's what I get out of them so you know I thought that was pretty good. (Participant 4)

Participants valued that the nurse actively demonstrated the exercises because it helped to clarify what they needed to do and how, coupled with explanations about their current and future benefit. In other words, nurse consultations influenced how patients thought about knee OA and offered some reassurance about future prospects.
For some participants, the series of visits to see the nurse acted as a motivator and, without the nurse's engagement, they would not always have continued using strengthening exercises beyond the first week. As a result, participants suggested they should routinely use exercises, or as another person put it, used them 'religiously three times a day, just like she said'. While participants were cognizant about the reasons for exercising, they did not always maintain exercise as part of their daily lives and routines. Patients who had difficulties (discussed below) suggested that nurses had a particular agenda, intimating that they felt little room was offered for discussion or exploration of alternatives that represented a better fit with their own views: 'she wasn't talking off a script, but it wasn't too much of an interactive discussion' and 'she just kept on about these exercises' (Participant 23).
Continuation with muscle strengthening or aerobic exercise was influenced by either contextual embodied knowledge or availability of appropriate 'places' within which to exercise. We now discuss this in more detail.
Effectiveness of exercise: embodied experiential knowledge. Analysis revealed that an important factor in determining the effectiveness of exercise(s) was whether some form of 'proof' existed in terms of feeling less pain or other perceived benefit. Closely related was the ease of being able to take up exercise, often situated against living with the symptoms of other health conditions.
For some participants, doing muscle strengthening exercise rested upon whether the exercises were found to be beneficial after a period of testing and observation:

The first few times I said it was very, very painful, and was really uncomfortable, but the pain in my knee, I can get comfortable again now, so I'm hoping that that's because of the exercises. So I keep doing them. And keep trying them. (Participant 14)

The above participant demonstrates that decisions about effectiveness were informed by a period of observation of whether muscle strengthening exercises had reduced pain. Other 'visible' evidence of success included reduction in swelling or inflammation, or improvement in function.
Conversely, some participants contested the utility of doing exercise by suggesting it increased pain, again drawing on experiential evidence to make the case:

I tried riding a push bike and I can't, I mean obviously I can ride it but once I come to getting up a hill and that I can't put the pressure on me knees. And any distance walking I can walk for so long but after that I can't, I've got to stop, I've got to rest me knees. (Participant 6)

For this man, the level of pain experienced when cycling or walking made him feel like stopping and resting; thus, he decided that subjecting himself to self-inflicted suffering was not worth persevering with. The next person subtly suggested that the prescribed exercise was ineffectual, as it had not made any immediate noticeable improvements to her condition:

Well I am doing it, I can't say that I've noticed a difference yet, but then again, we're just shy of two weeks of seeing her so, maybe it's not had time to work yet. It hasn't made it worse, but I can't say that it's made it better either. (Participant 12)

The lack of definitive experiential evidence indicating a 'difference' in pain or function levels left her unconvinced that she could derive any benefits from the exercises.
A number of participants (n = 5) who suggested that aerobic exercises might be difficult and unpleasant were influenced by the presence of co-existing health conditions. One man suffered from respiratory problems and said that walking could, at times, be problematic. Because of difficulties arising from his chest complaint, he did not prioritise doing additional exercise for knee pain because of the additional complications and discomfort. Again, embodied knowledge played a role in exercise uptake. This was further influenced by previous advice received from HCPs that had been incorporated into the lifeworld and in turn shaped the meaning and relative importance of other morbidities (Cheraghi-Sohi et al., 2013).
In summary, aerobic and muscle strengthening exercises which participants thought were difficult (often pain conferring), hard to do or yielded little tangible effect were off-putting and discontinued. In our study, a combination of 'lifeworld' embodied experiential knowledge (Germond and Cochrane, 2010) and advice from clinicians shaped participants' engagement with exercise. In some instances, OA was balanced against their experiences and sense-making about other health problems, so patients restricted aerobic exercises to what they felt was manageable and would not 'threaten' their overall health.
Incorporating exercises into the 'places' of the lifeworld. A co-existing theme emerged during analysis of the patient interviews which details how the social and geographical position of individuals was a central factor in whether and how they could embed exercise into their 'lifeworld'. This was partly an issue of resources but, more strikingly, an issue of the meaning of spaces and place.
One example is how participants' engagement with exercise was influenced by their interpretation of how it could fit into existing routines and whether appropriate places could be found to exercise. One woman, for example, considered herself active due to the nature of where she worked and her existing pursuits. Furthermore, she also assessed her existing routine and modified it by incorporating additional activity, catching the bus to work. Thus, she did not experience a contradiction between her lifeworld and medical advice and, as a result, she could adapt her everyday routine without too much difficulty and embed the prescribed advice.
Exercise as an isolated and individual experience could, on the surface, be interpreted as failing to 'motivate' people. Notably, participants discussed the role of appropriate venues for doing exercise. As the following excerpt highlights, participants did not position the 'home' (a part of the lifeworld often seen as a place of leisure, relaxation, safety and retreat (Imrie, 2004)) as an appropriate place to undertake exercise:

Why I don't do the exercises? I don't know. It's just - I think you've got to get into a routine of doing exercise, haven't you? I go to a health spa; a few of us go to a health spa every so often. So I always do exercises there. It's just doing them on your own in the house, which I know sounds silly, but it's just a bit boring. (Participant 9)

Being able to access an appropriate 'place' or venue that had meaning as an arena within which to exercise and socialise was important, and this meaning-giving, as either appropriate or inappropriate for exercise or activity, in turn influenced the activities engaged in and the health benefits obtained (Doughty, 2013; Krenichyn, 2006; Milligan et al., 2013).
A lack of facilities to exercise (including group classes, which would provide an appropriate social space) was highlighted. Finding group classes was not necessarily easy, with participants citing the high cost of using gyms or stating that they would not belong because 'it's hard to find something where there's other people of a similar age'. Other participants felt that they would feel awkward due to their weight and not meeting the 'fit and healthy' body image they associated with gyms:

I think they'd laugh at me if I went to a gym now, too embarrassed . . . I used to go to gyms; it's probably 'cause I was looking at people who were overweight when I wasn't and my comments around that, which were not very nice. And I could imagine the same thing being said about me. (Participant 17)

The presence or absence of venues or 'places' that were meaningful, comfortable and appropriate for exercise was an important facilitating or inhibiting factor for participants when trying to undertake more exercise. In Habermasian terms, participants resisted the encroachment of medical 'system' imperatives (exercise) into certain places of the 'lifeworld' (the home) and embedded the advice using 'suitable' places where possible.
Discussion
We have described a nested qualitative study within a trial that examined if, how and why patients engaged with an intervention to test a method of increasing patient self-management of OA in line with the relevant NICE guidelines (Dziedzic et al., 2018). The findings draw from data gathered via a combination of observations and interviews with practice nurses and patients.
Data analysis revealed that nurses thought the intervention fitted with existing ways of working and their interpretation of their role (Macfarlane and O'Reilly-de Brun, 2012), which was to educate and motivate in a patient-centred manner. They depicted 'success' and 'failure' as related to patients' willingness or motivation to work with them and their agenda; thus, blame was attached to patients for lacking the necessary intrinsic dispositions or personal qualities (Kennedy et al., 2013). Observations of clinical consultations revealed a pattern where nurses discussed exercise in a way which situated a biomedicalised 'system' agenda (Barry et al., 2001; Greenhalgh et al., 2006), often drawing from the 'lifeworld' (or patient experiences) to ensure that a means-ends goal was achieved via problem solving or by turning the focus of the clinics back to exercise, thus ensuring compliance with a biomedicalised model (Zola, 1972). The intervention was unusual for a trial in that there was a degree of flexibility in consultations to ensure patient-centredness. This may have caused some tensions which the nurses had to decide how best to resolve, reflecting previous findings that nurses attempt to remain 'in charge' to maintain their professional role and purpose and not feel undermined (McDonald et al., 2008).
Interviews with patients indicated that attending nurse clinics had ameliorated some commonly reported perceptions that exercise would further damage the joint (Hendry et al., 2006; Holden et al., 2012) and encouraged people to try to exercise. In other words, the explanation and potential of exercise made sense to patients. While patients understood the benefits and logic behind exercise(s) and said that they had engaged with them, they did not necessarily continue with them, consistent with preceding research showing that patient compliance is influenced by experience and meaning-making, and is thus 'reasoned' (Conrad, 1985; Donovan and Blake, 1992; Zola, 1982). Patients made short-term assessments of the efficacy of exercise(s) in terms of pain reduction and improvements in function. This was aided by the perceived ease with which they could do exercises (both strengthening and aerobic), but patients reported that they found it difficult to discuss with the nurses if they encountered any problems. Conversely, other patients cited experiential evidence of no improvements to pain or function, said that exercises caused more pain (Holden et al., 2012), or that exercises were physically difficult to complete. Thus, lay experiential knowledge played a role in if and how patients felt willing or able to incorporate exercise into their lives, similar to previous findings that embodied knowledge is an important resource which influences self-management activities (Pickard and Rogers, 2012). This was sometimes further compounded by the presence of complex multi-morbidities which can be the focus of patients' attention dependent upon (temporally shifting) fluctuations of symptoms and their impact (Cheraghi-Sohi et al., 2013). Notably, engaging in exercise was influenced by the meanings that people placed upon particular environments in relation to their appropriateness as a 'place' to exercise (Doughty, 2013; Krenichyn, 2006; Milligan et al., 2013), which contrasted with meanings associated with the 'home' (Imrie, 2004), arguably a place not connected to exercise because it relates to refuge, safety and relaxation. Participants preferred participating in group exercise classes or using venues which provided an element of social interaction, which was beneficial because it matched their personal priorities or dispositions (Milligan et al., 2013). Conversely, participation could be inhibited by the lack of affordable and welcoming places to exercise where people felt they 'fitted in' (Ali et al., 2012; Morden et al., 2011).
To recapitulate, the trial demonstrated no changes in physical functioning between the control and intervention arms, while showing improvements in patient enablement and uptake of core OA treatments (Dziedzic et al., 2018). The findings from the qualitative study offer insights into why, despite the trial demonstrating an initial uptake of core treatments, no broader change in physical function occurred: notwithstanding those who reported benefits, patients faced a range of challenges in adopting and maintaining exercise long term within their own sense-making domain. Other factors may have played a role, not least the length of the intervention, which may not have provided patients with ongoing support or motivation. However, a longer intervention dose would also benefit from taking account of this study's findings.
Previous research suggests that failing to openly engage with the 'lifeworld' can be detrimental to consultation outcomes (Barry et al., 2001;Greenhalgh et al., 2006;Mishler, 2005). Our findings indicate that this was not necessarily the case. One way of explaining this dissonance is that the Habermasian perspective of society depicts that negative things stem from the 'system' and good things arise from the 'lifeworld' (How, 2003). In contrast, Edwards (2012) notes 'that "system" and "lifeworld" are intermeshed in ways more complex than Habermas suggests' (p. 43) because patients seek to make gains from medicine, which could, for example, be receiving a diagnosis or treating complaints (Ballard and Elston, 2005). Therefore, patients arguably attended nurse clinics because they were seeking a way to ameliorate the impact of pain (discussed as the reason for consulting in interviews, but not reported above) and they followed the nurses' advice because they obtained a clear sense of the potential benefits of exercise.
Habermasian literature argues that when patients encounter the 'system' voice of medicine, in some cases, they engage in forms of 'resistance' when it encroaches into the 'lifeworld' and fails to resonate with values, preferences and existing ways of living and knowing (Jackson and Scambler, 2007; Williams and Popay, 2001). Our findings simultaneously differ from and corroborate this corpus of work. Some patients appropriated medicalised exercise regimes into their lives, be that by incorporating exercise into daily routines, by ensuring it was effective and therefore worthwhile, or by finding appropriate places and spaces to exercise. Other patients resisted the voice of medicine: first, because they had no experiential evidence of effectiveness or were worried about the interaction with co-morbidities; second, because they lacked access to venues that felt meaningful and comfortable as places to exercise. In other words, they placed a boundary around the meaning of places central to their 'lifeworld'. Lived experiential knowledge and the meanings attached to 'place' influenced if, how and why people took up exercise with joint pain. Our findings suggest that incorporating aspects of human geography, specifically how people engage with the phenomenological embodied spatial elements of the lifeworld, can enhance Habermasian theorising, something not always explicitly incorporated into this lineage of thought and analysis. Patients' accounts suggest that they did (initially at least) try exercise(s) despite closed consultations, which contradicts the Habermasian literature (Barry et al., 2001; Greenhalgh et al., 2006; Mishler, 2005) for the reasons outlined above. Patients may have benefitted from a more open communication style, especially in follow-up consultations (which focussed on reiterating messages about exercise). For example, research has found that it is possible to help patients transcend fears of pain relating to exercise (Hurley et al., 2010), and we suggest that how topics are discussed over a series of consultations plays an important role. Positioning patients as motivated or unmotivated as a result of intrinsic personal dispositions can be problematic, because focussing on individual behaviours often omits the contextual factors which underpin 'motivation' (Ong et al., 2014b). A more open and detailed discussion is arguably beneficial for patients (Barry et al., 2001), because it can uncover the complexities of their embodied experiences and sense-making (Germond and Cochrane, 2010), what constitutes appropriate healthful actions and when and where they are deemed acceptable. Such an approach would arguably support continued uptake of exercise and support continued enablement as per the trial findings (Dziedzic et al., 2018). Recent debate considers whether incorporating lay experiential (or 'lifeworld') knowledge can improve the development and conduct of complex interventions and clinical practice (Greenhalgh, 2014; Percival et al., 2017). The findings from this study, particularly relating to the challenges and tensions inherent in communication and negotiation of different agendas, reiterate that paying greater attention to how patient experience is responded to during an intervention, as well as during its design stage, could influence positive outcomes (or otherwise).
Conclusion
This article demonstrates the importance of nesting qualitative studies within trials and drawing on social theory to contextualise and explain findings. Furthermore, the study also elucidates the importance of paying attention to communication styles and patient agendas in consultations. In this case, trial outcomes may have been improved by greater emphasis on consultation style and being responsive to patient needs (beyond what was already incorporated). Finally, the study also offers potential for developing the scope of Habermasian theory applied to health by more explicitly incorporating phenomenological approaches from human geography (space and place) into analysis.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) disclosed receipt of the following financial support for the research, authorship and/or publication of this article: This paper presents independent research commissioned by the National Institute for Health Research (NIHR) Programme Grant (RP-PG-0407-10386). The views expressed in this paper are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health. This research was also funded by the Arthritis Research UK Centre in Primary Care grant (Grant Number 18139). CJ is part-funded by the National Institute for Health Research (NIHR) Collaborations for
Developing and Evaluating the Persian Edition of the Cognitive-Behavioural Assessment in Isfahan Steel Users
Background: Workers are at risk of impaired working memory and poor retrieval of information. The general abilities and knowledge of people at work are influenced by working memory and processing speed. The Cognitive Processing Inventory (CPI) lays out the conditions for assessing effective retrieval of data from working memory and performance on processing-speed tasks. Objective: The present study aimed to develop a valid, psychometrically evaluated questionnaire for assessment in the industrial work environment. Method: To assess internal consistency and repeatability, the questionnaire was applied to 1000 employees at Isfahan Steel Industries, and reliability was determined using Cronbach's alpha and Pearson's correlation coefficient. Result: The final questionnaire is composed of 27 items with high validity and reliability; the resulting tool is a standard instrument that can be applied in the field by researchers, psychologists and health professionals to assess cognitive effects on workers' mental health. More than half of the steel employees performed below age expectations in social abilities, and their general reasoning ability was comparatively low. Conclusion: The CPI can be helpful in quantifying a worker's analytical capacity, provided adequate accommodation is made for relative cognitive weaknesses.
Introduction
The development of human societies and the increased use of the Earth's resources have created problems for humans: acid rain, ozone depletion, excessive air pollution and, most importantly, noise, which, as a pollutant of the human environment, has adverse psychological and physical effects [1]. The concept of information processing is one of the basic concepts for explaining and understanding cognitive activities in humans. According to theories of the mind, information processing speed is one of the most essential foundations of cognitive ability; it affects higher levels of cognitive ability and cognitive operation in the real world, such as school, university and job performance [2]. Information processing speed is one of the main components of cognitive processes and one of the most essential variables to have received recent attention. The study of the relationship between the mind's speed (the speed of information processing) and cognitive abilities goes back to Galton's ideas [3]. Because processing speed is one of the critical elements of the cognitive process, it is crucial for learning, academic performance, intellectual development, reasoning and experience. It is defined as the length of time it takes for a person to perform a mental task. Processing speed refers to the rate at which a person can perceive and react to the information they receive, whether visual (letters and numbers), auditory, linguistic or motor. In other words, the processing time is the time between receiving a stimulus and responding to it [4].
The detrimental effects of noise have been widely observed, in terms of both auditory and non-auditory impairment; the most important of these are discomfort, sleep disorders, cardiovascular disease, cognitive impairment, nervous system stimulation and stress. Noise has different effects depending on the level of exposure and the tolerance of the person [5]. The Cognitive Processing Inventory (CPI) was developed to assist the differential diagnostic process. The CPI processing model is based on recent cognitive research related to neurodevelopment and six general domains (visual processing, auditory processing, sequential/logical processing, conceptual/abstract processing, processing speed and attention), and was designed to measure cognitive processing and assist in the differential diagnosis of specific learning disabilities for ages 7 to 19 years [6]. Each model of information processing used in the CPI helps to explain mental problems and leads to specific guidance for interventions, along with the ability to predict issues in areas (educational and non-educational) that depend on information processing [7]. Noise disruption is considered a major occupational challenge in the iron and steel, smelting, spinning, garment and process sectors. Since the CPI questionnaire was originally intended for educational settings and did not address the issues of industrial workers, and since workers' mental wellbeing is essential, we decided to investigate the validity and reliability of the CPI questionnaire in this population.
Materials and Methods
The present study is a cross-sectional study conducted in 2019 in the central region of Iran. The main data collection tool was the CPI mental processing questionnaire. The questionnaire was first translated from English to Persian; the version obtained from this step was compared with the original English version and showed 99% agreement. After designing the questionnaire, face and content validity were assessed, respectively, and reliability was assessed using Cronbach's alpha and the Pearson correlation coefficient. Various steps were performed to design a CPI questionnaire suitable for industry workers. The first step was to determine the scope of the content of the CPI questionnaire. At this stage, studies were reviewed accurately and extensively for about three months, and the various dimensions of mental processing were studied and identified [8].
The focus of this test is to measure interpersonal behaviors and orientations. For these reasons, the CPI is widely used in staff selection and career guidance. The questionnaire is useful in organizations for identifying and developing successful employees and creating efficient organizations. On the other hand, by using this test and recognizing the personality traits of the people in an organization, a good opportunity is created for the organization's planners to set appropriate training plans and improvement programs. This scale enables planners and decision-makers to make decisions about broad long-term or short-term planning in different areas of behavior. The test can also be used in many research programs that aim to identify the normal personality traits of individuals and provide a basis for broader and deeper research to quantify personality variables. The CPI can be used as part of a formal processing evaluation as well as a pre-referral screening tool [9].
In this evaluation, library and field studies covering various aspects of the questionnaire's content were performed accurately and extensively to identify the various dimensions of the CPI mental processing questionnaire. The dimensions of the designed questionnaire, like those of the original CPI, are visual processing, auditory processing, sequential/logical processing, conceptual/abstract processing, processing speed and attention; the important difference is that the items have been edited and localized for workers in industry. At this stage, in a series of meetings, both group and individual, with ten professors of occupational health, engineering and psychology, several discussions were held on the adequacy and efficiency of the modified questions. After the initial discussions, an initial questionnaire consisting of 27 items was designed and provided to the workers in printed copies. Each question is scored on a five-point scale from one to five, with one indicating obvious difficulty, five indicating obvious ability, and three indicating average skill or used when the assessor is unsure of the correct answer (Appendix A).
The items of the questionnaire were adapted to the hazards and harmful factors of the steel industry work environment, with the input of professors and industry and occupational health experts, so that each question referred to noise and its side effects. The logic of this method is that, after revising the questions to reflect the main physical hazard of the work environment, namely noise, the cognitive and mental processing questions became understandable and legible for workers exposed to noise. By administering this questionnaire twice in a year, we can examine the effect of noise on cognitive and mental parameters.
Validity and reliability of questionnaire measurements are the basic criteria in determining the accuracy and precision of measurements. Reliability is associated with random error and validity with systematic error; by increasing the sample size, one can reduce the random error arising from completion of the questionnaire and thus increase the reliability of the tool.
This, in turn, affects the accuracy of the measurement. It should be noted, however, that increasing validity requires the use of standard and effective tools; in other words, validity indicates the accuracy of the questionnaire measurement [10].
It should be noted that reliability is a prerequisite for the validity of any questionnaire; in other words, if assessments are not reliable, they cannot show the true value of a phenomenon. Therefore, in the validation of instruments, validity should be assessed after verifying reliability. Reliability is associated with random error and validity with systematic measurement error, so reducing random error increases reliability, while reducing systematic error increases validity [11]. Reliability is a necessary condition for test validity, but it is not a sufficient one: for a test to be valid, it must be reliable [12]. Validity and reliability were assessed separately, because poor reliability also reduces validity and indicates that errors occurred in the measurement [13]. From a classical and methodological point of view, there is a significant difference between the basic concepts of truth and certainty, as shown in Figure 1 [14]. From a methodological point of view, validity is related to truth and reliability to certainty. In addition to the relationship between reliability and validity in the methodological approach, the classic view holds an inverse relationship between truth and certainty: reliability decreases as uncertainty increases [12].
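To make the random-versus-systematic error distinction concrete, the following minimal simulation (purely hypothetical data and error magnitudes, not drawn from this study) shows how random error lowers test-retest reliability, whereas a constant systematic error leaves the correlation intact but biases scores away from the true values, which is a validity problem:

```python
# Illustrative only: contrast random vs systematic measurement error.
import numpy as np

rng = np.random.default_rng(1)
true_scores = rng.normal(50, 10, 1000)             # hypothetical true values

occasion_1 = true_scores + rng.normal(0, 8, 1000)  # random error, test
occasion_2 = true_scores + rng.normal(0, 8, 1000)  # random error, retest
biased     = true_scores + 5                       # constant systematic error

# Random error degrades test-retest reliability (correlation < 1)
print("test-retest r with random error:",
      round(np.corrcoef(occasion_1, occasion_2)[0, 1], 2))
# A constant bias leaves the correlation perfect but shifts the mean,
# harming validity rather than reliability
print("correlation with true scores despite bias:",
      round(np.corrcoef(biased, true_scores)[0, 1], 2))
print("mean bias:", round(biased.mean() - true_scores.mean(), 2))
```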
Initial questionnaire for the content validity procedure: the previously mentioned panel of ten professionals was asked to comment on each item; for higher sensitivity in the measurement, each item was assessed using a five-point Likert scale, from "completely agree" (rated 1) to "completely disagree" (rated 5). Distribution of questionnaires: the forms were delivered in person and as an electronic file sent to the panel members for review. Data entry and analysis: after the forms were completed and the data coded, the data were entered into the computer and analyzed using Excel. The average value of the three criteria was used as the total CVI for each item, with a minimum required CVI of 0.79 per item (Kellar & Kelvin, 2013).
The purpose of this study was to design a questionnaire specific to the work environment. The development of measurement tools is an important stage in research processes in the social, educational, and medical sciences, which mainly focus on the measurement of characteristics, qualitative variables, and abstract variables. Validity and reliability are two important components of researcher-made tools, and the quality of their assessment and confirmation is a major concern in research. Before publishing their findings, researchers are required to report on the quality of the validity and reliability of their measurement tools. Precision in explaining these features lays the ground for judging the trustworthiness of the obtained findings and for comparing them with previous studies. If the validity and reliability of research instruments are not confirmed, researchers' efforts will be in vain (13).
Determining the content validity index and introducing the final questionnaire: the content validity index reflects judgments about the validity or applicability of the model, test, or final instrument. For this purpose, in this study, after collecting the questionnaires and entering the data into Excel, the content validity index was examined at three levels: simplicity, clarity, and relevance. Items with a content validity index above 0.75 were considered acceptable.
Face validity was assessed through the item impact score, which must not be less than 1.5; only items with an impact score above 1.5 were accepted. The content validity ratio of each item was then calculated with the following formula:

CVR = (ne − N/2) / (N/2)

(CVR: content validity ratio; ne: the number of experts who selected the "essential and relevant" option for the question; N: the total number of experts). Determining the content validity index: the content validity index (CVI) indicates the comprehensiveness of the judgments related to the validity or applicability of the model, test, or final instrument; the closer the final content validity is to 0.99, the higher the CVI.
The CVI was computed as

CVI = (sum of the CVR values of the retained items) / (number of retained items)

(CVI: content validity index; CVR: content validity ratio; retained number: the number of items remaining). Internal consistency: to measure the internal consistency of the questionnaire, it was administered to 1000 personnel working in different sectors of Isfahan Steel Industries, which was reduced to 880 people given the typical attrition in questionnaire studies. After data collection and coding, internal consistency was determined using Cronbach's alpha (14). Repeatability: to test reproducibility, the designed questionnaire was given to the same people after seven days, and reproducibility was checked with the test-retest method; the Pearson correlation coefficient was used for this purpose (15).
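For concreteness, the sketch below shows how the psychometric quantities described above can be computed. It is a minimal illustration in Python, not the authors' code; the function names and data layout are our own assumptions.

```python
import numpy as np

def cvr(n_essential: int, n_experts: int) -> float:
    """Lawshe's content validity ratio: CVR = (ne - N/2) / (N/2)."""
    return (n_essential - n_experts / 2) / (n_experts / 2)

def cvi(cvr_of_retained_items) -> float:
    """Scale CVI taken as the mean CVR of the retained items (as defined above)."""
    return float(np.mean(cvr_of_retained_items))

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Internal consistency; item_scores is a respondents-by-items Likert matrix."""
    k = item_scores.shape[1]
    item_var = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def test_retest_r(scores_t1: np.ndarray, scores_t2: np.ndarray) -> float:
    """Repeatability: Pearson correlation of total scores one week apart."""
    return float(np.corrcoef(scores_t1, scores_t2)[0, 1])

# Example: 6 experts, 5 rate an item "essential" -> CVR = 0.67, below the 0.79 cutoff.
print(round(cvr(5, 6), 2))
```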
All the ethical codes required for the study were observed, including the voluntary nature of participation in the project and the confidentiality of the information obtained from employees willing to cooperate; the information was used only for the purposes of this article.
Results
CVR values, the numerical mean of the judgments, and the acceptance or rejection of each question of the mental processing questionnaire were assessed. After collecting the questionnaires from the members of the panel group and entering the information into Excel, the CVR values were calculated for each question and for the whole questionnaire. The 50-question questionnaire was given to the experts for evaluation; items with more than 80% mismatch between the experts' and the workers' answers were eliminated after considering all opinions (Table 1). After removing 23 items, the total CVR of the remaining 27 questions was 0.84. The content validity index results show that the CVI values of 23 of the 50 questions were rejected in the final review, and the final total CVI over the six domains was 16.4; the CVI value was 0.1 for simplicity, 0.12 for relevance, and 18.3 for clarity (Table 2). Validity was investigated through the content validation method and a value of 0.84 was obtained, confirming that the questionnaire has solid validity and reliability (Table 3). Internal consistency: after the questionnaire was completed by 880 workers of Isfahan Steel Industries, Cronbach's alpha was 0.84, indicating durable internal consistency. The final questionnaire covers six areas: image recognition (4 items), auditory screening (4 items), sequential/logical processing (7 items), theoretical/intuitive computation (4 items), computing frequency (4 items), and concentration (4 items). Repeatability: a retest process was used to determine the reliability of the mental processing questionnaire. The questionnaire was applied to 1000 employees of Isfahan Steel Industries, and after a week the same sample was asked to complete it again. The correlation coefficient between the scores was estimated at 0.9 (P ≤ 0.001), an acceptable and sufficient value, and the reliability of the CPI questionnaire had a Cronbach's alpha of 0.86. Examining the relationship of each subscale with the whole questionnaire showed the lowest correlation between the image recognition and concentration subscales and the highest correlation for the computing frequency (processing speed) scale. All scales except concentration correlate strongly with the whole questionnaire (r = 0.56 to 0.79, P < 0.0001) (Tables 3 and 4).
Discussion
According to studies of harmful physical factors in the work environment, noise has been proven to be a high-risk factor affecting millions of workers around the world (16). Noise has a variety of effects, including physiological and psychological disorders caused by physical stressors on the body. Noise-induced psychological disorders include anxiety, stress, restlessness, sleep disturbances, and impaired mental performance and information processing (stimulus identification, response selection, and response planning) [15].
Cognitive processing involves receiving, comparing, and changing or retaining information as it passes through cognitive activity in the individual's brain (Basner et al., 2014). Disruption of any information-processing factor impairs the ability to use the information collected through the senses (Zamanian et al., 2013). The findings of the present study confirmed the psychometric characteristics of the CPI questionnaire in a sample of steel industry workers. Nam likewise showed that the CPI questionnaire is a reliable tool for measuring mental processing in people with disabilities. The internal consistency of the CPI questionnaire and its subscales was calculated and validated according to Cronbach's alpha coefficients [16]. This finding is consistent with the results of Pichora-Fuller (2003) [17], Ljunberg & Neely (2007) [18], Saremi & Rezapor (2013) [19], and Kharazi & Rezaian (2018) [20] regarding the impact of harmful workplace factors on cognitive parameters such as accuracy, speed, and attention.
The study performed by Lubitz also showed that the rate of depression can play a practical role in changing the speed of mental processing, which is in line with studies of cognitive processing in shift workers [21]. Another study, by Gyomber et al. [22], examined behavioral differences in management between men and women and found that female managers were more effective in communication skills and mental processing than men; that study is consistent with the goals of the present one. According to Jones, since the test was first published, researchers and clinical psychologists have used it for many purposes of psychological testing, such as predicting academic achievement, graduation from high school or college, and performance in specific areas such as English and mathematics. Given the work pace in industry and the importance of employees, examining this questionnaire in Iranian industries seemed necessary [23]. One of the significant features of the psychological questionnaire (CPI) is the number of scales and features studied to assess mood and social interaction styles, a subject to which psychological researchers should pay attention [21]. Since the CPI's primary focus is on practical benefit and on developing appropriate, comprehensible, and accurate descriptions of behavioral insights, how its basic concepts are understood matters more than everyday social interaction; and since it relates to current aspects of behavior, its interpretation is indirect yet understandable for the subject [21]. Also in this study, as in the study by Lubitz et al. [21], face validity and content validity were used to determine the questionnaire's validity. Experts examined the face validity, and the necessary omissions and corrections were applied. To determine content validity, expert opinions were used and a number of questions were edited (Safaeepour et al. [24]). The content validity index in the present study is 0.84, which is close to the recommended average standard of 0.90, with a difference of 0.06 [25].
In the study performed by Minooei et al. [26], the Positive Health Behaviors Scale (PHBS) was adapted and validated for 1017 nurses, and the results supported the structure of this tool; the PHBS can be used in workplace-based health promotion programs, though only among nurses. Although that study, like the present one, obtained a suitable Cronbach's alpha of 0.844 for all scales, the tool cannot be used for all health cases. The physical causes of several mental disorders have been examined, and it was found that these disorders reduce a person's ability to work and in most cases cause disability [27].
To determine the validity of the questionnaire, face validity and content validity were used. Experts examined the face validity, and the necessary omissions and corrections were applied. To determine content validity, expert opinions were used and a number of questions were edited. The content validity index in the present study is 0.84, close to the result of Pezeshki et al. [28], who recommend a standard content validity index of 0.90. In reviewing the reliability of the questionnaire, Cronbach's alpha was determined by the internal consistency method, and the results showed that all items have an appropriate coefficient, since the total Cronbach's alpha for this questionnaire was 0.86. According to the study of Jhun et al. [29], which introduced 0.70 as an appropriate reliability threshold, the reliability of the designed questionnaire for application to employees of different industries is practical and relevant. Since the CPI questionnaire was examined here for the first time in a sample of Iranian employees, repetition in different samples, especially in more process-oriented jobs, is required. Overall, the present data show that the CPI questionnaire is a valid and reliable tool for measuring mental processing; given the research literature that emphasizes studying processing and the speed of mental performance with appropriate and specific tools, this questionnaire can be used in studies and research related to this field.
Conclusion
Applying a mental processing questionnaire can be a suitable guide for producing knowledge and improving health and psychological understanding of the work environment according to the professional characteristics of the employees, the working conditions, and the culture governing the labor profession. According to the results of this study, the questionnaire, with an average validity index of 16.4, a Cronbach's alpha of 0.84, and a test-retest correlation coefficient of 0.9, is a suitable tool for measuring the health and mental knowledge of the work environment in the working population.
Through the application of this questionnaire, it will be possible to assess the state of mental health before hiring a worker and then to follow up on workers' health status. In addition, health and safety technicians at work should adopt preventive measures to improve cognitive conditions in the workplace; after implementing these measures, they should apply the questionnaire again to verify their effectiveness. The proposed strategy will allow increasing workforce productivity by reducing cognitive and neurological problems.

Table 1. Expert panel evaluation of the initial questionnaire items (impact score and CVR).

| Item | Question | Decision | Impact score | CVR (%) |
|---|---|---|---|---|
| 1 | How disturbed is the coordination between your speech and breathing when exposed to sound? | Acceptance | 1.6 | 66.67 |
| 2 | Task duration in the noisy environment | Acceptance | 1.8 | 83.33 |
| 3 | Reaction time to unexpected events in the workplace (in the presence of noise) | Acceptance | 1.6 | 66.67 |
| 4 | The ability to perform calculations is affected during exposure to loud noises in the workplace | Acceptance | 2 | 100 |
| 5 | The ability to develop team work is affected when there is noise in the workplace | Acceptance | 2 | 100 |
| 6 | Satisfaction with the assigned task in a noisy workshop | Non-acceptance | 1.3 | 33.33 |
| 7 | The ability to memorize information when exposed to a noisy environment | Acceptance | 1.8 | 100 |
| 8 | The ability to recognize similarities and differences in noisy environments | Non-acceptance | 1.1 | 33.33 |
| 9 | The ability to understand what you hear or see is affected by the noise | Acceptance | 1.8 | 100 |
| 10 | The ability to understand or remember work-related instructions and guidelines when exposed to noise | Non-acceptance | 1.1 | 33.33 |
| 11 | The ability to remember names and new phrases at work when exposed to noise | Acceptance | 1.8 | 83.33 |
| 12 | The ability to remember essential phone numbers while working in a noisy environment | Acceptance | 1.7 | 66.66 |
| 13 | The ability to remember the name of work processes or their details in the presence of noise in the workshop | Acceptance | 1.7 | 66.66 |
| 14 | The ability to recall basic information about the task in noisy environments | Acceptance | 1.8 | 83.33 |
| 15 | The ability to quickly think about a problem or situation occurring in a noisy environment | Acceptance | 2 | 100 |
| 16 | The ability to sort and arrange quickly in a noisy work environment | Non-acceptance | 1 | 16.67 |
| 17 | The ability to schedule and divide activities into smaller parts or steps in noisy environments | - | - | - |
| 28 | Ability to function correctly in a noisy environment | Acceptance | 1.6 | 100 |
| 29 | The ability to recognize sounds from each other in noisy environments | Acceptance | 1.8 | 100 |
| 30 | The ability to stay focused on the assigned task when there is a high level of noise in the workplace | Acceptance | 1.5 | 66.67 |
| 31 | The ability to find creative and new ideas and new ways to work in the presence of noise in the workplace | Non-acceptance | 1.1 | 33.33 |
| 32 | Ability to visualize and imagine (images, faces, words, numbers, etc.) in your mind in noisy environments | Non-acceptance | 1 | 50 |
| 33 | Ability to work for long periods of time when exposed to noise | Non-acceptance | 1.1 | 50 |
| 34 | Orientation skills and identifying the location of tools in the workplace in the presence of loud noises | Non-acceptance | - | - |
| 47 | How affected is your ability to navigate horizontally and vertically while working in a noisy environment? | Non-acceptance | 1 | 16.67 |
| 48 | How are your balance and ability to perform a specific task affected when exposed to a noisy environment? | Non-acceptance | 1.2 | 33.33 |
| 49 | What is your ability to understand speech and distinguish between S and SH when exposed to loud sounds? | Acceptance | 2 | 100 |
| 50 | To what extent is your ability to understand speech in a noisy area affected? | Acceptance | 1.8 | 100 |
Spatiotemporal Changes in Aridity of Pakistan during 1901-2016
The changing characteristics of aridity over large spatiotemporal scales have gained interest in recent years due to climate change. The long-term (1901-2016) changes in spatiotemporal patterns of annual and seasonal aridity during the two major crop growing seasons of Pakistan, Kharif and Rabi, are evaluated in this study using gridded precipitation and potential evapotranspiration (PET) data. The UNESCO aridity index was used to estimate aridity at each grid point for all the years between 1901 and 2016. The temporal changes in aridity and its associations with precipitation and PET are evaluated by implementing a moving window of 50 years of data with an 11-year interval. The modified Mann-Kendall trend test is applied to estimate unidirectional change by eliminating the effect of natural variability of climate, and the Pettitt's test is used to detect the year of change in aridity. The results reveal that the climate over 60% of Pakistan (mainly in the southern parts) is arid. The spatial patterns of aridity trends show a strong influence of the changes in precipitation on the aridity trend. An increasing trend in aridity is noticed in the southeast, where precipitation is low during Kharif, while a decreasing trend is found in the Rabi season in the region which receives high precipitation due to western disturbances. The annual and Kharif aridity are found to decrease in the northeast, with the aridity index rising at a rate of 0.0001 to 0.0002 per year, while Kharif and Rabi aridity are found to increase at some locations in the south, with the aridity index changing at a rate of -0.0019 to -0.0001 per year. The spatial patterns of aridity changes show a shift from arid to semi-arid climate in annual and Kharif over a large area, and a shift from arid to hyper-arid during Rabi in a small area. Most of the significant changes in precipitation and aridity are observed in the years between 1971 and 1980. Overall, aridity is found to increase in 0.52%, 4.44%, and 0.52% of the area and decrease in 11.75%, 7.57%, and 9.66% of the area for annual, Rabi, and Kharif seasons respectively during 1967-2016 relative to 1901-1950.
Introduction
More than 20% of the global population is living in arid regions under the threat of severe consequences of climate change, particularly due to increasing hydrological extremes (Alazard et al., 2015). The temporal variability and spatial distribution of precipitation and other hydrological phenomena have significantly changed with the increase in global temperature (Kousari et al., 2014). Changes in precipitation have caused more hydrological extremes such as floods or droughts. The ecosystems of arid and semi-arid climates are sensitive to minor changes in climate (Ahmed et al., 2018). These regions are also characterised by very complex hydrological systems due to high variability in precipitation, which often exhibits extreme behaviours, such as flash floods caused by extreme precipitation and extended droughts due to prolonged dry spells (Buytaert et al., 2012). Droughts are projected to become more frequent and severe in arid regions due to an increase in aridity (Nam et al., 2015), as reported in Iran, Serbia (Hrnjak et al., 2014), Turkey (Selek et al., 2018), Iraq (Şarlak and Agha, 2018), India (Ramarao et al., 2018) and China (Liu et al., 2018a), among others. Climate models project an increase of 11 to 23% by 2100 in the global arid and semi-arid climate area, which will expand aridification in different parts of the globe (Huang et al., 2016).
Pakistan, located in South Asia, has a complex terrain with limited water resources. Several attempts have been made to classify the aridity and climate of Pakistan based on different climate variables and methods (Bharuqha and Shanbhag, 1956; Oliver et al., 1978; Shamshad, 1988; Chaudhry and Rasul, 2004; Hussain and Lee, 2009; Zahid and Rasul, 2011; Sarfaraz et al., 2014; Haider and Adnan, 2014). Bharuqha and Shanbhag (1956) classified the climate of a station (Hyderabad) based on the fraction of precipitation to evaporation for the period 1926−1940 and found that Hyderabad has an arid (desert) climate. Oliver et al. (1978) applied a clustering approach for climate classification using meteorological data from 53 stations; the results showed that Pakistan has nine climate regimes, where most of the area falls under arid climate. Chaudhry and Rasul (2004) used Thornthwaite's precipitation effectiveness index (PEI) for the estimation of annual and seasonal aridity for the period 1961-1990 using temperature data of 50 stations; the results showed that around 75% of the land has an arid climate while only a small area in the north-eastern plain has a sub-humid climate. Hussain and Lee (2009) classified the climate using factor and cluster analysis utilising 26 years of data. Sarfaraz et al. (2014) used the Köppen classification based on data from 59 stations for the period 1981 to 2010 and showed that 75% of the country has an arid to semi-arid climate. Recently, Nabeel and Athar (2018) classified the climate based on wet and dry spells using data from 46 stations for the period 1976-2007. They reported that 66% of the country belongs to the arid climate while only 4% belongs to the humid climate. Even though several studies have been conducted for the classification of climate using aridity indices, there is still no comprehensive study to assess the long-term trends in the aridity of Pakistan in different seasons (annual, Kharif and Rabi). Furthermore, no study has been conducted to determine the impacts of climate change on aridity, particularly the influence of different climate variables like precipitation, temperature and potential evapotranspiration on aridity in different seasons.
Both increasing and decreasing trends in aridity have been reported in different regions of the world due to climate change.
Several studies reported an increase in aridity at global (Dai, 2013; Trenberth et al., 2014) and regional (Ramarao et al., 2018; Jiao et al., 2016) scales. On the other hand, a decrease in aridity has also been reported in the USA (Finkel et al., 2016), China (Yin et al., 2018) and some regions of Iran (Tabari and Talaee, 2013). In recent years, an increase in aridity in some regions of Pakistan has been reported (Haider and Adnan, 2014). However, this was an anticipation based on the assumption that rising temperature has intensified PET and thus increased aridity. The magnitude of the temperature rise and the changes in regional precipitation patterns determine the changes in the aridity of an area. Therefore, it is required to assess the changes in aridity at the regional scale considering the changes in both temperature and precipitation due to global warming.
Several studies suggest rising temperature and changing precipitation in Pakistan due to global warming. Recently, Pakistan experienced several temperature extremes in the form of scorching heatwaves that resulted in significant fatalities.
Additionally, the prolonged spell of droughts due to lack of seasonal precipitation has caused enormous economic damages.
The annual maximum temperature in the country is increasing at a rate of 0.17-0.29°C/decade, while the precipitation is reported to increase in the north and decrease in the south at a rate of -4 to 4 mm/year in the last fifty years (Ahmed et al., 2017).
As the characteristics of precipitation, PET and aridity change with season and time, it is also imperative to evaluate their changing patterns for different periods and seasons.
The main objective of the present study is to evaluate the changing characteristics of aridity based on precipitation and PET in the annual and two distinct cropping seasons (Rabi and Kharif) of Pakistan. Several aridity indices are available for the classification of aridity, such as the de Martonne aridity index (de Martonne, 1926), Thornthwaite aridity index (Thornthwaite, 1931), Erinç aridity index (Erinç, 1965) and UNESCO aridity index (UNESCO, 1979). Among all the aridity indices, the UNESCO aridity index, which considers the effect of precipitation and PET for the classification of climate, is most widely used (Zarch et al., 2017). The long-term gauge-based gridded precipitation and PET datasets are analysed by implementing a moving window of 50 years of data with an 11-year interval. The modified Mann-Kendall (MMK) trend test is used to evaluate the significance of the changes estimated using Sen's slope estimator, and the Pettitt's test is used to identify the year of change in aridity and climate. It is expected that the use of the MMK test would reveal the changes in aridity due to global warming by eliminating the effect of the natural variability of climate, which manifests as long-term autocorrelation in the time series. The procedures presented in this study can be used for the assessment of the changing characteristics of aridity and the identification of the factors that drive the changes, which can help to understand the possible shift in the climatology of an area owing to climate change. The findings of the study can help Pakistan in planning adaptation measures and adjusting cropping patterns to ensure sustainability in agriculture.
Description of the Study Area
Pakistan, located in South Asia, shares borders with India in the east, China in the north, Iran and Afghanistan in the west, and has a long coastline with the Arabian Sea in the south (Fig. 1). Around 80% of the land is characterised by an arid to semi-arid climate, where precipitation is low and temperature is high (Khatoon and Ali, 2004). The topography of the country varies widely, from plain lands in the south to high mountainous ranges in the north. The large variations in topography, from 0 to 8552 m above mean sea level, cause large variations in the climate of the country. The precipitation in both seasons varies widely in time and space (Ullah et al., 2018b). The mean precipitation is 119 mm/year in Rabi and 191 mm/year in Kharif. The precipitation varies from 10 to 700 mm from the southwest to the north during Rabi and from 11 to 900 mm from the southwest to the north near the foothills of the Himalaya during Kharif. Most of the country receives precipitation of less than 100 mm in Rabi and less than 190 mm in Kharif (Ahmed et al., 2018a).
The Rabi and Kharif have contrasting temperatures due to their coincidence with winter and summer respectively (Ullah et al., 2018a). The annual average temperature is 14°C during Rabi and 26°C in Kharif. The average temperature in around 10% of the country (the northwest and far north) goes below zero during Rabi, while it goes above 30°C in 45% of the area during Kharif. The overall temperature varies from -12 to 23°C in Rabi and from 1.9 to 33°C in Kharif (Ahmed et al., 2018a).
Data
The gauge-based gridded climate data are widely used as a proxy of observed precipitation and PET data around the world (Shiru et al., 2018). Over the last two decades, several gauge-based gridded datasets have been developed and applied for different purposes (Li et al., 2014). Among them, the datasets of the Global Precipitation Climatology Centre (GPCC) (dwd.de/EN/ourservices/gpcc/gpcc.html) and the Climatic Research Unit (CRU) (Harris et al., 2014) of the East Anglia University (crudata.uea.ac.uk) are the most popular due to their spatial and temporal continuity (Kishore et al., 2015). Thus, GPCC precipitation and CRU PET data are used in the present study. The GPCC and CRU data have several advantages that make these products superior to others. Both datasets are developed by considering a relatively large number of in-situ data (Schneider et al., 2014; Harris et al., 2014). Besides, these datasets are available at high spatial resolution (0.5° × 0.5°) and for a long period (1901 to 2016), which helps better understanding of the changes in climate (Spinoni et al., 2014). Furthermore, an extensive quality control procedure was followed during the development of the GPCC and CRU data, which has made them more reliable compared to other products (Sun et al., 2014). Additionally, robust interpolation techniques were used in the development of GPCC (Spheremap spatial interpolation) and CRU (thin plate smoothing splines interpolation) (New et al., 2002). Several studies revealed good agreement of GPCC and CRU data with station records of Pakistan (Adnan and Ullah, 2015; Asmat et al., 2017). The precipitation and PET data are extracted from 350 grid points for the period 1901-2016 to cover the whole of Pakistan.
The procedure used for the assessment of the changes in the characteristics of aridity in Pakistan is outlined below (a schematic sketch of this pipeline follows the list):
1) The aridity is estimated as the ratio of precipitation to PET at each GPCC/CRU grid point for all the years during 1901-2016. The aridity values are estimated separately for the annual, Rabi and Kharif seasons.
2) Sen's slope estimator is used to estimate the rate of change in precipitation, PET and aridity in the annual, Rabi and Kharif seasons for the period 1901-2016.
3) The MMK trend test is used to evaluate the significance of the change in precipitation, PET and aridity for all the seasons.
4) The influence of precipitation and PET on aridity is assessed for different 50-year windows with an interval of 11 years over the period 1901-2016.
5) The shift in the aridity from one aridity class to another between two periods, 1901-1950 and 1967-2016, is mapped to assess the changes in the areal extent of different aridity classes.
6) The Pettitt's test is used to detect the change points in aridity, precipitation and PET in Pakistan.
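As a concrete, deliberately simplified illustration, the sketch below strings these steps together for a single grid point in Python. It is not the authors' code: the array layout is an assumption, and `sens_slope` and `mmk_test` refer to the helper routines sketched in the following subsections.

```python
import numpy as np

def aridity_index(precip: np.ndarray, pet: np.ndarray) -> np.ndarray:
    """Step 1: AI = P/PET per year (annual, Rabi or Kharif totals in mm)."""
    return precip / pet

def pettitt(x: np.ndarray):
    """Step 6: Pettitt change-point test -> (change index, K_T, approx. p)."""
    T = len(x)
    U = np.array([np.sign(x[:t, None] - x[None, t:]).sum() for t in range(1, T)])
    K = np.abs(U).max()
    p = 2.0 * np.exp(-6.0 * K**2 / (T**3 + T**2))
    return int(np.argmax(np.abs(U))) + 1, K, min(p, 1.0)

def analyse_grid_point(precip, pet, years):
    ai = aridity_index(precip, pet)          # step 1
    slope = sens_slope(ai)                   # step 2 (sketched below)
    significant = mmk_test(ai)               # step 3 (sketched below)
    change_year = years[pettitt(ai)[0]]      # step 6
    return slope, significant, change_year
```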
Aridity Index
Aridity index (AI) is often used to quantify the long-term climatic conditions of an area (Ashraf et al., 2014). Several definitions of aridity can be found in the literature, derived using different climate variables like precipitation, temperature and PET (Zarch et al., 2017). Among them, the AI definition of UNESCO (1979), as a ratio of precipitation to PET, is most widely used (UNEP, 1992). The precipitation and PET data are averaged for a year or a season to estimate AI.
Various methods are available in the literature to estimate PET. Among them, the Thornthwaite (Thornthwaite, 1948) and Penman-Monteith (Monteith, 1965) methods are the most widely used. The Penman-Monteith method is adopted in UNESCO (1979) while the Thornthwaite method is adopted in UNEP (1992) for defining aridity. The Thornthwaite method is preferred over the Penman-Monteith method in data-scarce regions (Zarch et al., 2015). However, the Penman-Monteith method provides a better estimation compared to other approaches (Tukimat et al., 2012). Thus, CRU PET data estimated using the Penman-Monteith method are used in the present study.
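To make the classification step concrete, the sketch below maps AI = P/PET onto the commonly cited UNESCO (1979) class boundaries. The exact thresholds are an assumption on our part, since the excerpt above does not list them.

```python
import numpy as np

# Commonly cited UNESCO (1979) boundaries for AI = P/PET (assumed here).
UNESCO_CLASSES = [(0.05, "hyper-arid"), (0.20, "arid"), (0.50, "semi-arid"),
                  (0.65, "sub-humid"), (np.inf, "humid")]

def classify_ai(ai: float) -> str:
    """Return the UNESCO aridity class for a single AI value."""
    for upper, label in UNESCO_CLASSES:
        if ai < upper:
            return label

# Example: mean Rabi precipitation of 119 mm against a PET of ~1000 mm.
print(classify_ai(119.0 / 1000.0))   # -> "arid"
```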
Sen's Slope Estimator
Sen's slope estimator (Sen, 1968) is a non-parametric method which is widely used for robust estimation of change over a period (Yue et al., 2002; Khan et al., 2018). In this method, the rate of change between every pair of points in time is first estimated for the whole series. The median of all these pairwise changes is then taken as the rate of change for the whole period. The slope between two data points is calculated as

$\beta_{jk} = \dfrac{x_j - x_k}{j - k}, \quad j > k$

where $\beta_{jk}$ is the slope between the two data points $x_j$ and $x_k$ observed at times $j$ and $k$. The median of the $\beta_{jk}$ estimated for all pairs of data points in the time series is the Sen's slope, which gives a measure of change for the whole period.
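A minimal sketch of this estimator in Python (the function name and vectorised pairing are our own):

```python
import numpy as np

def sens_slope(x: np.ndarray) -> float:
    """Median of the pairwise slopes (x_j - x_k) / (j - k) over all j > k."""
    n = len(x)
    k, j = np.triu_indices(n, k=1)       # all index pairs with j > k
    return float(np.median((x[j] - x[k]) / (j - k)))

# Example: a series drifting upward by roughly 0.5 per step.
print(sens_slope(np.array([1.0, 1.6, 2.1, 2.4, 3.1])))
```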
Modified Mann-Kendall (MMK) Test
In the MMK test (Hamed, 2008), the significance of the trend is first computed by applying the classical MK test. The MK test statistic $S$ for a time series with $n$ data points can be calculated as

$S = \sum_{i=1}^{n-1}\sum_{j=i+1}^{n} \operatorname{sgn}(x_j - x_i)$

where $x_i$ and $x_j$ are sequential data values and $\operatorname{sgn}(\theta)$ equals 1, 0 or $-1$ for $\theta > 0$, $\theta = 0$ and $\theta < 0$ respectively. The standardised test statistic $Z$ is then calculated from the variance of $S$, $V(S) = n(n-1)(2n+5)/18$, as

$Z = \begin{cases} (S-1)/\sqrt{V(S)} & S > 0 \\ 0 & S = 0 \\ (S+1)/\sqrt{V(S)} & S < 0 \end{cases} \qquad (5)$

The null hypothesis of no trend is rejected at a confidence level of 95% if $|Z| > 1.96$. The MMK test is conducted when the null hypothesis of no trend is rejected. For this purpose, the existing trend in the time series data is removed, and the data are ranked. The equivalent normal variates of the ranked data $R_i$ are calculated as

$Z_i = \Phi^{-1}\!\left(\frac{R_i}{n+1}\right)$

where $\Phi^{-1}$ is the inverse standard normal distribution function. The Hurst coefficient $H$ is then estimated by maximising the log-likelihood function. If $H$ is found significant, the biased variance of $S$ is calculated as

$V(S)^{H} = \sum_{i<j}\sum_{k<l} \frac{2}{\pi}\sin^{-1}\!\left(\frac{\rho_{|j-l|} - \rho_{|i-l|} - \rho_{|j-k|} + \rho_{|i-k|}}{\sqrt{(2-2\rho_{|i-j|})(2-2\rho_{|k-l|})}}\right)$

where $\rho_{l}$ is the autocorrelation function of lag $l$ for the given $H$. The unbiased estimate is calculated as

$V(S)' = V(S)^{H} \times B$

where $B$ is a function of $H$:

$B = a_0 + a_1 H + a_2 H^2 + a_3 H^3 + a_4 H^4$

where the coefficients $a_0, a_1, a_2, a_3$ and $a_4$ are functions of the sample size $n$, which can be found in Hamed (2008). The significance of the MMK test is computed by using $V(S)'$ in place of $V(S)$ in equation (5).
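The sketch below implements the classical MK core of this test; for brevity it omits the Hurst-coefficient estimation and the variance inflation of Hamed (2008), which would rescale `var_s` before the significance check. The function name is an assumption.

```python
import numpy as np
from scipy.stats import norm

def mmk_test(x: np.ndarray, alpha: float = 0.05) -> bool:
    """Classical MK significance test (Hurst variance correction omitted)."""
    n = len(x)
    i, j = np.triu_indices(n, k=1)
    s = np.sign(x[j] - x[i]).sum()                 # MK statistic S
    var_s = n * (n - 1) * (2 * n + 5) / 18.0       # V(S), no tie correction
    # In the full MMK test, var_s would be replaced here by the
    # Hurst-corrected estimate V(S)' = V(S)^H * B of Hamed (2008).
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    return abs(z) > norm.ppf(1 - alpha / 2)        # 1.96 at the 95% level
```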
Relationship of Aridity Trends with Precipitation and PET
The relationships of precipitation and PET with aridity are assessed using a moving window of 50 years with an 11-year interval over the study period, i.e., 1901-1950, 1912-1961, 1923-1972, 1934-1983, 1945-1994, 1956-2005 and 1967-2016. The main purpose of considering a 50-year window is to decipher the changing pattern in the relationship over the study period. The 11-year interval was considered so that the windows cover the whole period (1901-2016).
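These seven windows can be generated mechanically, for example:

```python
# 50-year windows starting every 11 years from 1901 (last start: 1967).
windows = [(start, start + 49) for start in range(1901, 1968, 11)]
print(windows)
# [(1901, 1950), (1912, 1961), (1923, 1972), (1934, 1983),
#  (1945, 1994), (1956, 2005), (1967, 2016)]
```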
Pettitt Test
The point of change in a time series is detected using the Pettitt test (Pettitt, 1979). This nonparametric test allows identification of the point at which a significant shift occurred in a time series. The test relies on the Mann-Whitney statistic, where the two samples $x_1, \ldots, x_t$ and $x_{t+1}, \ldots, x_T$ are tested to confirm whether they are from the same population or not. The Mann-Whitney statistic is calculated as

$U_{t,T} = \sum_{i=1}^{t}\sum_{j=t+1}^{T} \operatorname{sgn}(x_i - x_j)$

and the test statistic $K_T = \max_{1 \le t < T} |U_{t,T}|$ locates the most probable change point, with approximate significance $p \approx 2\exp\left(-6K_T^2/(T^3 + T^2)\right)$.

Spatial Distribution of Precipitation and PET

The Rabi and Kharif precipitation, as percentages of annual precipitation, are shown in Figures 2b and 2c. Kharif season coincides with the monsoon, which mostly enters from the northeast of Pakistan. Therefore, the eastern part of the country receives more rainfall in Kharif (60 to >80% of annual precipitation) (Figure 2c). Overall, the average annual and seasonal precipitations are high in the north and low in the southeast during Rabi and in the southwest during Kharif. Figure 2d depicts the spatial distribution of average annual PET. The PET is relatively high in the south and low in the north.
The southwestern and southeastern corners show the highest PET, ranging from 2100 to 2529 mm. The Rabi and Kharif PET, as percentages of annual PET, are presented in Figures 2e and 2f respectively. Like the annual PET, the PET in Rabi and Kharif shows more or less similar distributions. In Rabi, PET is low (< 30%) in the north and high (> 45%) in the south, while the spatial distribution of PET during Kharif is the opposite of Rabi. The relatively lower PET in Rabi indicates the influence of winter (low temperature), while the higher PET in Kharif is due to its coincidence with summer, when the temperature is usually high in most of the country.
Spatial Pattern of Annual and Seasonal Aridity
The AI values are classified as hyper-arid, arid, semi-arid, sub-humid and humid based on the UNESCO classification to show the spatial distribution of annual and seasonal aridity in Pakistan (Figure 3). The annual aridity over Pakistan for the period 1901-2016 (Figure 3a) reveals an arid climate in most of the country (61%), followed by semi-arid (21%) and humid (11%).
The arid climate covers a larger area in the south and a small area in the top north. The sub-humid and humid climates dominate near the foothills of the Himalaya, where precipitation is high. On the other hand, the climate in a small area (2%) in the southwest is found to be hyper-arid, where PET is high and precipitation is very low. Figure 3b shows the spatial patterns of aridity during Rabi. Cold winds bring precipitation from the Mediterranean Sea during the Rabi season, entering the country from the southwest, and therefore aridity in Rabi is notably less in the southwest. The percentage of the area belonging to semi-arid, sub-humid and humid climates increases during Rabi, which indicates a decrease in aridity over a major portion of the country. However, the area belonging to the hyper-arid climate (9%) increases in the southeast during Rabi. Besides, the aridity in the top north reduces and the humid climate zone near the foothills of the Himalaya increases.
The spatial distribution of aridity during Kharif is presented in Figure 3c. Most of the country is characterised by an arid climate (59%), followed by semi-arid (20%) and hyper-arid (9%). The area belonging to the hyper-arid climate in the southwest increases during Kharif due to the lack of precipitation in the west during this season. On the other hand, aridity reduces in the southeast due to monsoon precipitation. The area in the top north and near the foothills of the Himalaya characterised as semi-arid also reduces, which could be owing to an increase in PET.
Overall, the figures show that the climate in more than 70% of the country is arid to semi-arid. The aridity varies with the season because the occurrence and dominance of precipitation differ between seasons. In general, the southern region of the country is characterised by an arid climate and the north is predominantly sub-humid to humid.
Spatial Pattern in the Trends of Precipitation and PET
Sen's slope is used to assess the magnitude of change in precipitation and PET for all the seasons at all the 350 grid points over Pakistan to prepare the corresponding maps, as shown in Figures 4 to 6. The significant increasing/decreasing trends estimated using the MMK test at the 95% level of confidence are presented using the plus (+) and minus (-) signs in the figures. An increase in precipitation indicates a wetter and a decrease a drier condition, while an increase in PET indicates a drier and a decrease a wetter condition. Figure 4a shows that annual precipitation is increasing significantly over a large area in the northeast and at a few places in the far north, while it is decreasing significantly at a few places in the south and at three locations near the foothills of the Himalaya. It is worth mentioning that precipitation is decreasing at a few locations near the foothills of the Himalaya, where precipitation is highest in Pakistan (Figure 2a). The spatial distributions of the trends in annual PET are shown in Figure 4b. The annual PET in Pakistan is increasing (higher evaporation rates) in the southeast corner and decreasing (lower evaporation rates) at a few grid points scattered in the centre and north-western parts, where precipitation is usually high and the temperature is low.
The PET in Rabi (Figure 5b) is found to increase significantly (high evaporation rates) over a large area in the southeast and the southwest, while it is not found to decrease significantly at any location. The Kharif precipitation (Figure 6a) is found to increase significantly in the northeast and at two grid points in the north. The 10 significant decreasing trend in Kharif precipitation is also observed over a large area in the southwest and at a few grid points near the foothills of Himalaya. Overall, the spatial patterns in annual and Kharif precipitation trends are found very similar. The spatial distribution of PET trends in Kharif is displayed in Figure 6b. The figure shows a significant decrease in PET over a large area in the northeast and decreases at two grid points in the south.
Spatial Pattern in the Trends of Annual and Seasonal Aridity
Sen's slope method was used to estimate the changes in the aridity values calculated using the UNESCO method, and the MMK test was used to determine the significance of the change at the 95% level of confidence. The changes in the aridity index are found in the range of -0.0039 to 0.0060 for Pakistan (Figure 7). The values were divided into five classes using the natural breaks algorithm available in ArcGIS 10.3. The plus sign in the figure indicates a significant reduction in aridity (wetter condition), while the minus sign indicates a significant increase in aridity (drier condition). Figure 7a shows that the mean annual aridity has a significant wetting trend over a large area in the northeast and a significant drying trend at a few locations in the south.
The aridity trends in Rabi (Figure 7b) show a significant drying trend in the southwest, at two grid points in the centre and in the south. Significant wetting during Rabi is also observed at a few grid points in the north. The aridity trend in Kharif (Figure 7c) is found to follow patterns similar to the annual aridity trend. A significant wetting trend is noticed over a major area in the northeast and at a few grid points in the north, while a drying trend is noticed at a few locations in the southwest and in the southern corner of the country. Overall, the results reveal a wetting trend over a major portion of the northeast and a drying trend at a few locations in the southwest.
Time-varying Trends in Areal Extent of Aridity
A moving window of 50 years with an 11-year interval over the period 1901-2016 is used to assess the time-varying trends in aridity, precipitation and PET. The major purpose was to understand the influence of precipitation and PET on aridity in different periods. The obtained results are presented in Figures 8 and 9. The figures show a higher influence of precipitation on aridity compared to temperature. For instance, a reduction in precipitation at 80 grid points caused an increase in aridity at 77 grid points in 1923-1972. On the other hand, a decrease in PET at 150 grid points was the reason for a reduction in aridity at 40 grid points during 1934-1983 (Figure 8). Similar results are also noticed for the Rabi and Kharif seasons (Figure 9). Therefore, it can be remarked that the changes in precipitation have a higher impact compared to PET in determining the aridity of Pakistan.
The Shift in Aridity
The spatial pattern in the shift of aridity from one class to another is estimated by comparing the aridity maps of the early period (1901 to 1950) and the late period (1967 to 2016). The obtained results are presented in Figure 10. The shifts of aridity from one class to another are illustrated using different colours, while the white colour represents no shift in aridity class.
The annual climate in a large area is found to shift from arid to semi-arid (Figure 10a). A shift from semi-arid to sub-humid climate is also observed at a few grid points near the foothills of Himalaya. On the other hand, the climate at two grid points in the southwest is found to shift from semi-arid to arid.
Relatively more changes in aridity are observed during Rabi (Figure 10b) compared to annual. A large area in the southeast has changed from arid to hyper-arid. The climate at some grid points in the centre and the southwest is also found to change from semi-arid to arid. Besides, the sub-humid climate at a grid point in the north is found to become humid. The climate at several points has also changed from hyper-arid to arid in the southeast and from arid to semi-arid at different locations in the north.
The spatial pattern of the shift of climate in Kharif (Figure 10c) reveals a change from arid to semi-arid climate in the central region and from hyper-arid to arid at a grid point in the southwest corner, while semi-arid changed to arid at a grid point in the southeast corner.

Figure 10. Changes in the spatial patterns of aridity between 1901-1950 and 1967-2016.

The percentages of changes in the different aridity classes are shown in Table 1. No shift in aridity class is observed in more than 85% of the area. There are both positive shifts (more arid to less arid class) and negative shifts (less arid to more arid class).
However, positive shifts are relatively more common than negative shifts. The highest positive shift is found from arid to semi-arid climate (9.14% of the total area for annual and 5.48% for Kharif), while 2.61% of the area is noticed to shift from semi-arid to sub-humid climate during the Rabi season. On the other hand, a negative shift is noticed in only 0.52% and 0.27% of the area for annual and Kharif, and in a relatively higher area (2.61%) for Rabi.
Detection of Change Point in Climate
The areal averages of aridity, precipitation and PET for the different aridity classes are used to detect the year of their changes using the Pettitt's test. The significant changes detected in different years are presented in bold in Table 2. Most of the changes in aridity and precipitation are detected between 1971 and 1980, while the change points for PET show more significant changes compared to aridity and precipitation.
It is important to note that the changes (years) detected for aridity and precipitation are the same for all seasons. For example, the change point of both aridity and precipitation in the hyper-arid region is 1983. The results again suggest that the influence of precipitation on aridity is higher compared to PET.
Discussions
The changes in aridity depend on the changes in different climatic variables. The present study found precipitation to be the dominant driver; the relative influence of the climate variables varies from region to region, but precipitation is the most dominant factor in most of the regions. Tabari and Talaee (2013) reported increasing PET and decreasing precipitation as the causes of increasing aridity in Iran. Most recently, Araghi et al. (2018) also identified increasing temperature and decreasing precipitation due to global warming as the major causes of increasing aridity in Iran. These studies indicate different climate variables as the major drivers of aridity in the region. The present study reveals that changes in precipitation are the major cause of the changes in the aridity of Pakistan.
Pakistan receives precipitation from the monsoon originating in the Bay of Bengal and from the western disturbances originating over the Mediterranean Sea. The monsoon contributes a larger quantity to annual precipitation compared to winter rainfall (Sheikh et al., 2009). Therefore, the geographical distribution of annual precipitation is found to be more or less the same as that of the monsoon. Several studies, such as Ahmed et al. (2017), claimed that climate change has altered monsoon precipitation in the form of more precipitation in the north and at a few places in the southeast of Pakistan. A similar pattern in annual and Kharif precipitation trends has been observed in the present study. The aridity has decreased in the areas where precipitation has increased. The PET is found to increase significantly over a large area in the southeast, but its impact is not significant for annual aridity. Like the monsoon, an increase in winter precipitation over a large area has been reported (Ahmed et al., 2017). The aridity during the Rabi season is found to follow the same pattern as Rabi precipitation. However, a mismatch between the rainfall and aridity trends is found in the southwest. This is due to a large increase in PET in the region. Khan et al. (2018) reported a rapid rise in temperature in the southwest, which has probably increased PET and aridity in the area. This indicates that both the changes in precipitation and PET have impacts on the changes in aridity in Pakistan. However, precipitation has a much higher influence on the aridity of Pakistan compared to PET.
The aridity is found to increase (drier) and decrease (wetter) in different regions and seasons with the changes in precipitation and PET. Overall, 11.75%, 7.57%, and 9.66% of the area is found to shift to wetter, while 0.52%, 4.44%, and 0.52% of the area shifts to drier conditions for annual, Rabi and Kharif respectively. It is important to mention that a large area shows a wetting trend in recent years, particularly in the semi-arid and sub-humid regions, which means more area has become wetter in recent years.
However, some areas in the arid region are found to become drier. This indicates that a few dry regions are becoming drier while a large, relatively wet area is becoming wetter. A similar finding has been reported by Liu et al. (2018b) in neighbouring China. Overall, a large area in the northeast of Pakistan has become wetter and a few locations in the south have become drier during 1901-2016.
Pakistan is mainly an agriculture-based country where a notable portion of the population is associated with the agro-based economy. Haider and Adnan (2014) reported that changes in aridity could have a severe impact on the agricultural sector of Pakistan. They showed that some regions in the northeast of the country are becoming less arid while some of the regions in the south are becoming drier. It is pertinent to mention that the southern regions of the country are highly prone to droughts (Ahmed et al., 2018b). An increase in drier conditions can have a severe impact on the agriculture-based economy of the south.
Similarly, the agriculture of the north-eastern regions can benefit from the wetter conditions.
The changes in temporal patterns of aridity reveal that the major shift in aridity and rainfall occurred between 1971 and 1980.
Global atmospheric moisture is found to have increased after 1973 (Ross and Elliott, 2001). An increase in precipitation in many regions of the world has been observed due to the increase in global moisture content (Trenberth, 1998). The present study suggests that the precipitation of Pakistan also changed during 1971-1980, which may be due to the increase in global atmospheric moisture after 1973. This has caused a shift in precipitation and aridity in Pakistan. Machiwal et al. (2017) reported a significant change in dry-season precipitation in the period 1973-1975 in the hot arid region of India.
Many factors influence regional and local changes in precipitation, including shifts in monsoon circulation due to global climate change (IPCC, 2014), land use changes like changes in forest cover and irrigated agriculture (Pielke, 2001), and aerosols in the atmosphere due to human activities. Studies relating anthropogenic activities to precipitation changes in Pakistan and nearby countries are very limited (Basistha et al., 2009). Previous studies suggested global warming as the cause of the shift in precipitation pattern in the region (Duan et al., 2002; Gautam et al., 2009).
The nature of the shift in rainfall regime over a large region which coincides with the increase in global atmospheric moisture suggests that global climate change may be the cause of the shift in precipitation and aridity of Pakistan.
The present study suggests that the relative influence of precipitation and temperature on aridity determines its trends in the context of climate change. Aridity may decrease due to a small increase in precipitation in the regions where the influence of precipitation is higher on aridity. The gridded data used in this may cause uncertainty in the estimation of aridity and its 15 trends. Other gridded data can be used in future to assess the uncertainty in the estimated trends in aridity. Besides, different aridity assessment methods can be used to compare the results.
Conclusions
The long-term changes in annual and seasonal aridity in Pakistan and its causes are analysed in this paper.
Gauge-based gridded precipitation and PET data are used to show the spatial and temporal patterns of the changes in aridity 20 over the diverse climate of the country. Following conclusions are drawn based on the findings: (1) The precipitation is high in the north and low in the southeast and southwest during both Rabi and Kharif seasons while the PET is low in the north due to the cold climate and high in the south due to high temperature.
(2) Most of the country is characterised by arid and semi-arid climate except the northern region near the foothills of Himalaya which is characterised by sub-humid to the humid climate. However, the aridity of the country is found to vary for different seasons due to the spatial pattern of Rabi showed drier trends in the southwest and wetter trend in a small area in the north. (6) Overall, there is a wetting tendency over a large area in the northeast and drying tendency at few locations in the southwest. Therefore, it can be remarked that Pakistan has become wetter from 1901 to 2016. (7) The time-varying trends in aridity reveal that the influence of precipitation is high on the aridity compared with PET. Increase in precipitation in the southeast has reduced the aridity to some extent in the region. Even though the increasing temperature has caused an increase in PET, but its influence is found less on aridity. (8) The changes in spatial patterns of aridity show that the climate in a large area has shifted from arid to semi-arid for annual and Kharif while a small area from arid to hyper-arid in Rabi. (9) The highest shift in arid climatology 5 is observed between arid to semi-arid. About 9.1% area is found to shift from arid to semi-arid climate between the periods 1901-1950 and 1967 to 2016. (10) A significant shift in aridity and precipitation in most of the climatic regions of Pakistan is found during 1971-1980.
Three-dimensional vascular and metabolic imaging using inverted autofluorescence
Abstract. Significance: Three-dimensional (3D) vascular and metabolic imaging (VMI) of whole organs in rodents provides critical (patho)physiological information for studying animal models of the vascular network. Aim: Autofluorescence metabolic imaging has been used to evaluate mitochondrial metabolites such as nicotinamide adenine dinucleotide (NADH) and flavin adenine dinucleotide (FAD). Leveraging these autofluorescence images of whole rodent organs, we have developed a 3D vascular segmentation technique to delineate the anatomy of the vasculature as well as the mitochondrial metabolic distribution. Approach: By measuring fluorescence from naturally occurring mitochondrial metabolites combined with the light-absorbing properties of hemoglobin, we detected the 3D structure of the vascular tree of rodent lungs, kidneys, hearts, and livers using VMI. For lung VMI, an exogenous fluorescent dye was injected into the trachea for inflation and to separate the airways, confirming no overlap between the segmented vessels and airways. Results: The kidney vasculature from genetically engineered rats expressing the endothelial-specific red fluorescent protein TdTomato confirmed a significant overlap with VMI. This approach abided by the "minimum work" hypothesis of the vascular network, fitting Murray's law. Finally, the vascular segmentation approach confirmed vascular regression in rats induced by ionizing radiation. Conclusions: Simultaneous vascular and metabolic information extracted from VMI provides quantitative diagnostic markers without the confounding effects of vascular stains, fillers, or contrast agents.
Introduction
Damaged vasculature and the resulting impaired blood circulation in organs can cause pathological injuries, such as organ failure and stroke. 1 Therefore, vascular imaging plays a pivotal role in diagnosis, follow-up of disease progression, and assessment of treatment efficacy. 2 Assessment of vascular structure in rodent models is key to quantitating organ vasculature. 3,4 This quantitation could be beneficial in analyzing pathological conditions, such as hypertension, 5 diabetes, 6 and retinopathy 7 as well as changes induced by environmental or chemical agents such as radiation 8 or drugs. 9 Vascular imaging is also important to study therapeutic angiogenesis. 10 The gold standard for vascular imaging of small animal organs is histology, which has a major limitation for obtaining a three-dimensional (3D) picture of structural components, e.g., the branching of a vascular tree. 11 Additionally, using histology for vascular imaging of small animals requires the development of molecular tools such as specific antibodies 12,13 or the development of transgenic mice expressing endothelial-specific markers. 14 Imaging modalities such as micro-computed tomography (micro-CT), 15 ultra-microscopy, 15 near-infrared fluorescence imaging, 16 magnetic resonance imaging, 17 and ultrasound imaging 18 are existing tools for 3D vascular imaging, but they are complex and costly. Labeling with a contrast agent or filler is required for most of these vascular imaging technologies, 19 each having its limitations. In some applications, a solvent must be used to optically clear the tissue and overcome the limited 3D vascular image contrast, especially for highly light-scattering organs like the kidney. 20 Imaging systems typically provide information about just one biological marker, which limits the capacity to decipher complex diseases with multiple hallmarks, such as cancer. 21 For instance, positron emission tomography (PET) can be used to provide specific molecular information, 22 while a hybrid imaging technology, such as PET-CT, 23 acquires anatomical and molecular information but in turn comes with increased cost, acquisition time, and complexity.
We propose an approach that enables us to perform autofluorescence metabolic imaging that provides both metabolic and vascular information simultaneously. The method presented here is solely based on autofluorescence imaging emanating from the tissue. Fluorescence metabolic imaging techniques pioneered by Chance et al. 24 have been developed to measure the mitochondrial redox state [nicotinamide adenine dinucleotide (NADH)/flavin adenine dinucleotide (FAD)]. Fluorescence imaging or spectroscopy of metabolic indices provides 2D functional maps from the surface of tissues in vivo or ex vivo. [25][26][27] 3D functional maps can be built using fluorescence cryo-imaging to provide a volumetric mitochondrial redox state of the tissue. However, to the best of our knowledge, optical metabolic imaging using autofluorescence has not been used to delineate the anatomy of the vasculature of organs.
In this study, we present a segmentation algorithm for detecting the vasculature, which is based on the autofluorescent properties of tissues. This novel technique enables vascular detection without the need for labeling vessels with contrast agents or stains. We termed the technique "vascular and metabolic imaging" (VMI). It relies on the foreground autofluorescence (NADH or FAD) that reveals the background vessel network devoid of such metabolic signatures. We hypothesized that the dark voxels are associated with the vasculature because the red blood cells quench the autofluorescence signals from NADH and FAD. 24,27 We further postulated that our segmented vasculature from VMI can be used to quantify the 3D vascular network of whole organs, such as the kidney, lung, heart, and liver. Remarkably, VMI, via autofluorescence, can produce both metabolic redox state and vascular information simultaneously, which is currently unattainable with any other existing imaging tool. We validated our vascular detection approach by co-registering the VMI vessel images with the vessel images segmented from red fluorescence in a genetically modified rat kidney that preferentially expresses TdTomato in vascular endothelial cells. We also used a partial body irradiation (PBI) rat model with minimal bone marrow sparing to detect radiation-induced vascular regression in multiple organs as well as to demonstrate VMI's utility as a biomedical research tool with potential clinical implications.
Animals and Sample Preparations
In this study, the vascular images were segmented based on autofluorescence images of rat organs. All the animal studies and experiments were approved by the Institutional Animal Care and Use Committee (IACUC) at the Medical College of Wisconsin. The studies were performed using two rodent species, rats and mice. Lungs, the lateral lobe of the liver, and kidneys were harvested from non-irradiated and irradiated adult female WAG/RijCmcr rats, and hearts from non-irradiated male C57BL/6J mice. For lung sample preparation, the airway of the lungs was first inflated by gravity with 1 to 2 mL of fluorescein isothiocyanate-dextran (FITC-dextran, MW 150,000, 100 μM dissolved in water). Apart from the airway injection of the lungs, the sample preparation was similar for all organs. They were immersed in chilled liquid isopentane for a couple of minutes before being transferred to liquid N2. All samples were stored in a −80°C freezer until optical cryo-imaging was performed.
Partial body irradiation in rats
This method has been developed to expose the total body of the rat, except for part of one hind leg, to ionizing irradiation delivered by x-rays. A minimal volume of bone marrow (∼8% of total marrow) is spared to repopulate the marrow compartment and allow the rat to survive the acute hematopoietic injury within the first 30 days after radiation. The delayed effects of radiation on the lung (radiation pneumonitis) manifest between 42 and 90 days after 10 Gy or higher doses, whereas kidney damage (radiation nephropathy) is observed after 90 days. This sophisticated rat model is the first in rodents to express the acute and delayed syndromes of radiation exposure in the same animals. [28][29][30] In brief, rats were placed in a Plexiglas jig. One hind leg was carefully externalized and shielded with a lead block. A total dose of 7.5, 10, or 12.5 Gy x-rays (n = 3/group) was delivered. Irradiation and dosimetry were conducted as described. 30 Age-matched siblings (n = 3) were not irradiated and served as non-irradiated controls. Rats were followed up to 101 days to record vascular changes in multiple organs. The animals were euthanized, and their kidneys, livers, and lungs were harvested. The lungs were inflated with an FITC solution introduced via the trachea before freezing. A high dosage of radiation is well known to cause damaged endothelium and regression of vessel networks. [31][32][33][34] Our well-established, radiation-induced animal injury model provides an ideal system to demonstrate the sensitivity and efficacy of the algorithm to detect vascular damage in multiple organs. 35,36

CDH5-Cre recombinase rat

For validation purposes, a transgenic rat, expressing the fluorescent protein TdTomato in vascular endothelium, was used. CDH5-Cre recombinase rats were generated at the Genome Rat Resource Center at the Medical College of Wisconsin under protocols approved by the IACUC. Briefly, a 2.5-kbp PCR fragment of the rat genomic DNA encompassing the Cdh5 promoter was cloned upstream of the codon-optimized HA-tagged Cre (iCre), and this expression cassette was subcloned into a sleeping beauty (SB) transposon vector. 37 The SB method of transpositional transgenesis was used to produce transgenic Sprague Dawley (Crl:SD, Charles River Laboratories) rats by pronuclear microinjection as we have previously described. 38,39 Three transgenic founders were produced, one of which demonstrated robust endothelial-specific Cre expression when crossed to the TdTomato reporter knock-in rat (Horizon). Both the Cdh5-Cre and TdTomato reporter knock-in rats were backcrossed to the WAG/RijCmcr inbred strain for four generations and then intercrossed for the studies presented herein.
3D Fluorescence Metabolic Cryo-Imaging
The 3D fluorescence cryo-imager system was custom-designed in the Biophotonics Laboratory at the University of Wisconsin Milwaukee. The system captures 3D NADH and FAD fluorescent signals of frozen organs/tissues. The flash-frozen sample is stored in a −80°C freezer to ensure the preservation of the metabolic state of the tissue. A complete description of the system can be found in our recent cryo-imaging studies. 26,40 Briefly, a mercury arc lamp (200 W lamp, Oriel, Irvine, CA, in the light source from Ushio Inc., Japan) is used as the light source. Appropriate optical filters at selected wavelengths are utilized to excite the specific fluorophores from the surface of the frozen tissue. For the NADH channel, excitation and emission filters were set at 350 ± 80 nm (UV Pass Blacklite, HD Dichroic, Los Angeles, CA) and 460 ± 50 nm (Chroma, Bellows Falls, VT), respectively. The excitation and emission filters for the FAD channel were set at 437 ± 20 nm (Omega Optical, Brattleboro, VT) and 537 ± 50 nm (Omega Optical), respectively. Lungs were also imaged using FITC-specific optical filter sets: excitation at 494 ± 20 nm (Edmund Optics) and emission at 537 ± 50 nm (Omega Optical) for airway detection. For imaging the TdTomato kidney, besides the regular NADH channel, we also used a red imaging channel, with the excitation and emission filters set at 545 ± 25 nm (Chroma) and 645 ± 50 nm (Chroma), respectively. All filters are controlled by two motorized filter wheels (Oriental Motor Vexta Step Motor PK268-01B). The emitted fluorescent signals are captured with the image recording system (CCD camera, QImaging, Rolera EM-C2, 14 bit).
The 3D NADH and FAD cryo-images, representing the mitochondrial redox state of tissues, were analyzed using code written in MATLAB. Calibration was performed using a flat-field image for both NADH and FAD channels. The 3D-rendered redox ratio (RR) image (NADH/FAD) was calculated voxel-by-voxel. Attempts were also made to correlate the RR with the anatomy of the hearts 41 and kidneys 40 to understand the heterogeneity of the tissue. The following section describes how the autofluorescence images provide the structural features of the vascular network of the organs. Figure 1 shows the flowchart of the proposed algorithm that we used to segment the background vasculature from the foreground 3D autofluorescence images. Simple implementation steps in FIJI 42 can be found in Table S1 in the Supplementary Material. The standard preprocessing normalization steps in fluorescence cryo-imaging, such as flat-field calibration, are not needed before vascular segmentation because the intensity adjustment in step 1 normalizes between-sample variations, and the background subtraction in step 3 removes uneven illumination.
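As an illustrative sketch of the voxel-by-voxel RR computation described above (assuming NumPy arrays for the stacks and a simple normalized flat-field division; the function and variable names are ours, not the authors' MATLAB code):

import numpy as np

def redox_ratio_volume(nadh, fad, nadh_flat, fad_flat, eps=1e-6):
    """Voxel-by-voxel redox ratio (NADH/FAD) volume.

    nadh, fad: 3D stacks (slice, row, col) of raw fluorescence intensities.
    nadh_flat, fad_flat: 2D flat-field calibration images, one per channel.
    eps guards against division by zero in dark voxels.
    """
    # Flat-field correction: divide every slice by the normalized flat field.
    nadh_c = nadh.astype(np.float64) / (nadh_flat / nadh_flat.mean() + eps)
    fad_c = fad.astype(np.float64) / (fad_flat / fad_flat.mean() + eps)
    # Voxel-wise redox ratio.
    return nadh_c / (fad_c + eps)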
Vascular Segmentation from Autofluorescence Images
Below is the detailed sequence of steps carried out to obtain and reconstruct a vascular network from the inverted fluorescence image. After loading the 3D stack of images, the brightness and contrast are adjusted to obtain an enhanced image. Then, for each slice, the image is inverted, and the background is subtracted. Additional contrast enhancement and optional masking of unwanted regions, based on the tissue type, are done on the 3D image. The reconstructed 3D vasculature can be fed to 3D vessel tracing algorithms for quantification purposes.
Step 1. Brightness and contrast adjustments. The brightness and contrast of images are adjusted by remapping intensity values to the full range of 16-bit images, i.e., adjusting the minimum voxel intensity of the image to zero and the maximum intensity to 2^16 = 65,536. The intensity of the 3D fluorescence images is adjusted to the whole-volume intensity range, and this step is performed on 3D images. The captured autofluorescence intensity may differ from sample to sample. This step of the algorithm is designed specifically to normalize these variations by adjusting the intensity of images from various samples to a similar intensity range. This will also sharpen the differences between the black and white voxels, i.e., enhance the image contrast. Contrast enhancement is generally used to make objects in an image more distinguishable.
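A minimal NumPy sketch of this remapping (an illustration of the step as described, not the authors' implementation):

import numpy as np

def adjust_contrast_16bit(volume):
    """Step 1: remap intensities of the whole 3D volume to the full
    16-bit range [0, 65535], normalizing between-sample variations."""
    v = volume.astype(np.float64)
    vmin, vmax = v.min(), v.max()
    if vmax == vmin:  # degenerate case: constant image
        return np.zeros_like(volume, dtype=np.uint16)
    return ((v - vmin) / (vmax - vmin) * 65535.0).astype(np.uint16)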
Step 2. Image inversion. In our application, vascular network elements are dark voxels. The inverted image (negative contrast) displays the vasculature as bright voxels. This step is performed on each 2D slice separately.
Step 3. Background subtraction. A background subtraction algorithm known as rolling-ball background correction 43 is used in the next step. The rolling-ball radius in each organ should be set to at least the largest vessel radius expected in that organ. The radius of larger vessels can be estimated by manual measurements in a 2D image containing the larger vessel. Background subtraction is traditionally used in fluorescence microscopy to isolate bright objects from uneven illumination. 44 This step is performed on each 2D slice separately.
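Steps 2 and 3 can be sketched per slice as follows, using scikit-image's rolling-ball implementation as a stand-in for the FIJI rolling-ball correction the authors describe (an assumption on tooling; names are illustrative):

import numpy as np
from skimage.restoration import rolling_ball

def invert_and_subtract_background(volume, ball_radius):
    """Steps 2-3: invert each 2D slice so vessels become bright, then
    subtract a rolling-ball background whose radius is at least the
    largest expected vessel radius (in pixels). volume: 3D integer stack."""
    max_val = np.iinfo(volume.dtype).max
    out = np.empty_like(volume)
    for i, sl in enumerate(volume):
        inverted = max_val - sl  # step 2: negative contrast
        background = rolling_ball(inverted, radius=ball_radius)  # step 3
        out[i] = np.clip(inverted - background, 0, max_val).astype(volume.dtype)
    return out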
Step 4. Brightness and contrast adjustments. Final contrast enhancement is also done on the 3D structures by repeating step 1, i.e., intensity adjustment.
Step 5. Optional masking. Before feeding the 3D vasculature images to the tracing algorithm, some masking is also required, depending on the organ. The heart cavities (atria and ventricles) were masked out using a thresholding mask calculated from the original NADH images. Also, for the kidneys, the segmented vasculature from the medullary region was masked to ensure total removal of false segmentation originating from renal tubules within the medullary region. This removal of the medullary voxels removed a negligible portion of the segmented vascular network (see Fig. S1 in the Supplementary Material), while making sure that the segmented vasculature did not contain renal tubules.

Step 6. 3D vessel tracing algorithm. By tracing the 3D vessel networks, we can track, measure, and quantify the vasculature. We used Imaris 9. If the structure of interest comes from high-intensity voxels, such as the airways in FITC airway-injected lungs and the red fluorescence images in TdTomato rat kidneys, the same segmentation algorithm without step 2 (image inversion) can be applied.
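Putting steps 1 to 5 together (a sketch that reuses the two helper functions above; the binary mask is an assumption standing in for, e.g., the thresholded NADH heart-cavity mask):

import numpy as np
# reuses adjust_contrast_16bit and invert_and_subtract_background from above

def segment_vasculature(volume, ball_radius, mask=None):
    """Steps 1-5 of the VMI vascular segmentation; the result can then be
    fed to a 3D tracing tool (step 6, e.g., Imaris filament tracing)."""
    v = adjust_contrast_16bit(volume)                   # step 1
    v = invert_and_subtract_background(v, ball_radius)  # steps 2-3
    v = adjust_contrast_16bit(v)                        # step 4
    if mask is not None:                                # step 5 (optional)
        v = np.where(mask, v, 0).astype(v.dtype)
    return v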
Validating VMI Using TdTomato Rats
A genetically modified rat model expressing TdTomato primarily in vascular endothelial cells was utilized to image the vasculature in kidneys. Histological assessment of rat kidneys was also performed to visualize TdTomato expression in the endothelial cells of these rats using an antibody against TdTomato. The capability of the 3D fluorescence cryo-imaging system to acquire images from multiple channels simultaneously allows us to have both red and NADH fluorescence of the kidney. We used the foreground vasculature extracted from red fluorescence as a reference for examining the vasculature segmented from the NADH channel using VMI. The co-registration of the vascular networks extracted from the two channels validates the proposed method of vascular segmentation.
One of the common metrics for evaluating the quality of image segmentation is the Dice coefficient, which measures the overlap/merge between the ground truth and the test. 45 For calculating the Dice coefficient, we let the 3D volume be represented by the point set $X = \{x_1, \ldots, x_N\}$, where $N$ is the total number of voxels. We let the red vasculature be represented by the partition $V_{red}$ of $X$ with assignment function $f_{red}(x)$, i.e., the voxel intensity at $x$, and we let the VMI vasculature be represented by the partition $V_{vmi}$ of $X$ with assignment function $f_{vmi}(x)$. Then the Dice coefficient is defined by

$$\mathrm{Dice} = \frac{2 \sum_{x \in X} f_{red}(x)\, f_{vmi}(x)}{\sum_{x \in X} f_{red}(x)^2 + \sum_{x \in X} f_{vmi}(x)^2}, \qquad (1)$$

where the numerator represents the common elements between the two images. To quantify $|V_{red}|$ and $|V_{vmi}|$ in the denominator, we use the squared-sum operation. There is a multiplication by a factor of 2 in the numerator because the denominator counts the common elements twice. The branching structure of the VMI vasculature can also be compared with the red fluorescence vasculature. Murray 46 proposed an optimization theory that the fundamental structure of a vascular tree should be such that it minimizes work. Murray's law states that a branch that follows the "minimum work" hypothesis should also follow the equation:

$$D_p^3 = \sum_d D_d^3, \qquad (2)$$

where $D_p$ indicates the diameter of a parent vessel, and $D_d$ indicates the diameter of the $d$'th daughter vessel coming from the parent $p$. Equation (2) means that the cubed diameter of a parent vessel is equivalent to the sum of the cubed diameters of its daughter vessels. After employing the tracing algorithm using filament tracing in Imaris software, we used the information on the depth of the vessels to define the parents and daughters. The depth of a vessel increases every time a bifurcation happens in the branch. Therefore, all vessels at depth $k+1$ are the daughter vessels of the parent vessels at depth $k$, and Murray's law can be written as

$$\sum_p \left(D_k^p\right)^3 = \sum_d \left(D_{k+1}^d\right)^3, \qquad (3)$$

where $D_k^p$ indicates the diameter of the $p$'th parent vessel at depth $k$, and $D_{k+1}^d$ indicates the diameter of the $d$'th daughter vessel at depth $k+1$. Now, we can look at the relationship between the parent vessel diameters and their daughters' diameters by having the depth information of the vessels. The summation of the cubed diameters of all the vessels at each depth [parents on the left side of Eq. (3)] is then compared with the summation of the cubed diameters of all the vessels at the next depth [daughters on the right side of Eq. (3)]. The vasculature follows Murray's law if this relationship is significantly linear and has a linear fit close to the identity line.
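A sketch of Eq. (1) and of the depth-wise sums entering Eq. (3), assuming the two segmented volumes are co-registered arrays and the tracing export provides parallel diameter/depth arrays (names are illustrative):

import numpy as np

def dice_coefficient(f_red, f_vmi):
    """Eq. (1): soft Dice between two co-registered volumes, using the
    squared sums of voxel intensities to quantify |V_red| and |V_vmi|."""
    a = f_red.astype(np.float64).ravel()
    b = f_vmi.astype(np.float64).ravel()
    return 2.0 * np.dot(a, b) / (np.dot(a, a) + np.dot(b, b))

def cubed_diameter_sums_by_depth(diameters, depths):
    """For Eq. (3): sum the cubed vessel diameters at each branching depth.
    diameters and depths are parallel arrays from the vessel tracing."""
    sums = {}
    for d, k in zip(diameters, depths):
        sums[k] = sums.get(k, 0.0) + float(d) ** 3
    return sums

Murray's law then predicts that the sum at depth k approximately equals the sum at depth k + 1.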
Notably, using the depth to find the parent-daughter relationship between vessels can impose an unavoidable error by making the left side of Eq. (3) higher than the real value. The reason is that terminal branches at lower depths [asterisks in Fig. 2(a)] are counted as parent vessels even though they have no corresponding daughter vessels at the next depth. Figure 3 supports our hypothesis that a foreground fluorescence image can be inverted to reveal the vasculature of an organ like the kidney. The 3D raw NADH (excitation at 350 nm and emission at 460 nm) image of a kidney, a sagittal slice of the kidney, and the segmented vasculature on one slice are illustrated in Fig. 3. The stack of 2D vascular images was reconstructed to generate the 3D vascular image of the whole kidney.
3D Vascular and Metabolic Imaging Using Autofluorescence
VMI was also applied to other organs, such as heart and liver. Figure 4 shows selected representative slices for the step-by-step images of the algorithm for each organ: kidney, heart, and liver. In step 1, the contrast and brightness of the images are enhanced. The inverted images of one slice of each organ can be seen in step 2. Note that now, the feature of interest (vasculature) is bright in the image. A background-subtracted image of the slice can be seen in step 3. The resulting 3D vascular images are reconstructed from the stack of 2D images.
Vascular segmentation from the background of autofluorescence in lung tissues was not feasible because the vasculature, airways, and alveoli all appeared dark in the images; therefore, distinguishing the vasculature from these structures was not possible. To circumvent this problem, we injected an FITC solution into the airways and alveoli. Extrinsic fluorescence from FITC (excitation at 494 nm and emission at 537 nm) and FAD (excitation at 437 nm and emission at 537 nm) overlapped. This overlap and the injection of an FITC solution into the airways and alveoli enabled us to make the airway voxels bright in the FAD images while keeping the vascular structures dark. The same proposed segmentation algorithm was then applied to extract the inverted vasculature from the FAD images of the lungs. Figure 5 shows a 3D raw FAD image of the lung and a single slice of the lung. The airways, which are filled with FITC solution, are segmented from light (higher-intensity) voxels in the FAD images. The 3D vasculature (in red) and airways (in green) are then reconstructed in 3D as shown in Fig. 5. In the combined or merged images, voxels with an overlap between the segmented airways and vasculature should appear yellow, but due to very little intersection, no yellow voxels appeared in this figure. The Dice coefficient of <0.001 also confirms that the airways and the vasculature did not overlap. These results demonstrate that the segmented structures from inverted FAD images do not originate from airways but from the vasculature. Note that the FAD images now originate from both FITC and FAD fluorescence. This helped us to lighten the airways, but the FITC fluorescence in the airways also interfered with the FAD signal. Therefore, on the downside, the RR is now NADH/(FAD + FITC), which is not an accurate representation of the mitochondrial RR (NADH/FAD).

Fig. 3 A background vasculature is segmented from a foreground autofluorescence image of a kidney. For a rat kidney, a 3D raw image of NADH fluorescence is shown. A sagittal slice of the raw kidney image is chosen, and the vasculature segmented from dark voxels is shown in red and merged with the raw slice to show the localization of the vascular pixels in the image. The 3D vasculature is reconstructed from all 2D segmented pixels. The 3D rendered images of NADH and vasculature can be found in Video S1.
Co-Registration with TdTomato to Confirm VMI Vasculature
The transgenic rat model expressing endothelial-specific TdTomato was used to validate the vascular segmentation by VMI. Figure 6 shows a histological assessment that illustrates the expression of TdTomato in endothelial cells in transgenic rat kidneys (upper row). The renal tubules and most non-endothelial cells of the transgenic TdTomato rat kidneys do not express TdTomato (Fig. 6). Both wild-type and transgenic vascular endothelial cells are also stained with the endothelial-specific antibody RECA-1 in sections adjacent to those stained for TdTomato. Though the TdTomato staining in glomeruli was not always as distinct as that of RECA-1, the sections demonstrated co-registration primarily with blood vessels and not with renal tubules (open black arrows).
Using the TdTomato transgenic rat model, cryo-imaging was performed in two fluorescence channels: NADH (excitation at 350 nm and emission at 460 nm) and red (excitation at 545 nm and emission at 645 nm). The bright voxels in the red channel (segmentation algorithm without step 2) and the dark voxels in the NADH channel are segmented and reconstructed [Figs. 7(a) and 7(b), respectively]. In the kidney, the anatomy of the vasculature extracted from the NADH channel using VMI [Fig. 7(a)] is then combined with the vasculature segmented from red fluorescence [Fig. 7(b)] to make a hybrid image [Fig. 7(c)]. The overlapping voxels between the two images [Figs. 7(a) and 7(b)] are displayed in yellow [Fig. 7(c)]. The co-registration gives a Dice coefficient of 0.91, which shows a high degree of overlap/merge between the two segmented vasculatures.
The branching of the vasculature between the two signals is also compared in Fig. 8. The relationship between the cubed diameter of the parent vessels and the summation of the cubed diameters of their corresponding branched daughter vessels is presented. Using linear regression, two lines are fitted to each set of data points, as shown in Fig. 8. According to Murray's law, the data should fit the y = x line, i.e., a line with a slope of 1 and a y intercept of 0. The y intercept for both lines is ∼0, and the slopes for both the VMI and red channels are close to 1, indicating that the VMI branching, like the red vascular branching, successfully follows Murray's law of the "minimum work" hypothesis. A single branch from the two signals is also evaluated for more insight into smaller vessel branches, and both the VMI and red vascular branching follow Murray's law on smaller branches as well (the data are provided in Fig. S2 in the Supplementary Material).
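The comparison against the identity line can be sketched with an ordinary least-squares fit (assuming consecutive integer depths from the helper sketched earlier; this is illustrative, not the authors' exact analysis):

from scipy.stats import linregress

def murray_law_fit(sums_by_depth):
    """Regress depth-(k+1) cubed-diameter sums on depth-k sums; Murray's
    law predicts a slope near 1 and an intercept near 0.
    Assumes consecutive integer depths as dictionary keys."""
    depths = sorted(sums_by_depth)
    parents = [sums_by_depth[k] for k in depths[:-1]]
    daughters = [sums_by_depth[k + 1] for k in depths[:-1]]
    fit = linregress(parents, daughters)
    return fit.slope, fit.intercept, fit.rvalue ** 2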
VMI of Organs from Partial Body Irradiated Rats
Here we present an application of VMI to uniquely derive the topography of two sets of parameters simultaneously: the mitochondrial redox state and the 3D vascular network of whole organs. Figure 9 illustrates representative 3D rendered vascular networks of the kidney, liver, and lung from rats exposed to different doses of irradiation. The corresponding 3D RR (NADH/FAD) of the kidney and liver are also presented in Fig. 9. The RR images of the lungs are not presented due to the interference of FITC with FAD.
The vascular networks in Fig. 9 illustrate the regression of the vessel networks after PBI. The vascular damage in kidneys and lungs also appears to qualitatively correlate with the dose of irradiation in the PBI rats. The RR images are presented in pseudocolor with higher RR voxels shown in red and the lower RR voxels in blue. The kidneys and livers exposed to a higher dose of irradiation show a greater decrease in the RR, representing a more oxidized mitochondrial redox state.
Discussion
Due to the low levels of autofluorescence signal in tissue, autofluorescence metabolic images have limited anatomical tissue contrast compared to histology images. This limitation was partially circumvented in the current study by using VMI to provide a 3D vascular network of a whole organ. Here we demonstrate the feasibility of VMI to generate anatomical and metabolic information simultaneously. The dark voxels inside the autofluorescence images were segmented to provide the 3D vascular network of whole organs. The injection of an exogenous fluorescent dye into the airway of the lungs helped to highlight the airway so that the dark voxels solely represent the vasculature.
The organ-level vasculature is the focus and strength of our study, which is hard to achieve with existing technologies. There are multiple vascular imaging modalities, such as OCT, that can be used on a small field of view, while VMI can provide the whole-organ vascular structure. Like VMI, Kaushik et al. 47 performed vascular imaging using an autofluorescence signal. However, Kaushik et al. performed imaging on engineered tissue using a synthetic hydrogel, while VMI imaged frozen rodent organs with blood and tissue around the vascular structures. This explains the difference between the two methods: Kaushik et al. perform vascular imaging using the NADH signal from synthetic vasculature, while VMI uses the light-absorbing properties of hemoglobin and inverts the NADH image to segment the vasculature.
It was also shown that VMI has high co-localization with the red fluorescence of transgenic rats expressing endothelial TdTomato. A genetically modified rat model with vascular endothelium-selective expression was chosen to confirm the selection of the vasculature by VMI.
The high overlap/merge between the red fluorescence of transgenic-TdTomato rat kidney and VMI vasculature indicates the specificity of VMI in the segmentation of vascular networks. Also, we have shown that the "minimum work" hypothesis proposed by Murray 46 has been satisfied by both approaches. This suggests that the VMI vasculature has similarities in branching with the ground truth vasculature that was generated by TdTomato red fluorescence.
The potential interest of combining exquisitely sensitive autofluorescence metabolic information with vascular information was demonstrated in a proof-of-concept study of radiation-induced damage to multiple organs. The 3D mitochondrial redox states of PBI rat kidneys and livers were examined. The mitochondrial redox state of kidneys and livers appears to decrease in an irradiation dose-dependent manner. This result is consistent with our previous study 40 showing that irradiation diminished the ability of the cells to maintain the balanced mitochondrial redox state necessary for normal bioenergetics in kidneys. Using VMI, the vascularization after exposure to different doses of irradiation was examined in the kidneys, livers, and lungs. We have seen that exposure to irradiation could also cause vascular regression. Comparing the observed radiation-induced vascular damage with the previously seen impact of radiation on potentially increased oxidation of the mitochondrial RR 40 implies a link between the deregulation of mitochondrial metabolism and the regression of the vasculature typical of radiation injuries. [31][32][33] Together, this study showed that VMI using autofluorescence can successfully stratify the dose of irradiation based on these two biomarkers of injury.
The vascular segmentation algorithm in VMI uses the same 3D autofluorescence cryo-images that we have used previously to produce the tissue mitochondrial redox state. 26,40,48,49 VMI can be applied to quantitatively characterize organ vasculatures and the metabolic state simultaneously. VMI can also be used to explore the pathophysiology of rodent injury and treatment models. Optical metabolic imaging has been applied for several years, 26,40,[48][49][50][51][52] and by adding the proposed segmentation technique, another key biomarker of injury, vascular density, can also be measured.
The major limitation of this study is that VMI has, to date, only been applied to autofluorescence images of frozen tissue. The application of the technique to in vivo autofluorescence images has not been studied. In FITC airway-injected lungs, due to the interference of FITC with the FAD signal, the accuracy of mitochondrial redox imaging in the lung may be compromised. A challenge in performing VMI on hearts was that the cavities needed to be masked; the optional step 5 was added to segment out these unwanted spaces (Fig. 1).
The proposed algorithm in this study has generated both vascular and metabolic information with major implications.
(a) The vascular images are produced without the use of any extrinsic contrast agents or tissue-clearing solvents, which might induce artifacts and/or structural deformity. 53
(b) In multimodal imaging technologies, the co-registration of metabolic and vascular images is of paramount importance. 54 VMI has perfect co-registration precision because the vascular and metabolic images originate from the same 3D images, making it ideal for studying the interaction of tissue metabolism and vasculature.
(c) In this study, vascular images of the kidney, lung, heart, and liver were segmented using VMI. This VMI approach could be extended to other organs and pathologies, such as the eye, skin wounds, and tumors, whose mitochondrial RR we 48,50,55 and others 56 have investigated in previous reports.
(d) Since the technique uses optical imaging technologies, it is capable of high-resolution imaging compared to x-ray or ultrasound instruments. By increasing the resolution of the fluorescence cryo-imaging instrument, VMI can reveal additional details in the vascular networks. Also, unlike the laborious, complex, and time-consuming sample preparations in micro-CT, 15 the only sample preparation in VMI before performing fluorescence imaging is snap-freezing the tissue in liquid nitrogen.
(e) VMI is implemented by adding an image processing algorithm to the existing 3D fluorescence cryo-imaging. Therefore, no hardware modification is needed to extract the vascular network of organs from autofluorescence. Also, fluorescence metabolic imaging systems are much more cost-effective in comparison to other similar 3D whole-organ vascular imaging modalities, such as micro-CT.
|
2021-07-10T06:16:41.001Z
|
2021-07-01T00:00:00.000
|
{
"year": 2021,
"sha1": "c2a0d6521d3d63ff5fb244f4bc009f11368e72a5",
"oa_license": "CCBY",
"oa_url": "https://www.spiedigitallibrary.org/journals/journal-of-biomedical-optics/volume-26/issue-7/076002/Three-dimensional-vascular-and-metabolic-imaging-using-inverted-autofluorescence/10.1117/1.JBO.26.7.076002.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "118c551a9ffc8e2c07b5abaa1ce2ab41a3110c14",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine",
"Engineering"
]
}
|
251229007
|
pes2o/s2orc
|
v3-fos-license
|
Exogenous Semaphorin 3E treatment protects against chlamydial lung infection in mice
Recent studies reported that semaphorins play a significant role in various settings of the immune response. In particular, Semaphorin 3E (Sema3E), a secreted semaphorin protein, is involved in cell proliferation, migration, inflammatory responses, and host defence against infections. However, the therapeutic function of Sema3E in bacterial infection has not been investigated. Our data showed that exogenous Sema3E treatment protects mice from chlamydial infection with lower bacterial burden, reduced body weight loss, and pathological lung changes. Cytokine analysis in the lung and spleen revealed that Sema3E-Fc treated mice, compared to saline-Fc treated mice, showed enhanced production of IFN-γ and IL-17 but reduced IL-4 and IL-10 production. Cellular analysis showed that Sema3E treatment leads to enhanced Th1/Th17 response but reduced Treg response in lungs following chlamydial infection. Moreover, Sema3E treatment also enhanced the recruitment of pulmonary dendritic cells, which express higher co-stimulatory but lower inhibitory surface molecules. The data demonstrate that Sema3E plays a vital role in protective immunity against chlamydial lung infection, mainly through coordinating functions of T cells and DCs.
Introduction
Chlamydiae are gram-negative obligate intracellular bacteria (1). The two major species of Chlamydia that infect humans are Chlamydia pneumoniae and Chlamydia trachomatis (2)(3)(4). Mouse infection with Chlamydia muridarum (Cm) has been commonly used to study the immunobiology of respiratory and genital tract chlamydial infections. Studies in recent years have elucidated the critical role of cell-mediated immunity in the protective immune response to chlamydial infection. Previous studies by us and others have suggested that IFN-γ production by T cells plays a major role in resolving chlamydial infection (5)(6)(7). In addition, IL-17 production by T cells has been found to protect against chlamydial infection in the lungs (7)(8)(9). In contrast, Th2 and regulatory T cell (Treg) responses lead to an immunopathological response to chlamydial infections (10,11). Many studies have documented the remarkable ability of dendritic cells to influence the T cell response to chlamydial infection. Surface expression of molecules such as CD40, CD80, and CD86 provides co-stimulation to enhance the T cell response (12). The expression of inhibitory molecules, such as programmed cell death ligand-1 (PD-L1), by dendritic cells (DCs) reduces the Th1/Th17 response to chlamydial infection (13).
Semaphorins are a large family of proteins initially described as axon guidance cues involved in neural development (14). Studies over the past decade have indicated the critical role of semaphorins in cell proliferation, tumour growth, angiogenesis, and immune functions (15). Semaphorin 3E (Sema3E) is a secreted protein that plays diverse functions in immune cells in a context-dependent manner (16)(17)(18). Sema3E acts as a chemoattractant for macrophages in adipose tissue, and p53-induced upregulation of Sema3E leads to tissue inflammation (19). On the other hand, Sema3E is expressed in macrophages of atherosclerotic plaques and inhibits macrophage migration towards chemokines by disrupting the re-organization of the actin cytoskeleton (20). Sema3E is expressed in numerous cell types such as adipocytes (19), DCs (21), thymic epithelial cells (22), macrophages (20), tumour cells (17), and osteoblasts (23). Sema3E binds to the receptor PlexinD1 (PlxnD1), which is highly expressed in embryonic tissues, osteoblasts, lung mesenchyme, adrenal gland, mammary gland, small intestine, and immune cells (24). Sema3E deficiency in mice leads to exaggerated allergic airway inflammation, remodelling, and airway hyperresponsiveness, while intranasal recombinant Sema3E treatment reduced house dust mite-induced allergic asthma (25,26). Intranasal exogenous Sema3E protects mice from allergic asthma by reducing eosinophilic inflammation, serum IgE, and Th2 cytokines (26). We recently reported that Sema3E plays an important role in host defence against bacterial infection by showing that Sema3E-deficient mice exhibited more severe disease and higher bacterial growth following Chlamydia muridarum lung infection (27). In addition, we found that DCs from Chlamydia-infected Sema3E KO mice failed to induce protective T cell responses (27). The purpose of this study, delivering exogenous Sema3E to wild-type (WT) and Sema3E-deficient mice, was, on the one hand, to further test this conclusion using a complementary approach and, on the other hand, to test the therapeutic potential of Sema3E in promoting immunity against bacterial infections.
This study found that treatment of WT and Sema3E KO mice with recombinant Sema3E-Fc protected the mice from chlamydial infection with lower bacterial burden and inflammation in the lung. The protective effect was associated with enhanced Th1/Th17 response but reduced Treg response. The exogenous Sema3E treatment also increased the expression of co-stimulatory molecules and reduced the immune-inhibitory marker PD-L1 on the surface of DCs. The recruitment/ expansion of DCs, including the CD103+ subset to the site of infection, was also increased. Altogether, the study verified the importance of Sema3E in host defence against bacterial infection and suggested that a supplement of Sema3E could be a potential strategy to enhance protection against bacterial infection or vaccination.
Materials and methods
Animals

Sema3E-/- BALB/c mice were obtained from a GMC animal house at the University of Manitoba, Winnipeg, Canada. Sema3E-/- mice on the B6 background were gifted by Dr. F. Mann, Université de la Méditerranée, Marseille, France. These mice were backcrossed for ten generations to obtain Sema3E-/- mice on the BALB/c background. All mice were maintained in the Animal Care Facility of the University of Manitoba. The immunophenotypic analysis of Sema3E-/- mice has been shown previously (25). The use of all mice in this study adhered to the ethical standards prescribed by the Canadian Council on Animal Care (CCAC) and The University of Manitoba Animal Ethics Committee (Protocol # 19-029).
Organism
Chlamydia muridarum (Cm) used in this study was propagated and cultured as described previously (28). Briefly, HeLa 229 cell monolayers in Eagle's MEM (containing 10% FBS and 2 mM L-glutamine) were infected with Cm for 48 h. Infected cells were harvested using sterile glass beads, and Cm elementary bodies (EBs) were isolated by discontinuous density gradient centrifugation. The purified Cm elementary bodies were stored in sucrose-phosphate-glutamic acid (SPG) buffer at −80°C.
Infection of mice and quantification of chlamydial in vivo growth
Mice were infected intranasally with 1×10^3 inclusion-forming units (IFUs) of Cm in 40 μl of SPG buffer. Infected mice were sacrificed on day 7 post-infection. Three mice were used per group in each experiment. The chlamydial load in the lung was determined as described previously (28). Briefly, lung tissue suspensions aseptically isolated from mice were homogenized using a cell grinder in SPG buffer. Homogenized tissue was centrifuged at 1900 × g for 30 min at 4°C, and the supernatant was collected and kept at −80°C. HeLa 229 cells were grown to confluence in 96-well flat-bottom microtiter plates for Cm quantitation. The monolayers were then washed with 100 μl of Hank's Balanced Salt Solution (HBSS), inoculated in triplicate with 100 μl of serially diluted samples, and incubated at 37°C for 2 hours. After washing the plates, 200 μl of MEM containing cycloheximide (1.5 μg/ml) and gentamicin (10 μg/ml) was added. The plates were incubated at 37°C in 5% CO2. After 48 hours, the culture medium was removed, and the cells were fixed with absolute methanol. Fixed cells were washed and incubated with a Chlamydia genus-specific murine mAb at 37°C for 70 minutes. The cells were washed, stained with HRP-conjugated goat anti-mouse IgG, and developed with a 4-chloro-1-naphthol (Sigma-Aldrich) substrate. The number of inclusions was counted under a microscope. Five fields per well were counted, and the chlamydial load was calculated based on the dilution titers of the original inoculum.
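As an illustration of this titer back-calculation (a generic sketch: the fields-per-well factor, volumes, and example numbers are assumptions, not values from the paper):

def ifu_per_lung(mean_inclusions_per_field, fields_per_well,
                 dilution_factor, inoculum_ml, homogenate_ml):
    """Back-calculate chlamydial load (IFU per lung) from inclusion counts
    averaged over the counted microscope fields of a well.

    fields_per_well: how many such fields cover one well (depends on the
    objective used; assumed known). dilution_factor: fold dilution of the
    sample inoculated into that well."""
    ifu_per_well = mean_inclusions_per_field * fields_per_well
    ifu_per_ml = ifu_per_well * dilution_factor / inoculum_ml
    return ifu_per_ml * homogenate_ml

# Example (hypothetical numbers): 12 inclusions/field, 200 fields/well,
# 1:100 dilution, 0.1 ml inoculum, lung homogenized in 2 ml SPG buffer.
print(ifu_per_lung(12, 200, 100, 0.1, 2.0))  # -> 4800000.0 IFU per lung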
Semaphorin 3E treatment of mice
Recombinant mouse Semaphorin 3E Fc protein (Sema3E-Fc) and control IgG Fc in saline (saline-Fc) were purchased from R&D Systems and used according to the manufacturer's instructions. WT BALB/c mice and Sema3E KO BALB/c mice were treated intranasally with either Sema3E-Fc or saline-Fc (0.3 mg per mouse) two hours before Cm infection and on days 1 to 6 consecutively after Cm infection. Mice were sacrificed on day 7 post-infection and analyzed for bacterial load and cytokine responses (Supplementary Figure 1A). To analyze the surface phenotype of DCs following Cm infection, Sema3E KO and WT mice were infected intranasally with Cm, treated with Sema3E-Fc or saline-Fc, and sacrificed after 3 days.
Isolation of lung and spleen for the preparation of single-cell suspensions
To obtain single lung cell suspensions, lung tissues isolated from mice at the designated time points of infection were digested in 2 mg/ml collagenase XI (Sigma-Aldrich, Oakville, Ontario, Canada) dissolved in RPMI 1640 medium at 37°C for 60 min. In the last 5 min of incubation, EDTA (2 mM, pH 7.2) was also added to the medium. After filtering cells through 70 μm cell strainers, red blood cells (RBCs) were lysed with ACK lysis buffer (composed of 150 mM NH4Cl, 10 mM KHCO3, and 0.1 mM EDTA). Spleen single-cell suspensions were made by digesting spleens with 2 mg/mL collagenase D (Roche Diagnostics, Meylan, France) in RPMI 1640 for 30 min at 37°C. The RBCs were then lysed using ACK lysis buffer. The isolated cells were washed and resuspended in complete RPMI-1640 medium (RPMI-1640 supplemented with 10% FBS, 1% L-glutamine, and 25 μg/ml gentamicin) for further analysis.
DC purification
Purification of DC was performed, as described previously (29). Briefly, splenic and lung single-cell suspensions were incubated with CD11c microbeads (Miltenyi Biotec) for 15 minutes at 4°C and passed through magnetic columns for positive selection. The purity of DC was found to be >95%.
For T cell intracellular cytokine analysis, lung single-cell suspensions were cultured at 7.5 × 10^6 cells/well in the presence of phorbol 12-myristate 13-acetate (50 ng/ml; Sigma-Aldrich, St Louis, MO, USA) and ionomycin (1 μg/ml; Sigma-Aldrich) in complete RPMI 1640 medium for 6 h at 37°C. Brefeldin A (5 μg/ml; eBioscience) was added for the last 3 hours of incubation to accumulate cytokines intracellularly. Cultured cells were then washed twice and incubated with Fc receptor-blocking Abs (anti-CD16/CD32 antibody; eBioscience) for 15 min on ice to prevent non-specific staining. Following this step, surface marker staining was performed using fluorescent-labelled anti-CD3 PE-Cy7, anti-CD4 FITC, anti-CD25 APC, or anti-CD8α PE mAbs (eBioscience). The surface-stained cells were then fixed and permeabilized using Cytofix/Cytoperm (BD Pharmingen). Later, cells were intracellularly stained with anti-IFN-γ-APC, anti-IL-17-APC, or isotype control antibodies (eBioscience). After staining, cells were washed, resuspended in staining buffer, and data were collected by a BD FACSCanto™ II (BD Biosciences) and analyzed using FlowJo. FoxP3 staining was done using the Foxp3/Transcription Factor Staining Buffer Set (eBioscience) according to the manufacturer's instructions, with anti-Foxp3-PE or isotype control antibodies (eBioscience). For CD4 and CD8 T cells, the analysis was performed on gated CD3+ cells. CD4+FoxP3+CD25+ cells gated on CD4 T cells were analyzed as Tregs (Supplementary Figure 1C).
Histopathological analysis
The lung tissues aseptically obtained from the different mouse groups at the indicated time points were fixed in 10% formalin. Haematoxylin and eosin (H&E) staining was done on tissue sections, and histopathological changes were observed under light microscopy, as described (7). The degree of lung inflammation was analyzed using a semi-quantitative grading system (9); grading scale: 0, normal; 1, mild inflammation, granuloma formation, cellular infiltration of less than 25% of the area, no prominent infiltration into adjacent alveolar septae or air space; 2, mild interstitial pneumonitis, diffuse cellular infiltration in some regions (25%-50%), septal congestion, interstitial edema; 3, inflammatory cell infiltration into perivascular, peribronchiolar, and alveolar septae and air space (50%-75% of the area); 4, over 75% of the area of the lung filled with infiltrating cells.
Statistical analysis
An unpaired Student's t-test (GraphPad Prism software v4, GraphPad, San Diego, USA) was used to assess statistical significance when comparing two groups. For comparing several groups of mice, a one-way analysis of variance (ANOVA) was used. A p-value of less than 0.05 was considered significant.
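A minimal SciPy sketch of the two tests named above (the example data are illustrative, not the study's measurements):

from scipy import stats

def compare_groups(*groups, alpha=0.05):
    """Unpaired Student's t-test for two groups; one-way ANOVA for more."""
    if len(groups) == 2:
        statistic, p = stats.ttest_ind(groups[0], groups[1])
    else:
        statistic, p = stats.f_oneway(*groups)
    return statistic, p, p < alpha

# Example (hypothetical log10 IFU loads): saline-Fc vs. Sema3E-Fc mice.
print(compare_groups([6.1, 6.4, 6.2], [5.3, 5.1, 5.5]))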
Semaphorin 3E treatment protects against chlamydial lung infection
To evaluate the potential of exogenous Sema3E to enhance the capacity to fight chlamydial infection, we treated WT mice and Sema3E KO mice intranasally with either recombinant Sema3E-Fc or control saline-Fc, two hours before Cm infection and on days 1 to 6 consecutively after infection (Supplementary Figure 1A). In both WT and Sema3E KO mice, we observed a significant decrease in chlamydial load in the mice that received Sema3E-Fc compared to those that received saline-Fc alone (Figure 1A). In addition, lung histological analysis showed that mice treated with Sema3E-Fc exhibited reduced inflammatory cell infiltration compared to saline-Fc-treated controls (Figures 1B, C). Similarly, Sema3E-Fc treatment reduced body weight loss in both WT and Sema3E KO mice compared to saline-Fc-treated mice after Cm infection (Figure 1D). The observations that the supplement of exogenous Sema3E corrected the failure of KO mice in protection against infection and further enhanced the protection of WT mice confirm our previous finding on the crucial role of Sema3E in host defence against infection and the effectiveness of supplementing Sema3E to further enhance immunity to infection.
Sema3E treatment promotes Th1/Th17 responses while reducing IL-4/IL-10 responses after chlamydial infection at the population level
To address the mechanism of Sema3E function in protection, we next investigated the effect of Sema3E-Fc treatment on the cytokine responses of mice after Cm lung infection. We first examined cytokine responses in the local lung tissues. Sema3E-Fc-treated WT and Sema3E KO mice showed a significant increase in the IFN-γ and IL-17 cytokines in the lungs compared to those given control saline-Fc (Figure 2A). In contrast, the IL-4 and IL-10 cytokines were reduced in Sema3E-Fc-treated mice (Figure 2A). To further understand the effect of Sema3E treatment on antigen-driven T cell immune responses, we studied cytokine responses of ex vivo splenocytes isolated from Cm-infected mice upon restimulation with UV-killed Cm. Sema3E-Fc treatment of WT mice, compared to saline-Fc treatment, increased antigen-driven IFN-γ and IL-17 but reduced IL-4 production (Figure 2B). Similarly, Sema3E-Fc treatment of KO mice also enhanced antigen-driven IFN-γ but reduced IL-10 and IL-4 production. The results suggest that Sema3E can modulate cytokine responses differentially by preferentially promoting Th1/Th17 responses while reducing IL-4/IL-10 responses after chlamydial infection.
Sema3E treatment enhances Th1/Tc1 and Th17 cytokine response in the lung after chlamydial infection
We performed intracellular cytokine analysis of lung CD4+ and CD8+ T cells by flow cytometry to analyze T cell cytokine responses at the single-cell level. The intracellular cytokine analysis showed a higher number of IFN-γ-producing CD4+ T cells and CD8+ T cells in the lungs of Sema3E-Fc-treated mice compared to saline-Fc-treated mice (Figures 3, 4). Moreover, we found a significantly greater number of IL-17+ T cells in the lungs of Sema3E-Fc-treated mice than saline-Fc-treated mice after Cm infection (Figures 5A, B). Together, these data confirm that Sema3E can promote Th1/Tc1 and Th17 cell responses in the local tissue (lung) after Cm infection in vivo.
Sema3E treatment reduced CD4+CD25+FoxP3+ regulatory T cells in the lung after Cm infection
Since we observed lower IL-10 levels in the spleen and lungs of Sema3E-Fc-treated mice and Tregs are one of the primary sources of this cytokine, we next examined CD4+CD25+Foxp3+ T cells in the lungs of WT and Sema3E KO mice treated with Sema3E-Fc following Cm infection. We found that the proportion and number of CD4+CD25+Foxp3+ T cells were significantly lower in Sema3E-Fc-treated mice than in saline-Fc-treated mice under both Sema3E-intact and -deficient conditions (Figures 6A, B). The results suggest that Sema3E treatment can impede Treg responses during Cm infection.
Sema3E treatment alters DC surface molecule expression and the development of DC subsets
We next evaluated whether the exogenous Sema3E treatment of WT and Sema3E KO mice affects the surface expression of co-stimulatory and inhibitory molecules on dendritic cells after chlamydial infection. Compared to saline-Fc-treated mice, Sema3E-Fc-treated WT and Sema3E KO mice showed higher MHC-II, CD40, CD80, and CD86 levels on the surface of DCs following infection (Figure 7). In contrast, a lower percentage of DCs expressing PD-L1, an inhibitory surface molecule, was found in Sema3E-Fc-treated mice compared to saline-Fc-treated mice (Figure 7).
Previous studies showed that different DC subsets exhibit variable capacity in inducing a protective immune response to Cm infection (13,28). Notably, the CD103+ lung DC subset is reportedly more potent in inducing Th1/Th17 responses to Cm infection than the CD11b+ lung DC subset (13). We therefore also analyzed CD103 expression in the lung DCs of Sema3E-treated mice. We found that Sema3E treatment increased the proportion and number of the CD103+ DC subset in the lung (Figures 8A, B). The results suggest that Sema3E plays a critical role in the recruitment and development of Th1/Th17-promoting DCs, including the CD103+ DC subset, following chlamydial infection.
Discussion
A large body of evidence shows that Sema3E is involved in regulating immune responses (19, 22, 25, 30). We recently reported that Sema3E is critical for host defence against bacterial infection, mainly based on a study comparing WT and Sema3E KO mice. In this study, we used a different approach, supplementation with exogenous Sema3E, to further confirm the role of this molecule in host defence and explore the potential of Sema3E administration as a prevention and therapeutic strategy for bacterial infections. We found that exogenous Sema3E treatment can significantly improve the condition of mice with intact or deficient Sema3E following chlamydial infection in the lung. The improvements are characterized by lower chlamydial growth, less severe lung pathology, and reduced body weight loss following infection. Interestingly, we found that the administration of exogenous Sema3E to either Sema3E KO or WT mice can modulate T cell responses, preferentially enhancing the right cytokine profile to fight against the infection. Specifically, we have observed that Sema3E treatment enhances IFN-γ and IL-17 production but reduces the IL-10 and IL-4 cytokine responses in the lungs and spleen after chlamydial infection. This observation was similar to what was shown in the study of allergic asthma, where Sema3E treatment enhanced the secretion of IFN-γ and reduced IL-4 in the airways upon house dust mite (HDM) challenge (26). The T cell-modulating effect of Sema3E appears related to its impact on DC development, including DC subsets. The changes are consistent with the increase of Sema3E levels in the exogenous Sema3E-treated mice (data not shown). These studies further confirm the promoting role of Sema3E in protective immunity against bacterial infection. More importantly, the finding that exogenous Sema3E can promote protective immunity in both Sema3E-deficient and -intact mice strongly suggests the potential of this protein as a target in developing novel preventive and therapeutic approaches against bacterial infections and other diseases.

Figure caption: Higher IFN-γ production by CD4 and CD8 T cells after Sema3E-Fc treatment of Cm-infected Sema3E KO mice. CD4+ and CD8+ T cells isolated from the lungs of Cm-infected mice at day 7 post-infection were stained intracellularly for IFN-γ. (A) Representative flow cytometric images and (B) summary of flow cytometric analysis showing the percentage and absolute number of IFN-γ-producing CD4 and CD8 T cells. Data are shown as mean ± SD (n = 3) and represent one of three independent experiments with similar results. *p < 0.05, **p < 0.01.
The most significant observation in this study is the influence of Sema3E on DC development/recruitment. We observed that Sema3E treatment enhanced the frequency and number of the CD103+ DC subset in the lung. The preferential protective role of the CD103+ pulmonary cDC subset has been reported in chlamydial lung infection. Our previous studies using an adoptive transfer approach showed that CD103+ pulmonary DCs induce more robust Th1 and Th17 responses than CD11b+ DCs (13). The finding in the present study concurs with the reported data, showing an association of increased CD103+ DCs with an enhanced Th1/Th17 response in the Sema3E-treated mice. Notably, the modulating function of Sema3E on lung DC subsets was recently reported in an allergic asthma model induced by HDM. The study found that CD103+ DCs from Sema3E-treated mice induced higher IFN-γ production by T cells in an ex vivo DC-T co-culture system than DCs from saline-Fc-treated mice (26). Future studies are needed to delineate the role of Sema3E in the preferential development of specific DC subsets and the consequent promotion of the types of T cell response in infections.
In addition to the enhanced CD103+ DC subset, we also found that Sema3E-Fc-treated mice, compared to saline-Fc-treated mice, exhibited higher surface expression of MHC-II and the co-stimulatory CD40, CD80, and CD86 molecules but lower expression of PD-L1, an inhibitory surface molecule. The maturation of DCs, with higher expression of MHC and co-stimulatory molecules, is critical for developing protective immunity against chlamydial infection (31).
Another interesting finding in the present study is the inhibitory effect of Sema3E treatment on Treg responses in chlamydial infection. Tregs play a significant role in regulating the immune response, which can be either favourable or detrimental in host defence against infection. In Chlamydia studies, Tregs are more associated with the inhibition of protective immunity. Several recent studies have shown the detrimental role of Treg cells in immunity to chlamydial infection. Depletion of Tregs reduced genital Cm infection by enhancing the Th1 response (32).
Inhibition of Treg response is one of the mechanisms by which NK cells protect against chlamydial lung infection (11,33). In agreement with reduced Treg response, we also observed lower IL-10 cytokine in the lungs and spleen of Sema3E-treated mice. The inhibitory effect of Sema3E on PD-L1 expression on the surface of DCs is likely necessary for the suppression of Treg by Sema3E. In addition, the promoting effect of Sema3E on Th1 and Th17 responses can also participate in the inhibition of Treg. Indeed, it has been found that the expression of PD-L1 in DCs can inhibit Th1/Th17 responses, while Th17 is essential for the Th1-inducing DCs (13). Moreover, higher expression of PD-L1 in DC was also associated with lower IL-12 but more elevated IL-10 cytokines, thereby promoting Treg responses (11).
Chlamydial infections are routinely treated with antibiotics such as azithromycin or doxycycline (34). Considering that antibiotic treatments are associated with concerns such as side effects, recurrent infections, antibiotic resistance, and safety in pregnancy (35)(36)(37)(38)(39), the development of non-antibiotic-based therapy is needed. Our studies showed that exogenous Sema3E treatment could be considered a novel therapeutic approach to treat chlamydial infections or be used as a supplement to mitigate the side effects of antibiotics. We have previously reported the endogenous secretion of Sema3E in response to chlamydial lung infection in mice (27). The present study suggests that higher levels of Sema3E would further enhance immunity to chlamydial infection in vivo. Enhancing protective immunity in both Sema3E-deficient and -intact mice by exogenous Sema3E treatment is particularly encouraging. It would be important to study the prevalence of Sema3E deficiency in humans and its relationship with chlamydial persistence in patients. Apart from chlamydial infection, accumulating evidence suggests the relevance of semaphorins as novel targets in cancer, autoimmune, and allergic disorders. The therapeutic ability of the uncleavable variant of Sema3E (Uncl-Sema3E) was demonstrated in multiple tumour models, where it acts as a novel inhibitor of tumour growth, metastasis, and angiogenesis (40). Also, in vivo studies in mice indicate that Sema3E treatment reduces allergic asthma by reducing eosinophilic inflammation, serum IgE levels, and Th2 cytokine responses and has been proposed as a novel treatment option for allergic asthma (26). However, the efficacy of semaphorins remains to be examined in clinical settings. Much more study in this area is needed.

FIGURE 7 Sema3E-treated mice showed an altered surface phenotype of DCs following Cm infection. Sema3E KO and WT mice were infected intranasally with Cm, treated with Sema3E-Fc or saline-Fc, and sacrificed after 3 days. DCs were isolated from the lungs using CD11c microbeads, and cells were stained for surface markers and analyzed using flow cytometry. Expression of CD40, CD80, CD86, and PD-L1 on CD11c+MHCII+ cells (dark shaded histograms) and isotype control (light shaded histograms) are shown. MHCII expression on CD11c+ cells and isotype control were recorded. The percentages and mean fluorescence intensity (MFI) of positive cells are indicated. One of two independent experiments with similar results is shown (n = 3). *p < 0.05, **p < 0.01, ***p < 0.001.
In summary, using a model of exogenous Sema3E administration, we confirmed the role of this molecule in host defence against chlamydial infection and revealed the potential of Sema3E treatment to correct the defect in immune protection in Sema3E-deficient individuals and to enhance protection in intact individuals. The study suggests that the immunomodulatory function of Sema3E on T cell and DC function may be considered in the development of preventive/therapeutic strategies for infectious/inflammatory diseases.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Material. Further inquiries can be directed to the corresponding author.
Ethics statement
The animal study was reviewed and approved by The University of Manitoba Animal Ethics Committee (Protocol # 19-029).

[Figure caption] Higher CD103+ lung DC subset in Sema3E-Fc-treated mice than in saline-Fc-treated mice after Cm infection. Mice were intranasally inoculated with 1x10^3 IFUs of Cm. Lung cells were collected from Sema3E-Fc- or saline-Fc-treated WT and Sema3E KO mice on day 3 post-infection, stained for surface markers and analyzed using flow cytometry. (A) Representative flow cytometric images of the CD103+ lung DC subset. (B) The percentages and number of the CD103+ lung DC subset. Data are shown as mean ± SD (n = 3) and represent one of three independent experiments with similar results. **p < 0.01.
POI Information Enhancement Using Crowdsourcing Vehicle Trace Data and Social Media Data: A Case Study of Gas Station
Points of interest (POIs) such as stores, gas stations, and parking lots are particularly important for maps. Using the gas station as a case study, this paper proposes a novel approach to enhance POI information using low-frequency vehicle trajectory data and social media data. First, the proposed method extracts the spatial information of gas stations from sparse vehicle trace data in two steps. The first step proposes the velocity sequence linear clustering algorithm to extract refueling stop tracks from each individual trace line after modeling vehicle refueling stop behavior using movement features. The second step uses the Delaunay triangulation to extract the spatial information of gas stations from the collective refueling stop tracks. Second, the attribute information and dimensional sentiment semantic information of the gas station are extracted from social media data using a text mining method and a tripartite graph model. Third, the gas station information is enhanced by fusing the extracted spatial data and semantic data using a matching method. Experiments were conducted using 15 days of vehicle trajectories from 12,000 taxis and social media data from Dazhongdianping in Beijing, China, and the results showed that the proposed method could extract the spatial, attribute, and review information of gas stations simultaneously. Compared with ground truth data, the automatically enhanced gas station data proved to be of higher quality in terms of correctness, completeness, and timeliness.
Introduction
Points of interest are an indispensable component of basic geographic information and play an important role in a variety of fields, including Location Based Services (LBS), scientific research, and commercial applications [1-3]. Keeping point of interest (POI) data up-to-date, correct, and complete with a short updating cycle and at low cost is extremely important [1-3]. However, POI data such as gas stations and parking lots have traditionally been obtained mainly from field surveys, remote sensing, and manual annotation [4-6]. These methods are costly, have a long update cycle, and are time-consuming, which limits POI data services [5,6]. Consequently, a low-cost and fast method to enhance and update POI data automatically is very significant.
In the era of big data, Volunteered Geographic Information (VGI) data (e.g., trajectories) and User-Generated Content (UGC) data (e.g., social media data) are available and provide new directions for enhancing and updating POI data in three ways. One is through collaborative mapping programs [5,6], such as OpenStreetMap, which are constructed manually from crowdsourcing. However, these sources are limited (e.g., focused on a single domain, with limited spatial coverage) and are updated slowly due to insufficient user participation [5,6]. Another option is to extract POI data from global positioning system (GPS) trajectory data [7-11]. At present, extracting and updating spatial data using collections of GPS traces is a relatively new area of research, and these works include map updating [8], data extraction such as road data [9,10] and POI data [4,7], and activity behavior analysis [7,11]. Compared with collaborative mapping programs, this method is real-time and low-cost, updates quickly, and can monitor temporal and spatial changes in activity patterns at POI places. Nevertheless, a GPS trajectory only embodies spatial information and cannot provide the attribute information (e.g., name, descriptions) necessary for a POI. The last option is to extract POIs from social media data [1,3,12-14]. Many stores use web sites such as the Yellow Pages and Dianping [15] to advertise their services and publish attribute information such as telephone numbers and addresses. Meanwhile, comment text data containing user reviews about POIs are posted on websites. Geotagged UGC data have opened a new way of acquiring POI semantic information that cannot be obtained from GPS traces [12,13]. However, POI information from social media also has problems, such as outdated entries, poor geometry information (points only), redundant POIs due to the frequent opening, moving, or closing of businesses, and the update lag of web map data [12]. Moreover, social media data are sparse in space and time compared with GPS traces [16], which makes it difficult to sense dynamic changes in spatial information (e.g., the area of a POI, its subsidiary facilities, infrastructure) and activity behavior at POI places [17].
Based on the above analysis, it is difficult to obtain high-quality, up-to-date, semantically rich POI information using only one type of VGI data (GPS trace data or social media data). To accomplish this, POI data must be enhanced by coupling multi-sourced data [14,16]. In this work, using the case of the gas station, a big data-driven method is developed for enhancing POI data by coupling sparse vehicle GPS traces and geo-tagged social media data. Unfortunately, using multisource data to enhance POI information poses several challenges:
• First, using sparse vehicle GPS traces to extract the spatial information of gas stations raises two key problems. One is extracting the refueling stop sub-track from each trace. Vehicle traces contain various stop behaviors, and there is no existing model or algorithm to identify and extract refueling stop sub-tracks. It is therefore necessary to establish a refueling stop behavior model that distinguishes refueling from other stop behaviors and to propose an efficient algorithm that extracts refueling stop tracks using this model. The other problem is that a new way is required to extract the spatial information accurately from the collective refueling tracks.
• Second, social media data contain not only attribute data of gas stations (e.g., name, address) but also review information along different dimensions (e.g., service, product). Therefore, it is necessary to present a new method to mine the attribute and dimensional sentiment semantic information from the unstructured comment text data simultaneously.
• Third, an efficient method should be proposed to enhance the POI information by fusing the two kinds of different-dimension information from different sources.
To overcome the above challenges, a novel method for enhancing POI data by coupling vehicle GPS trace data and social media data is presented. The main contributions of this article are as follows:
1. The vehicle refueling stop behavior model and the velocity sequence linear clustering algorithm (VSLC) are proposed to identify and extract refueling stop sub-tracks from each trace. The spatial information of gas stations is then extracted from the collective refueling stop tracks by the Delaunay triangulation.
2. A new way of coupling a text mining method and a tripartite graph model is presented to extract the attribute information and dimensional sentiment semantics of gas stations from the social media data of www.dazhongdianping.com (Dianping) [15].
3. The POI information is enhanced using a matching method that fuses the spatial and attribute information extracted from the different VGI data.
4. An experiment using 15 days of taxi GPS traces and social media data from Dianping in Beijing, China verifies the novel method.
The remainder of the paper is organized as follows: a review of the related work is given in Section 2. Section 3 outlines the procedure of the gas station information extraction method. Section 4 describes the experiments on real GPS trace and social media datasets and evaluates the proposed method. Conclusions and directions for future research are discussed in Section 5.
Activity Stop Behavior Detection from GPS Trajectory for Extracting POI Information
A trajectory, according to the Stop/Move model [18,19], is composed of a series of stops and moves. Stop detection is important for recognizing significant locations or places, such as in POI extraction from GPS trajectories. There is a rich body of research on how to detect stops from trajectory data, which can be divided into two categories: static methods [19-21] and dynamic methods [22]. In the static approach [19-21], significant places such as gas stations are defined in advance, and stop locations are then detected by computing their spatial intersections with trajectories and the stay duration. However, the static approach is not suitable for extracting gas stations without well-defined POI data.
More studies have investigated dynamic approaches, where no prior knowledge regarding stops is given and stops can only be discovered by considering the trajectory's spatial-temporal features such as velocity, distance, and direction [22-30]. Several classical clustering algorithms have been introduced to extract stops from trajectories; for example, modified K-Means methods have been adopted to detect stops [22]. The DBSCAN [23], the modified DBSCAN algorithm, and the density and join-based clustering algorithm (DJ-Cluster) [24] have been proposed to detect meaningful places. However, there are only a few stop points in the refueling process due to the low sampling rate of the trajectory data, and these density-based clustering algorithms only consider spatial characteristics, which makes them unsuitable for the low-frequency trajectories in this work. Many studies have taken both spatial and temporal characteristics into consideration. Based on the Stop/Move model, the CB-SMoT [25] and DB-SMoT [26] algorithms have been presented to extract stop tracks considering temporal, speed, and spatial features. Reference [27] improved the CB-SMoT algorithm by proposing an alternative way of calculating the Eps parameter. Unfortunately, there are many missing track points and drifts in vehicle traces due to signal loss caused by buildings, trees, and other shelters at gas stations, and these algorithms are not robust enough against missing signals and data drifts. To solve the data loss problem, some stop extraction approaches have been proposed [28-30] that take different trajectory features into account. However, these methods cannot be used directly to detect vehicle refueling stop events and extract the spatial information of gas stations. There have been some works that model and analyze vehicle refueling stop behavior [17,21,31], but these methods are also unable to extract gas station spatial information automatically without other data. Therefore, it is necessary to propose a method to detect refueling stop behavior and extract refueling tracks for the extraction of gas station spatial information from sparse vehicle traces.
Extracting POIs' Semantic Information from Social Media Data
A POI database that contains spatial information and attribute semantic information (e.g., names, addresses, review descriptions) to support user queries is especially useful. However, the absence of semantic attribute content (e.g., name, address, sentiment) in GPS tracks hampers the integrity of data extraction and deeper analysis such as user preferences [16,32]. Social media data are generated when people post, comment, or check in on social networking sites such as Foursquare, Twitter, Weibo, and Dianping [13]. Social media data contain not only the attribute information of POIs but also opinions about the services and POIs, shared through users' photos, comments, videos, etc. Consequently, extracting POI attribute semantic information from social media data has drawn much attention [33]. For example, the attribute information of POIs, including addresses and store names, has been extracted from the Yellow Pages [1,3]. The authors of [6] proposed a method to enhance POI data (improving the coverage and diversity of POIs) by integrating crowdsourced POIs from different Web sources. These works only extracted the attribute information of POIs; they did not extract semantic information such as the sentimental evaluation of POIs, nor did they pay attention to users' emotional perception of POI places or services. For semantic extraction, the term frequency-inverse document frequency (TF-IDF) approach and semantic annotation techniques have been used to infer semantic tags of POIs using Flickr geo-tagged photos and Twitter check-in data [34,35]. The work in [36] proposed a method and interest measures to discover important locations considering historical user data and each user's preferences. Although these works consider semantic information and user preferences, they only analyze the overall sentiment and do not consider users' evaluations of the different dimensions of POIs.
POI Information Enhancement Using Multisourced VGI Data
Keeping POI data up-to-date, of high quality, and semantically rich with a short updating cycle and at low cost is significant. However, it is difficult to achieve this goal using only a single data source (trajectory data or social media data). For example, the POI data extracted from social media data [16] (e.g., Yellow Pages, Dianping) have drawbacks such as low accuracy, redundancy, duplicates, and outdated entries [1,3,6], while GPS trace data lack attribute semantic information. Therefore, multisourced VGI data should be fused to enhance POI information. Compared with social media data, trajectory data have a high temporal and spatial resolution and can dynamically sense changes in facilities and human activity [4,7,17]. In particular, for extracting changes in the geometric features of POIs, such as the expansion of a POI or changes in its subsidiary facilities (e.g., roads, infrastructure), as well as for analyzing temporal and spatial changes in activity behavior, including the space-time distribution of activities and activity patterns [8,17], trajectory data have more advantages than social media data. Compared with GPS trace data, social media data provide rich attribute semantic information. Therefore, it is necessary to combine the real-time nature of GPS trajectories with the semantic richness of web data to obtain updated, accurate, and semantically rich POI information. Currently, there have been some studies on integrating multisource data, including GPS trajectories, mobile phone data, and remote sensing, to construct road maps and better understand urban functions [16,32,37]. To the best of our knowledge, no existing work has extracted both the spatial and semantic information of POIs using GPS trajectory data and social media data in an automated manner to enhance POI information.
Methodology
The method for enhancing gas station POI information using low-frequency vehicle GPS traces and social media data is shown in Figure 1, and it includes three key steps:
• First, after modeling the refueling stop behavior using trajectory movement parameters, the vehicle refueling stop sub-trajectories are extracted by the proposed VSLC, and the gas station spatial information is then extracted from the collective stop tracks by the Delaunay triangulation.
• Second, the attribute information and the sentiment evaluation of each dimension of the gas station are extracted by the text mining method and the tripartite graph model.
• Third, the spatial information, attribute information, and review semantic information of the gas station are integrated to enhance the POI information using the buffer matching method.
Refueling Stop Behavior Analysis and Modeling Using Movement Parameters
The vehicle refueling stop event takes place in a gas station area, and the vehicle refueling behavior includes the vehicle driving off the road, entering the gas station, staying for refueling, leaving the gas station, and returning to the road, which is a typical Stop/Move pattern [18,19], as shown in Figure 2a. According to the relationship between the locations of the stop track points and the gas station, refueling stop tracks can be divided into the five types shown in Figure 2b. Type A in Figure 2b shows that vehicles leave a large number of stop points while waiting for refueling, and these GPS points may contain many drifts and noises. Types B, C, and D indicate that vehicles leave few stop track points due to sparse trajectory sampling. Type E shows that the refueling stop points are missing and the vehicle leaves no stop trace points. Unfortunately, most refueling trajectories belong to types B, C, and D.

Movement parameters [38], including velocity, time, direction, and their changes, can describe and model refueling stop behavior. In Figure 2c, because the vehicle first decelerates and then accelerates in the process of refueling, the average velocity of the refueling trajectory presents a V shape when the average velocity of the trace is calculated [17]. A refueling stop is embodied by a track sequence with a velocity of zero or a low speed over a short distance. Although a refueling track has few stop points, the average speed (represented by Av) of the refueling stop track is much lower than the normal speed [17,21]. Therefore, stop tracks can be extracted from each vehicle trace by setting a velocity threshold (represented by MaxAv, which is 6 km/h in this work). Naturally, it is necessary to integrate other features to separate refueling from other types of stops. In Figure 2d, the direction change of each refueling track point (the direction change of a track point is called the turning angle [38], represented by TurnAng) ranges from 0 to 90° (e.g., the TurnAng of point p1 is θ in Figure 2d). Commonly, there are four consecutive obvious direction changes in the process of refueling, as shown in Figure 2d. Due to sparse trajectory sampling, the consecutive direction changes are not obvious when the stop is of type C or D, but there should be at least one change. When the stop is of type E in Figure 2b, it disturbs refueling event detection. Therefore, for a refueling stop track, the number of direction changes (represented by Dcnum) is greater than or equal to one. Moreover, the duration of a refueling stop (represented by Tstop) is generally between 3 min and 15 min [17,21]. Therefore, the refueling event (RE) of an individual track line can be modeled using the movement parameters:

RE: Av ≤ MaxAv, Dcnum ≥ 1, 3 min ≤ Tstop ≤ 15 min.

In addition, collective refueling track features and gas station geometric features can also help to identify the gas station. As vehicles travel in one direction within the gas station, the moving direction of the collective refueling tracks (represented by Dircoll) of each gas station is one-way, as shown in Figure 2d. The area of the gas station polygon extracted from the collective refueling tracks should meet the construction standard of a gas station (represented by Area, with Area ∈ [600, 20,000] m² in this work) [39]. Therefore, the extracted gas station is constrained by the GS model:

GS: Dircoll is one-way, Area ∈ [600, 20,000] m².
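To make the RE model concrete, the sketch below expresses it as a predicate over a candidate stop sub-track, assuming a track is a time-ordered list of (x, y, t) points in a metric coordinate system; the helper and constant names are illustrative and not from the authors' implementation.

```python
import math

MAX_AV = 6 / 3.6      # MaxAv: 6 km/h expressed in m/s
MIN_STOP_S = 3 * 60   # lower bound on refueling duration (3 min)
MAX_STOP_S = 15 * 60  # upper bound on refueling duration (15 min)

def turning_angle(p_prev, p, p_next):
    """Turning angle at p: absolute change of heading, in degrees."""
    h1 = math.atan2(p[1] - p_prev[1], p[0] - p_prev[0])
    h2 = math.atan2(p_next[1] - p[1], p_next[0] - p[0])
    ang = abs(math.degrees(h2 - h1)) % 360
    return min(ang, 360 - ang)

def is_refueling_stop(track):
    """RE model: Av <= MaxAv, Dcnum >= 1, and 3 min <= Tstop <= 15 min.

    track: time-ordered list of (x, y, t) tuples in metres/seconds.
    """
    if len(track) < 2:
        return False
    dist = sum(math.dist(track[i][:2], track[i + 1][:2])
               for i in range(len(track) - 1))
    t_stop = track[-1][2] - track[0][2]
    av = dist / t_stop if t_stop > 0 else 0.0
    # count direction changes with a turning angle in (0, 90] degrees
    dc_num = sum(1 for i in range(1, len(track) - 1)
                 if 0 < turning_angle(track[i - 1], track[i], track[i + 1]) <= 90)
    return av <= MAX_AV and dc_num >= 1 and MIN_STOP_S <= t_stop <= MAX_STOP_S
```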
Refueling Stop Tracks Extraction Using Velocity Sequence Linear Clustering Algorithm
As the analysis in Section 3.1.1 showed, the refueling stop sub-trajectory can be extracted from each trace by setting the velocity threshold MaxAv. Meanwhile, stop and move behaviors should each last for a certain time in reality. For example, let S1 be a stop sub-track: if its time duration (from S1.first to S1.last) is no greater than MinStop, then S1 is a moving sub-track. Let S2 be a moving sub-track: if its time duration is no greater than MinMove, then S2 is a stop sub-track. Here, MinMove is the minimum duration that a normal moving behavior should last, and MinStop is the minimum duration that a normal stop behavior should last. Based on the above analysis, the VSLC algorithm was developed to extract refueling stop sub-tracks from each trace line. For each trace line, the stop and move states of the trajectory segments are serialized according to the velocity threshold MaxAv, taking the trace segment as the basic unit (a trace segment is the line segment formed by two temporally adjacent points). Sub-tracks are extracted by clustering same-state trace segments following the linear time sequence. The sub-tracks are then clustered again to extract the stop sub-tracks according to the thresholds MinStop and MinMove. Last, the refueling stop sub-tracks are extracted by removing the non-refueling stop sub-tracks according to the refueling behavior features (the RE model in Section 3.1.1). The VSLC algorithm consists of the following steps:
1. Step 1, parameter values are defined. Determine the average speed threshold MaxAv, the minimum movement duration threshold MinMove, and the minimum stop duration threshold MinStop.
2. Step 2, trajectory segment speed serialization. For each trace, the average velocity of each trace segment, represented by TSAv, is calculated. The trace segment is considered to be in the stop state when TSAv ≤ MaxAv, and the state is represented by 0; conversely, the trajectory segment is considered to be in the move state, represented by 1, as per Step 2 in Figure 3.
3. Step 3, clustering trajectory segments. Sub-tracks are generated by merging trajectory segments with the same state in the direction of time, as per Step 3 in Figure 3.
4. Step 4, extracting stop sub-tracks. For each sub-track generated in Step 3, if the time duration of the sub-track is lower than MinMove or MinStop, the sub-track is changed into the opposite state as trajectory noise. The stop sub-tracks are then extracted by clustering the sub-tracks again according to their state, as per Step 4 in Figure 3.
5. Step 5, extracting refueling stop sub-tracks. A stop sub-track is considered a refueling stop track when its trajectory features accord with the RE model in Section 3.1.1, as per Step 5 in Figure 3.
6. Step 6, collective refueling stop track extraction. The above steps are repeated until all vehicle traces are processed; the extraction results are the collective refueling stop tracks.
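The following Python sketch illustrates Steps 2-5 under the same assumptions as the predicate above; it is a minimal reading of the algorithm description, not the authors' code, and the RE-model check is passed in as a predicate.

```python
import math

def vslc(track, max_av=6 / 3.6, min_stop=300, min_move=120, is_refueling=None):
    """Velocity Sequence Linear Clustering (sketch).

    track: time-ordered list of (x, y, t) in metres/seconds.
    Returns the refueling stop sub-tracks of the trace.
    """
    if len(track) < 2:
        return []
    # Step 2: label each trace segment 0 (stop) or 1 (move) by its average speed TSAv.
    states = []
    for a, b in zip(track, track[1:]):
        dt = b[2] - a[2]
        v = math.dist(a[:2], b[:2]) / dt if dt > 0 else 0.0
        states.append(0 if v <= max_av else 1)
    # Step 3: merge consecutive same-state segments into sub-tracks (runs).
    runs = []  # each run is [state, first_point_index, last_point_index]
    for i, s in enumerate(states):
        if runs and runs[-1][0] == s:
            runs[-1][2] = i + 1
        else:
            runs.append([s, i, i + 1])
    # Step 4: runs shorter than MinStop/MinMove are noise -> flip their state,
    # then merge adjacent same-state runs again and keep the stop runs.
    for r in runs:
        dur = track[r[2]][2] - track[r[1]][2]
        if (r[0] == 0 and dur < min_stop) or (r[0] == 1 and dur < min_move):
            r[0] = 1 - r[0]
    merged = []
    for r in runs:
        if merged and merged[-1][0] == r[0]:
            merged[-1][2] = r[2]
        else:
            merged.append(r)
    stops = [track[r[1]:r[2] + 1] for r in merged if r[0] == 0]
    # Step 5: keep only the stops that satisfy the RE model.
    if is_refueling is not None:
        stops = [s for s in stops if is_refueling(s)]
    return stops
```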
Spatial Geometric Information Extraction Using Collective Refueling Stop Tracks
The collective refueling track lines are first interpolated by the linear interpolation method [40]; the constrained Delaunay triangulation (DT) is then constructed within the interpolated track lines, as in Figure 4a. The refueling tracks of each gas station are clustered by removing the global long edges of the Delaunay triangulation, where the global long edge threshold is calculated as follows [40,41]:

GlobalLength = Mean(DT) + α × Variation(DT)

where Mean(DT) is the mean length of all edges in the DT; Variation(DT) is the standard deviation of the lengths of all edges in the DT; and α is an adjustment factor, set to 1 by default. A triangle edge is deleted when its length is greater than GlobalLength. It is then still necessary to delete local long edges in each cluster to extract the gas station polygon accurately, as shown in Figure 4b. The local long edge threshold is calculated as follows [41]:

LocalLength(pi) = β × Mean2Gj(pi)

where Mean2Gj(pi) is the mean length of the edges formed by the points in the second-order neighbors of pi in the cluster Gj, and β is a control factor used to control the sensitiveness of LocalLength(pi), set to 1.5 by default in this work [41]. A triangle edge in a cluster is deleted when its length is greater than LocalLength. In Figure 4c, the gas station polygons can be extracted by deleting the local long edges.
For a gas station polygon (or a cluster), if it meets the GS model in Section 3.1.1, the polygon can be used as a gas station; otherwise, the polygon is filtered out. In Figure 4c, clusters A, E, and C were removed because they could not be gas stations according to the track direction (the direction of the collective refueling tracks was not one-way), and the areas of polygons B and D were too small for them to be gas stations. Last, the center point of the gas station polygon was extracted as the gas station point, as in Figure 4d.
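As a rough illustration of the whole spatial-extraction step, the sketch below builds a Delaunay triangulation with SciPy, applies the reconstructed global cut Mean(DT) + α·Variation(DT), and then a local cut assumed here to be β times the mean edge length in a point's second-order neighbourhood; both the local formula and all names are assumptions, not necessarily the exact thresholds of [41].

```python
import numpy as np
import networkx as nx
from scipy.spatial import Delaunay

def cluster_by_long_edge_pruning(points, alpha=1.0, beta=1.5):
    """Cluster refueling-track points by deleting long Delaunay edges (sketch)."""
    pts = np.asarray(points, dtype=float)
    tri = Delaunay(pts)
    g = nx.Graph()
    for simplex in tri.simplices:  # collect the unique triangle edges
        for i in range(3):
            a, b = int(simplex[i]), int(simplex[(i + 1) % 3])
            g.add_edge(a, b, w=float(np.linalg.norm(pts[a] - pts[b])))
    # global cut: GlobalLength = Mean(DT) + alpha * Variation(DT)
    lengths = np.array([d["w"] for _, _, d in g.edges(data=True)])
    global_len = lengths.mean() + alpha * lengths.std()
    g.remove_edges_from([(u, v) for u, v, d in g.edges(data=True)
                         if d["w"] > global_len])
    # local cut inside each cluster (assumed): LocalLength(pi) = beta * Mean2Gj(pi)
    for comp in list(nx.connected_components(g)):
        sub = g.subgraph(comp)
        for u, v, d in list(sub.edges(data=True)):
            if not g.has_edge(u, v):
                continue  # already removed earlier in this loop
            hood = set(nx.single_source_shortest_path_length(sub, u, cutoff=2))
            local = [dd["w"] for a, b, dd in sub.edges(data=True)
                     if a in hood and b in hood]
            if local and d["w"] > beta * float(np.mean(local)):
                g.remove_edge(u, v)
    # each remaining connected component is one candidate gas station cluster
    return [sorted(c) for c in nx.connected_components(g)]
```

Each returned cluster would then be checked against the GS model (one-way track direction, area within [600, 20,000] m²) before its centroid is kept as a gas station point.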
Social Media Data from Dianping
Dianping (www.dazhongdianping.com) [15] is one of the most popular Chinese web forums and service platforms. It provides information based on users' feedback on POIs such as restaurants, hotels, gas stations, and tourism sites. Social media data from Dianping include the attribute information (e.g., location, name, address, telephone) and review information (e.g., comments, photographs, videos) of gas stations. Such review data offer good opportunities for researchers to study how humans perceive, experience, and describe POI points, and consequently to represent place semantics [12]. In particular, the unstructured review text includes not only an overall evaluation of the POI but also evaluations of its different dimensions (e.g., service, product). The attribute and sentiment semantic information of a POI can provide suggestions for users and managers based on many characteristics, such as the style of flavor or the quality of service.
Attribute and Semantic Information Extraction Using the Text Mining Method
The gas station attribute information, including names, addresses, locations, telephone numbers, and opening times, is extracted from the web pages [15] by a web crawler [1,3]. Meanwhile, the users' comment text data for each gas station are crawled from Dianping. We aimed to mine the dimensions, along with their sentiments, referred to by the reviews. Therefore, gas station features were divided into four dimensions: service (e.g., service attitude, management, coupons); environment (e.g., convenience, queuing, area); products (e.g., diesel oil, gas, charge); and subsidiary functions (e.g., car washing shop, restroom). The dimension feature dictionary and the sentiment dictionary were constructed using prior knowledge [17,21] and HowNet [42]. Dimensional sentiment information was extracted using the Python NLP library, and the steps were as follows:
1. Step 1, text preprocessing. Review text preprocessing includes sentence segmentation, tokenization, stop word removal, and POS tagging using the NLP module of Python [43].
2. Step 2, extracting feature-opinion word pairs. The assumption is that each sentence is an evaluation of one dimension of the feature object [44]. The direct links between opinion words (sentiment) and noun words (features) in a clause, e.g., co-occurrences, are extracted according to the syntactic structure and the sentiment dictionary. This step collects all co-occurrence pairs between noun words and opinion words in sentences and models the problem as a bipartite graph, as shown in Figure 5b.
3. Step 3, sentiment score calculation. Each sentence is a sentiment unit, and the sentiment value of a feature is calculated by considering the effect of opinion words, negative words, and degree words. The sentiment value of a sentence can be calculated as S = ∑_{i=1}^{n} wi × pi (with the sign of pi flipped by negative words), where n is the number of opinion words in the sentence, pi is the sentiment polarity of sentiment word i, and wi is the degree of sentiment word i. The sentiment polarity is positive, neutral, or negative [43], indicated by 1, 0, and −1 respectively. The degree words are classified into three grades, strong, middle, and weak, represented by 3, 2, and 1 respectively. Then, a tripartite graph is constructed by incorporating the sentiment scores, as shown in Figure 5c.
4. Step 4, feature word merging. Considering that different sentences may evaluate the same feature of a gas station, and that indirect links between opinion words and noun words are links across sentences, the feature words of the tripartite graph are merged into the four dimensions according to the dimensional dictionary defined previously. Figure 5d shows the dimensional sentiment information of the gas station extracted by the tripartite graph.
5. Step 5, dimension sentiment score calculation. A dimension sentiment score is calculated for each dimension of a gas station. The score of the k-th dimension of document d is calculated as Score_k(d) = (1/m) ∑_{j=1}^{m} Sj, where m is the number of sentences in document d marked with dimension k and Sj is the sentiment value of sentence j. The dimension score of each gas station is thus obtained, as shown in Figure 5d.
6. Step 6, the algorithm stops when all the documents of the gas station have been processed.
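A compact sketch of Steps 3-5 is given below, assuming the feature-opinion pairs of each sentence have already been extracted (Steps 1-2) and that the reconstructed formulas above hold; the dictionaries are toy stand-ins for the HowNet-based ones used in the paper, and all names are illustrative.

```python
from collections import defaultdict

# Illustrative dictionaries; the paper builds these from prior knowledge and HowNet.
POLARITY = {"friendly": 1, "slow": -1, "clean": 1, "expensive": -1}  # p_i
DEGREE = {"very": 3, "quite": 2, "slightly": 1}                      # w_i
DIMENSION = {"service": "service", "queue": "environment",
             "diesel": "product", "restroom": "subsidiary"}

def sentence_score(opinion_words):
    """S = sum_i w_i * p_i over the opinion words of one sentence.

    opinion_words: list of (opinion, degree_word_or_None, negated) tuples.
    """
    s = 0
    for opinion, degree, negated in opinion_words:
        w = DEGREE.get(degree, 1)
        p = POLARITY.get(opinion, 0)
        s += -w * p if negated else w * p  # a negative word flips the polarity
    return s

def dimension_scores(sentences):
    """Score_k(d) = mean of sentence scores labelled with dimension k.

    sentences: list of (feature_word, opinion_words) per sentence of a review.
    """
    buckets = defaultdict(list)
    for feature, opinions in sentences:
        dim = DIMENSION.get(feature)
        if dim is not None:  # merge feature words into the four dimensions
            buckets[dim].append(sentence_score(opinions))
    return {k: sum(v) / len(v) for k, v in buckets.items()}

# Example: two sentences from one gas station review
review = [("service", [("friendly", "very", False)]),
          ("queue", [("slow", "quite", True)])]
print(dimension_scores(review))  # {'service': 3.0, 'environment': 2.0}
```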
POI Information Fusion and Enhancement
After removing or fusing duplicate POIs from Dianping (D-POIs) by considering both their spatial and attribute information [6], the POIs from trace data (T-POIs) and the D-POIs were fused by the matching method [6,45]. A buffer zone (with a buffer radius of 50 m in this work) was established around each T-POI point, and the D-POIs falling into the buffer were fused with it. In the one-to-zero case, the T-POI data were inserted into the database directly. In the one-to-one case, the center point of the line connecting the two points was taken as the new POI point, and the attribute data were inserted into the database. In the one-to-many case, the center point of the polygon constructed from the points was taken as the new POI point, and the most complete set of attribute data was written into the database. The zero-to-one case was divided into two situations: one was that the D-POI had been dismantled or changed to another type of POI, but the web data had not been updated [6]; the other was that the T-POI had not been extracted, but the D-POI existed in the real world. The two situations were separated by deleting an outdated D-POI when no comments on that POI had been made by users within two months.
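The matching rules can be sketched as follows, assuming POI points in a projected (metric) coordinate system and simple dictionary records; the 50 m buffer and the four cases follow the description above, while the field names and the "most complete attributes" heuristic are illustrative.

```python
import math

def centroid(points):
    """Mean point of a list of (x, y) coordinates."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def fuse_poi(t_pois, d_pois, radius=50.0):
    """Fuse trajectory-derived POIs (T-POI) with Dianping POIs (D-POI).

    t_pois: list of dicts with key "pt" = (x, y)
    d_pois: list of dicts with key "pt" plus attribute fields
    """
    fused, matched = [], set()
    for t in t_pois:
        hits = [i for i, d in enumerate(d_pois)
                if math.dist(t["pt"], d["pt"]) <= radius]
        matched.update(hits)
        if not hits:                 # 1:0 -- insert the T-POI directly
            fused.append(dict(t))
        elif len(hits) == 1:         # 1:1 -- midpoint plus D-POI attributes
            d = d_pois[hits[0]]
            fused.append(dict(d, pt=centroid([t["pt"], d["pt"]])))
        else:                        # 1:n -- centroid plus richest attributes
            cands = [d_pois[i] for i in hits]
            best = max(cands,
                       key=lambda d: sum(v is not None for v in d.values()))
            fused.append(dict(best,
                              pt=centroid([t["pt"]] + [d["pt"] for d in cands])))
    # 0:1 -- unmatched D-POIs are kept only if recently reviewed (not outdated)
    for i, d in enumerate(d_pois):
        if i not in matched and d.get("recent_review", False):
            fused.append(dict(d))
    return fused
```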
Experimental Data
To verify the validity of the proposed approach, taxi GPS trajectory data and social media data from Dianping [15] in Beijing, China were tested. The trajectory data were obtained from the database of the Beijing City government data resources [46], which provides access to trajectory information in Beijing. The taxi GPS trajectory data came from 12,000 taxis that had been equipped with GPS devices, covering approximately 15 days of daily travel in Beijing. The taxi trajectory dataset contained the following fields: taxi number, latitude, longitude, travel speed, travel direction, and time. The trajectory sampling interval was 10-120 s, the average sampling interval was 40 s, the standard deviation was 8.28 s, and the positioning accuracy of the devices was 10-30 m. As shown in Figure 6, we selected part of Beijing City as our study area, and the trajectory data within this area during the period from 1-15 November 2015 were gathered to detect refueling events and extract gas stations. The social media data, including the basic information and review text of gas stations described in Section 3.2.1, were uploaded by users and were chosen as our second study dataset. For the ground truth data, we selected gas station POI data obtained from the National Geomatics Center of China. There were 820 gas stations in our study area, and the data included location, name, address, type, and telephone.
Refueling Stop Event Extraction Experiments and Evaluation
In this section, two groups of experiments were designed to test the performance of the VSLC through a comparison with three other algorithms: K-Medoids, DJ-Cluster [24], and CB-SMoT [25].
The details of the two trajectory datasets are presented in Table 1, where the "stops" column gives the number of manually labeled stops in each dataset. Dataset 1 contained 50 taxis' data over seven days; the trajectory data were relatively complete and the amount of data was large. Dataset 2 was selected from Dataset 1 and only contained one day's trajectory data from five taxis. To evaluate the correctness of the results and the parameter sensitivity, the Precision (P), Recall (R), and F_Measure (F) [25] of the results were calculated using manual extraction as the evaluation reference. The parameters of the VSLC were determined by the discussion in Section 4.2.2, while the parameters of the other three algorithms were selected according to the optimal values of the Precision, Recall, and F_Measure. The refueling stops for the other three algorithms were obtained from the extracted stops by a visual approach, as these methods cannot extract them automatically. The results of the first experiment are listed in Table 2. From the table, K-Medoids, DJ-Cluster, and CB-SMoT were more time-consuming than the VSLC algorithm due to their complex computation. In terms of precision, the VSLC was over 0.8, which was significantly higher than the other algorithms. In terms of recall, the VSLC was slightly improved compared with the others. Table 3 shows the second experiment's results on Dataset 2. As shown in Table 3, the results of the different algorithms were generally in accordance with the results of the first experiment. The VSLC had high precision and recall. The reason for this is that the VSLC algorithm can extract stops from sparsely sampled trajectories, while the other three algorithms are suitable for densely sampled trajectories. In addition, the other three algorithms could not extract refueling stops from trajectories directly, which is the main purpose of the proposed algorithm. Based on the experimental analysis of the different refueling stop models, the results of the comparison between the VSLC and the other three algorithms are shown in Figure 7. In Figure 7a, as the vehicle leaves a large number of stop points in the gas station, the VSLC, CB-SMoT, and DJ-Cluster algorithms could identify this kind of stop, but the K-Medoids algorithm may separate one stay into two stops. Figure 7b shows that many stop points are left when the vehicle enters the gas station, but few when it leaves; all four algorithms could identify this type of stop. In Figure 7c,d, the vehicle left few stop points in the gas station, and the K-Medoids and DJ-Cluster algorithms could not extract this kind of stop because the two algorithms are based on density clustering. Although there are only three stop points of traj2 in Figure 7c, this kind of stop could be identified and the stop trajectory could be extracted by the VSLC. The CB-SMoT algorithm could also partially identify this kind of stop, but the method is very sensitive to noise and missing GPS points, as it considers only speed when generating stops. Fortunately, the VSLC algorithm is more adaptable to noise and missing trajectory points than CB-SMoT because it considers the stop and move durations. For example, p2 in Figure 7d was GPS noise, and the trajectory segments p1p2 and p2p3 were correctly identified as stop trajectory segments by the VSLC because the time duration of p1p3 was less than MinMove. Naturally, a refueling event cannot be detected by any of the four methods when no stop track points are left even though the vehicle has a refueling event.
Parameter Setting and Evaluation
A set of experiments was conducted to evaluate the influence of the three key parameters on the performance of the VSLC. The first experiment evaluated the influence of MaxAv with MinMove and MinStop fixed at their defaults. In Figure 8a, more stops were likely to go undetected when the threshold decreased, as low-speed track segments were identified as moving trajectory; conversely, there were more false positive stops when the threshold increased. In the following two experiments, MinMove and MinStop were varied while MaxAv/MinStop and MaxAv/MinMove were fixed at their defaults. In Figure 8b, one stop was divided into multiple stops, causing a reduction in accuracy, when MinStop was too short; and some stops could not be extracted, leading to a reduced integrity rate, when MinStop was too long. As MinMove decreased, more separated stops were likely to occur and track noise was difficult to process. In Figure 8b, once MinMove increased beyond 120 s, the parameter had little influence on the algorithm's results. To summarize, MaxAv and MinStop had a greater influence on the detection results than MinMove. According to our experiments, the VSLC achieved the desired result when MaxAv was 1-2 m/s, MinStop was 240-320 s, and MinMove was 80-120 s.
Parameters Setting and Evaluation
A set of experiments were conducted to evaluate the influence of three key parameters on the performance of the VSLC.The first experiments tried to evaluate the influence of MaxAv when MinMove and MinStop were fixed to the default.In Figure 8a, it was very likely to cause more undetected stops when the threshold decreased as the low speed track segments are identified as moving trajectory.Conversely, there were more false positive stops when the threshold increased.In the following two experiments, MinMove and MinStop varied but MaxAv/MinStop and MaxAv/MinMove were fixed to the default.In Figure 8b, the one stop was divided into multiple stops causing a reduction in the accuracy when the MinStop was too short; and some stops could not be extracted leading to a reduced integrity rate when the MinStop was too long.As the MinMove decreased, it was more likely to cause more separated stops and was difficult to process the track noises.In Figure 8b, as the MinMove increased, and the MinMove was larger than 120s, the parameters have little influence on the algorithm results.To summarize, MaxAv and MinStop had a greater influence on the detection results than the MinMove.According to our experiments, the VSLC achieved a desired result when the MaxAv was 1~2 m/s, the MinStop was 240~320 s, and the MinMove was 80~120 s.
Gas Station Spatial Information Extraction from Vehicle GPS Trajectories
After preprocessing the raw vehicle traces to remove out-of-range records, speed anomalies, etc. [29], there were about 77,028,505 trajectory lines over 15 days. Following the parameter analysis in Section 4.2.3, the VSLC thresholds were set to a speed threshold MaxAv of 6 km/h, a MinStop of 300 s, and a MinMove of 120 s. Figure 9a shows that about 132,529 stops were extracted from one day of trajectory lines by step 4 of the VSLC, and then about 18,686 refueling stop tracks were extracted by step 5 of the VSLC, as shown in Figure 9b,c. The extracted stop tracks exhibit two patterns in Figure 9c: a clustered distribution and a scattered distribution of single track lines. Gas station polygons were extracted by the constrained Delaunay triangulation in Figure 9d. Non-gas-station polygons, namely those with small areas or with refueling tracks that did not share the same direction, were eliminated according to the GS model in Section 3.1.2, as shown in Figure 9e. Finally, 664 gas stations were extracted from 15 days of taxi GPS trajectory lines in Beijing, as shown in Figure 9f. The extracted gas station POI points were distributed between the Fourth Ring Road and Sixth Ring Road and along the roads in Figure 9f, while a few gas stations were extracted in the suburbs (outside the study area) of Beijing.
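The polygon-extraction step can be approximated as below, assuming SciPy is available: build a Delaunay triangulation over the refueling stop points, delete "global long edges", and take the remaining connected components as candidate station clusters. The mean-plus-two-standard-deviations cutoff is an assumed stand-in for the paper's constraint rule, not its exact formulation.

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_clusters(points):
    """points: (n, 2) array of projected stop-point coordinates in metres.
    Returns lists of point indices, one list per candidate station cluster."""
    points = np.asarray(points, dtype=float)
    tri = Delaunay(points)
    edges = set()
    for a, b, c in tri.simplices:                 # unique triangulation edges
        for u, v in ((a, b), (b, c), (a, c)):
            edges.add((min(u, v), max(u, v)))
    lengths = {e: float(np.linalg.norm(points[e[0]] - points[e[1]]))
               for e in edges}
    vals = list(lengths.values())
    cut = np.mean(vals) + 2.0 * np.std(vals)      # assumed long-edge rule

    parent = list(range(len(points)))             # union-find over short edges
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (u, v), length in lengths.items():
        if length <= cut:
            parent[find(u)] = find(v)

    clusters = {}
    for i in range(len(points)):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())
```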
Gas Station Attribute Semantic Information Extraction from Dianping
Figure 10a shows the gas station points and the attribute information extracted from Dianping, including name, address, telephone number, and opening time. A total of 702 gas stations were extracted from the Dianping website, and all of them had names and addresses; however, only 392 had a telephone number and only 168 had opening times. Then, all the comment text of each gas station was crawled from Dianping and taken as the text corpus of that station. Following Section 3.3, after preprocessing the review text, we extracted feature-opinion word pairs, merged feature dimensions, and constructed a tripartite graph to calculate the dimension sentiment scores, using sentences as the analysis units and a Python NLP tool. In Figure 10b, the dimension sentiment scores of each gas station are visualized by a bar graph, which also reflects users' emotions and experiential perceptions of a place. The sentiment semantic information in Figure 10b can provide suggestions and recommendations for gas station management. However, some gas stations had none or only part of the dimension sentiment information because of sparse and incomplete comment text (Figure 10b). Although the total number of gas stations extracted from social media data is higher than that extracted from the taxi tracks, the correctness was lower, and the results included some duplicate points (Figure 10b) and outdated points, as stated in Section 4.3.3.
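A toy version of the dimension-score computation is sketched below: feature-opinion pairs extracted per sentence are mapped to merged dimensions through a dictionary, and each dimension score is the mean sentiment of its pairs. Both the feature-to-dimension mapping and the score range are illustrative assumptions.

```python
from collections import defaultdict

FEATURE_TO_DIMENSION = {          # merged feature dimensions (assumed)
    "queue": "service", "staff": "service",
    "price": "price", "92#": "product", "diesel": "product",
    "toilet": "environment", "lighting": "environment",
}

def dimension_scores(pairs):
    """pairs: iterable of (feature_word, sentiment_score in [-1, 1])."""
    buckets = defaultdict(list)
    for feature, score in pairs:
        dim = FEATURE_TO_DIMENSION.get(feature)
        if dim:
            buckets[dim].append(score)
    return {dim: sum(v) / len(v) for dim, v in buckets.items()}

# e.g. dimension_scores([("staff", 0.8), ("queue", -0.4), ("price", 0.2)])
# -> {"service": 0.2, "price": 0.2}
```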
Final Gas Station POI Map and Evaluation
The spatial and attribute semantic information extracted from the different sources were fused by constructing a 50 m buffer around each extracted gas station point. After 86 gas stations, including duplicate and outdated points, were filtered out by matching detection, the POI data of 742 gas stations were enhanced by the vehicle GPS trajectories and the social media data to construct the gas station POI map shown in Figure 11a. The enhanced POI information included both spatial and attribute semantic information (name, address, telephone, opening time, and dimension evaluation). Attribute information of the gas stations, such as names, is displayed in Figure 11b; a small number of gas stations had no names due to the limitations of the data. Users' differing perceptions of the gas station feature dimensions are shown in Figure 11c,d, and the gas stations can be classified by dimension sentiment score. Taking the service dimension as an example in Figure 11d, we discovered the spatial distribution of the gas stations with the worst or the best service, which can provide suggestions and recommendations for management. Moreover, gas stations can be divided into diesel, gasoline (including 90#, 92#, 93#, 95#, 97#, 98#), charging, and mixed types according to the extracted product feature words.
As a qualitative evaluation of the results, the generated gas station points were overlaid on a Google Earth image of the corresponding area and checked for correctness by visual inspection, as shown in Figure 12a. Of the 742 extracted gas stations, 694 were correct and 48 were wrong, giving an accuracy of 93.5%. There were 820 gas stations in the study area; 126 were not extracted, so the integrity rate of the results was 84.6%. Some gas stations were wrong or un-extracted for two reasons. First, the limitations of the GPS trace data: extracting a gas station requires a certain amount of traces, and with only 15 days of taxi GPS traces some gas stations could not be extracted. Using long time series of vehicle tracks or traces from multiple vehicle types could solve this problem in future research. Second, the method for mining social media data should be improved: the POI data of a single website were neither complete nor up to date, and a few duplicate and outdated points could not be detected by our method. Therefore, future work including multi-web data fusion and deeper mining of comment text to perceive the dynamic changes of POIs needs further study.
To quantitatively evaluate the location accuracy of the gas station points, buffer zones with different radii were established to match the extraction results against the ground truth data. The numbers of gas stations falling into each buffer zone were counted, and the statistical results are shown in Figure 12b. The share of gas stations within the high-precision range of 5 m reached 52.02%, and the share within 30 m reached 85.58%. For some gas stations, the positional accuracy was worse than 30 m owing to GPS positioning errors, including trajectory drift or noise, and the amount of track data. A few gas stations could not be matched to the reference data, which may have been caused by POI demolition or the incompleteness of the reference data; this also indicates that the result can be used for POI data updating.
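The buffer-zone evaluation in Figure 12b can be reproduced with a few lines, assuming both point sets are in a projected metric coordinate system: for each extracted station, take the distance to its nearest ground-truth station and report the share matched within each radius.

```python
import numpy as np

def buffer_accuracy(extracted, truth, radii=(5, 10, 15, 20, 25, 30)):
    """Share of extracted stations whose nearest ground-truth station
    lies within each buffer radius (metres)."""
    extracted = np.asarray(extracted, dtype=float)   # (n, 2) in metres
    truth = np.asarray(truth, dtype=float)           # (m, 2) in metres
    # pairwise distances, nearest ground-truth station per extracted point
    d = np.linalg.norm(extracted[:, None, :] - truth[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return {r: float((nearest <= r).mean()) for r in radii}
```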
The extracted attribute data were compared with the reference data to check their correctness, as shown in Table 4. Table 4 shows that the attribute information extracted by our method had higher accuracy but lower integrity. Moreover, our method extracted the opening times and sentiment semantic information, which are difficult to obtain from traditional authoritative data. However, some attribute data and sentiment information could not be extracted, resulting in incomplete data, for two reasons. One was that new POIs from the GPS traces were added without attribute data because the social media data had not been updated; although the data were incomplete, they could still indicate where new POIs exist and guide field surveys. The other was that a few gas stations had no review text from which to extract semantic information.
Discussion
As our evaluation results indicated, this novel method of taking vehicle tracks and social media data as input for enhancing gas station information, including spatial, attribute, and sentiment semantic information, was validated as an effective approach. Compared with collaborative mapping programs, our method requires less manual work from contributors. However, since both trajectory data and text data are complicated, the following problems are worth discussing:
•
First, some gas stations were not extracted because only taxi GPS traces were used. Multiple sources of trajectory data (e.g., car or electric vehicle GPS traces) are needed to detect more refueling stop events. How to accurately detect refueling events and extract gas stations from different sources of trace data, whose positioning accuracy and sampling frequency may vary across datasets, is still a challenge [32].
•
Second, mining unstructured comment data is a significant challenge. In this work, sentiment information was mined by constructing an emotion and feature dictionary, using sentences as the analysis unit; however, this requires prior knowledge and manual work. Further work should improve the sentiment semantic mining method, for example by using Deep Learning [33] techniques.
•
Third, the attribute semantic information in the experimental results was still incomplete. As social UGC data are contributed by non-professional volunteers, some gas stations had no attributes or comments, or only part of the attributes, so the extracted attribute semantic information was incomplete and inaccurate. Multi-web sources and multimodal data (such as video and photos) need to be fused to extract more detailed attribute data [6,33].
•
Fourth, hidden information should be mined, such as the detection of outdated POIs, POI demolition, POI temporary maintenance, and POI change relations [6]. In this paper, using the time information of the comments and vehicle tracks was far from enough to detect outdated POIs. Identifying outdated or emerging POI relations is an important future task for enhancing POI data quality. Moreover, UGC review data and other VGI data should be integrated to perceive the temporal dynamics of POI place attribute semantics in the future.
Conclusions and Future Work
A POI database that contains spatial information (such as locations) and attribute semantic information (such as names, addresses, and descriptions) to support user queries is especially useful. However, obtaining up-to-date, high-quality, and semantically rich POI data with a low-cost and fast method is very challenging. Currently, VGI data (e.g., GPS trace data, social media data) open a new horizon for POI data extraction, updating, and enhancement. However, each single type of VGI data has its own drawbacks: GPS track data lack semantic information, while social media data can be outdated and vary significantly in quality and accuracy. Multi-sourced VGI data should therefore be fused to enhance POI information. This article developed a new approach to enhance and update gas station data by coupling sparse trajectory data with social media data. The method was validated and evaluated using taxi GPS trace data and social media data from Dianping in Beijing. The results showed that the proposed method improved the coverage and richness of gas station information compared with the authoritative data, and demonstrated a feasible way to handle VGI data for data enhancement and updating.
Furthermore, some problems remain to be tackled to improve the usability of the proposed method. First, the complexity of social media text data calls for newer techniques, such as machine learning and deep learning, to improve accuracy. Second, how to evaluate and analyze the data quality of different sources to support decisions on multi-source data fusion is an important future task. Although much more work remains to be done, this work lays the foundation for automatically enhancing POI information from VGI data and can also be applied to POI map updating.
Figure 1. General workflow of gas station information enhancement.
3.1. Spatial Information of Gas Station Extraction from Sparse Vehicle Trajectory Data
3.1.1. Refueling Stop Behavior Analyzing and Modeling Using Movement Parameters
Figure 2. Analysis of trajectory features of an individual refueling stop track. (a) Refueling stop track of an individual vehicle; (b) Stop/Move pattern of refueling stops; (c) Average velocity and velocity change; (d) Direction and direction change.
Figure 3. Refueling stop sub-tracks extraction by the Velocity Sequence Linear Clustering (VSLC) algorithm.
Figure 4. Refueling stop track lines clustering and gas station information extraction. (a) Constructing a Delaunay triangulation (DT) within interpolated stop tracks; (b) Removing global long edges of the DT; (c) Non-gas-station deletion by the gas station model; (d) Gas station polygon and point extraction.
Figure 5. Dimension semantic information of gas station extraction. (a) Review text of a gas station; (b) Extracted feature-opinion word pairs; (c) Sentiment score calculation for each sentence; (d) Tripartite graph construction for dimension sentiment.
Figure 7. Comparison analysis of the refueling stop extraction results of the different algorithms. (a) Vehicle leaves many stop points; (b) Vehicle leaves many stop points only when entering the gas station; (c) Few stop points and missing track points; (d) Trajectory noise.
Figure 8. The precision and recall of different parameter thresholds. (a) Analysis of the MaxAv parameter; (b) Analysis of the MinStop and MinMove parameters.
Figure 9. The process of gas station extraction and the results. (a) Extracting stops from one day of vehicle traces by step 4 of the VSLC; (b) Refueling stop extraction from Figure 9a by step 5 of the VSLC; (c) Extracted refueling stops (part); (d) Constructing the Delaunay triangulation using collective stop tracks; (e) Gas station polygon extraction; (f) Experimental results.
Figure 10. The results of the attribute and sentiment semantic information extraction from Dianping. (a) Gas station point of interest (POI) and attribute information extracted from Dianping; (b) Dimension sentiment score visualization using bar graphs (part).
Figure 11. Final gas station POI point map. (a) The overall results after fusion; (b) Attribute information of gas stations after fusion; (c) Gas station classification by environment score (only for POIs with score information); (d) Gas station classification by service score.
Figure 12. Experimental results analysis and evaluation. (a) Gas station point overlaid on remote sensing images (ID = 556); (b) Accuracy evaluation result of gas stations.
Table 1. Details of different datasets.
Table 2. Results of different stop extraction algorithms for Dataset 1.
Table 3. Results of different stop extraction algorithms for Dataset 2.
Table 4. Evaluation of attribute and semantic information of the gas station.
|
2018-08-18T21:15:57.469Z
|
2018-05-08T00:00:00.000
|
{
"year": 2018,
"sha1": "9cd558e71902e0b2f697e2236ac802578b7145b4",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2220-9964/7/5/178/pdf?version=1525788633",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "65b726dfdbe7aa0727cb022d68ae3fbee8396ff2",
"s2fieldsofstudy": [
"Computer Science",
"Engineering",
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
250370355
|
pes2o/s2orc
|
v3-fos-license
|
The Relationship Between Caffeinated Coffee and CVD Risk as well as Blood Pressure
Aspects of cardiovascular disease (CVD), especially its predisposing factors, have been studied for a long period. Recent studies have largely concentrated on coffee intake by elderly individuals with type 2 diabetes and its association with CVDs. Others have looked at the potential of coffee and tea in preventing CVD. The association between intake of caffeinated coffee and heart valve disease among the elderly has been another area of interest. However, there is still a research gap in this area, which is why this review was conducted to explore the effects of caffeinated coffee on human cardiovascular function as well as the association between coffee consumption and blood pressure levels. The review concludes that caffeinated coffee consumption is associated with a lower risk of CVD; however, the association between caffeinated coffee intake and blood pressure needs further study.
Introduction
The association between the intake of coffee and the occurrence of CVD and blood pressure in populations has been a subject of interest for researchers for many years. Despite the extent of research, there are multiple knowledge gaps that require examination, especially due to contradictory or unspecified research findings on the topics. This paper explores both aspects of the relationship of coffee intake to CVD and blood pressure. CVD is a leading cause of death globally and its risk factors include hypertension, smoking, high cholesterol levels, diabetes, inactivity, and family history of CVD. From the current research, coffee is not a known risk factor for CVD.
An examination of coffee consumption patterns reveals that Europe and the Americas prefer coffee, while the other parts of the world consume more tea than coffee. The major component of coffee, caffeine, is known to increase alertness and attention, reduce the risk of contracting diabetes, and increase the metabolic rate. Caffeine also causes addiction, anxiety, diuresis, and reduced control of fine motor movements. However, there is no evidence that caffeine increases the risk of CVD. Other key components of coffee, such as antioxidants, diterpenes, and various minerals, do not increase the risk of CVD. Studies also show that coffee intake can cause a short-term rise in blood pressure, but the effect subsides as the coffee concentration in the body declines.
CVD risk factors
The term cardiovascular diseases (CVDs) refers to a collection of disorders of the blood vessels and the heart. They include, but are not limited to, coronary heart disease, peripheral arterial disease, cerebrovascular disease, rheumatic heart disease, deep vein thrombosis, and congenital heart disease. Being among the top three leading causes of death, CVD needs to be monitored and analyzed to decrease its risk in the community. The global number of CVD cases increased from 271 million in 1990 to over 523 million in 2019, indicating the high prevalence of these diseases. The figures represent a 93% increase in CVD cases over a 20-year period. Further, there were over 18.6 million CVD-related deaths in 2019. Coffee remains one of the most popular drinks in the modern world and may therefore be associated with the risk of CVD; it is thus crucial to establish the association between these two factors.
High blood pressure, or hypertension, is the main risk factor for CVD. Very high blood pressure can damage an individual's blood vessels, thereby causing one or more of the CVDs. Smoking has also been described as a major risk factor for CVD: the nicotine and other harmful substances contained in tobacco can narrow blood vessels and thereby damage them. High levels of cholesterol, a fatty substance contained in a person's blood, can increase the risk of CVD. Cholesterol is associated with the narrowing of blood vessels, a situation that increases the likelihood of developing a blood clot, which can cause a stroke. Diabetes is known to be an important CVD risk factor, although the biology behind the relationship between the two disorders is highly complex. The effect of diabetes on the heart muscle, which later causes diastolic and systolic heart failure, is the main cause of CVD in diabetic individuals. High blood sugar linked to diabetes damages the inner lining of blood vessels; in response, the body deposits plaque along the injured vessels, leading to the narrowing of the blood vessels.
Inactive individuals are at high risk of suffering from CVD. People who do not engage in regular exercise are likely to have high cholesterol levels, high blood pressure, and excess weight. A high body mass index, over 25, worsens CVD risk factors such as blood pressure, blood sugar, and inflammation. Although still under investigation, researchers have suggested that individuals with a family history of CVD are likely to develop one of the CVDs in their life, indicating that genetic factors may come into play. A genetic predisposition to CVD increases the likelihood of suffering from the diseases irrespective of other factors. Additional CVD risk factors include older age, gender, unhealthy diet, alcohol consumption, and impaired kidney function. An unhealthy diet and alcohol can increase body weight as well as cholesterol levels. Men are also more likely to suffer from CVD than women.
Impairment of renal function increases the risk of CVD two- to four-fold. There is a close relationship between CVD and kidney disease, and the occurrence of disease in either organ increases the probability of dysfunction in the other. CVD is a leading cause of mortality in end-stage kidney disease patients. The occurrence of CVD in kidney disease patients is linked to disturbances in mineral and vitamin D metabolism. The Fibroblast Growth Factor 23 hormone, involved in vitamin D metabolism, is a key factor in the occurrence of CVD in kidney disease patients. Moreover, the occurrence of CVD in patients with kidney diseases can be attributed to the release of kidney hormones, cytokines, and enzymes that cause changes in the blood vessels. Besides, hemodynamic alterations and chronic kidney disease mediators can cause cardiac malfunction. The general information about CVD risk factors indicates that those interested in reducing their risk should avoid tobacco, consume less fat and salt, consume more fruits and vegetables, and exercise regularly. Figure 1 below shows the global prevalence of CVD.
Coffee consumption patterns
Coffee is a popular beverage across the globe. In 2020/2021, over 166 million 60-kg bags of coffee were consumed globally. Researchers have found that the average daily caffeine intake is 240 mg, which translates to about 150 ml of instant coffee. One-third of the world's population consumes this amount or more every day and can be described as caffeine dependent. Some studies report that coffee consumption patterns are similar across genders and ages, although others note that younger individuals consume more coffee than older persons, as the latter consume more tea. Coffee is the preferred drink in the Americas and Europe, while tea is preferred in other parts of the world. Finland, Sweden, Iceland, Norway, and Denmark are the top five consumers of coffee, as shown in Figure 2 below.
As evident from Figure 2, a single person in Finland consumes 12 kg of coffee every year; the figures stand at 9.9 kg in Norway and 9 kg in Iceland. The figures are lower in the USA and the UK, where the average consumption per person is 4.2 kg and 2.8 kg, respectively. Overall, consumption is higher in Europe than in North America. The differences are due to a stronger coffee culture in Europe than in America: in Europe, coffee consumption is a highly social activity that often happens in roadside cafes, whereas in the USA most people consume coffee as a habitual way of stimulating the mind rather than as a social activity. In the United States, over 70% of people drink coffee weekly, while 62% consume coffee daily. The average consumption is 3 cups per day, and most people prefer to get coffee from drive-through stores rather than preparing it at home. Further, the most preferred coffee beverages are espresso, lattes, cappuccinos, and flat whites. Coffee consumption in the United States is integrated into the busy work culture, in comparison to the extended coffee breaks that are common in Europe.
Caffeinated coffee contains a range of compounds that contribute to the physiological effects associated with coffee as well as to its unique flavor. Figure 3 below shows the common compounds found in coffee products.
Although, as indicated in Figure 3, numerous compounds are present in coffee products, only a few of them have a significant effect on individuals. Caffeine is the main pharmacologically active component in coffee and is known to stimulate the central nervous system. Most of the other components are destroyed during the roasting process.
Caffeine consumed in beverages is absorbed quickly from the gastrointestinal tract and released into the body water. Further, the liver does not filter out caffeine as it passes from the intestinal tract into the blood circulatory system. Caffeine binds reversibly to plasma proteins, and protein-bound caffeine amounts to 10-30% of all caffeine in the plasma. Caffeine is hydrophilic and moves freely in the intracellular tissue water; the average distribution volume of caffeine in the body is 0.7 L/kg. Moreover, caffeine is lipophilic, which enables it to pass through biological membranes. The body eliminates caffeine through first-order kinetics. Caffeine metabolism occurs in the liver, catalyzed by microsomal enzyme systems, and the products of metabolism include uric acids, uracil derivatives, dimethylxanthines, and trimethylallantoin. The primary metabolite in humans is paraxanthine, which is excreted through urine. Paraxanthine and caffeine cause an increase in the plasma concentration of epinephrine, higher diastolic blood pressure, and free fatty acids. Caffeine is usually metabolized completely, and only very small amounts are excreted unchanged in urine.
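As a worked example of these first-order kinetics, assume an 80 mg dose (roughly one cup) in a 70-kg adult with a 5-hour half-life (an assumed value within the 3-6 hour range reported in the literature); the initial plasma concentration and the decay constant then follow directly:

```latex
% Assumed: D = 80 mg, body mass m = 70 kg, V_d = 0.7 L/kg, t_{1/2} = 5 h.
\[
C_0 = \frac{D}{V_d \, m} = \frac{80\ \mathrm{mg}}{0.7\ \mathrm{L/kg} \times 70\ \mathrm{kg}}
    \approx 1.6\ \mathrm{mg/L}, \qquad
C(t) = C_0 e^{-kt}, \quad
k = \frac{\ln 2}{t_{1/2}} = \frac{0.693}{5\ \mathrm{h}} \approx 0.14\ \mathrm{h^{-1}}.
\]
```

Under these assumed values, roughly 0.7 mg/L would remain in plasma six hours after ingestion.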
Caffeine increases alertness and attention, reduces fatigue, lowers the risk of contracting diabetes, and increases the metabolic rate. Further, caffeine helps to regulate body weight by increasing energy expenditure while decreasing energy intake, and it promotes weight maintenance through fat oxidation and thermogenesis. An average cup of coffee contains about 75-100 mg of caffeine.
Research shows that intake of up to 400 mg of caffeine a day raises no safety issues in non-pregnant adults. However, pregnant and breastfeeding mothers are advised to take a maximum of 200 mg of caffeine per day. Caffeine readily crosses the placenta and reaches concentrations in the fetus similar to those in the mother's body. Research implies that excessive coffee consumption among pregnant women can result in impaired fetal growth and spontaneous abortion. Caffeine also has multiple negative effects on adults, including addiction and increased anxiety, stronger stimulation of urination, reduced control of fine motor movements, and greater vasoconstriction. Caffeine inhibits fine motor movements by affecting the sodium-potassium-adenosine triphosphate pump activity, which leads to decreased potassium concentrations in the plasma; as a result, it affects the depolarization-repolarization process, reducing fine motor coordination. Ceasing caffeine use causes various withdrawal symptoms, including irritability, headache, nervousness, and reduced energy. Caffeine causes a slight reduction in calcium absorption in the intestinal tract; it inhibits the uptake and storage of calcium in striated muscles and increases the translocation of calcium ions through the plasma membrane. Among the elderly, consumption of a maximum of 2-3 cups of coffee a day, coupled with regular intake of calcium and vitamin D, may help to reduce the incidence of osteoporosis and bone fractures. In children, caffeine can cause nervousness and anxiety; therefore, the recommended maximum coffee consumption for children is 2.5 mg/kg of body weight per day. The fatal oral dose of caffeine is about 10-14 g. Oral dosages of up to 150 mg cause vomiting and convulsions, with recovery occurring within 6 hours. Dosages of 1 g can also cause nervousness, restlessness, irritability, emesis, delirium, neuromuscular tremors, increased respiration, and tachycardia.
Antioxidants are another key component of coffee; they include melanoidins, caffeic acid, and chlorogenic acids, and they help to deactivate oxidants. Studies have concluded that the level of antioxidants in the blood increases every time a person consumes coffee. These antioxidants have diverse effects on the body, and research on their potential roles is ongoing. Chlorogenic acids influence lipid and glucose metabolism and are also anti-inflammatory, anti-carcinogenic, and anti-obesity agents. Moreover, the degree of roasting determines the level of antioxidant activity in coffee, with medium-roasted coffee having the maximum level. Diterpenes are naturally occurring compounds in the oil found in coffee. A high intake of the diterpenes cafestol and kahweol can increase low-density lipoprotein cholesterol ("bad" cholesterol); however, using paper filters eliminates diterpenes from coffee, thus reducing their effect on serum cholesterol. Cafestol and kahweol have anticarcinogenic properties: they help to inhibit the activity of aflatoxin B1 in body cells, reduce the genotoxicity of multiple carcinogens, and cause apoptosis by controlling the expression of certain proteins in malignant pleural cancer. Additional compounds are found in coffee and are said to form during storage; they include, but are not limited to, furan, acrylamide, and ochratoxin A. Other components of coffee include phenols, potassium, lactones, niacin, and the vitamin B3 precursor trigonelline. Overall, coffee contains over 1,000 phytochemicals in varying quantities. Coffee has minimal energy content due to the low amounts of carbohydrates, fat, and protein in its composition; however, it contains many vitamins and minerals, including manganese, iron, sulfates, lanthanum, cesium, bromine, calcium, sodium, magnesium, copper, zinc, strontium, barium, nickel, cobalt, lead, cadmium, scandium, rubidium, and phosphorus. Coffee also contains amino acids such as glutamic acid, arginine, alanine, cysteine, asparagine, isoleucine, glycine, histidine, leucine, proline, lysine, serine, methionine, phenylalanine, tyrosine, threonine, and valine. In general, coffee consumption has an inverse relationship with the occurrence of various diseases such as Alzheimer's, liver damage, Parkinson's, and various cancers. Coffee consumption increases endurance in long physical activities and is reported to reduce suicide risk by 13% for every cup consumed. Evidently, coffee has multiple benefits for its consumers.
Association between caffeinated coffee consumption and CVD
A 2012 systematic review coupled with a dose-response meta-analysis of several prospective studies concluded that there is a J-shaped association between coffee intake and heart failure. Compared with individuals who did not consume coffee, the most significant inverse relationship was observed at four cups per day, and higher risks were reported for those consuming more than four cups a day. There is a significant inverse relationship between coffee intake and the risk of CVD mortality, particularly in women. However, a study by Liu et al. concluded that there is a positive relationship between coffee intake and mortality rate in adults below 55 years of age. Another study, by Rebello and van Dam, found no association between coffee intake and the risk of coronary heart disease; they add that there is a weak relationship between coffee intake and a lower risk of heart failure and stroke. The same study concluded that coffee intake is not associated with an increased risk of fatal cardiovascular events.
Individuals who consume 3-5 cups of caffeinated coffee per day reduce their risk of CVD by 15%. The study by Rodríguez-Artalejo and López-García concluded that those who consume more than 3-5 cups have no elevated risk of CVD. Another study indicated that regular coffee consumption lowers the risk of cardiovascular death as well as a range of adverse cardiovascular outcomes, such as stroke, congestive heart failure, and coronary heart disease. Again, habitual intake of 3-4 cups of coffee every day was not associated with positive or negative effects on hypertension and arrhythmias. Voskoboinik, Koh, and Kistler argue that moderate consumption of tea and coffee can have beneficial effects on different cardiovascular conditions, such as arrhythmias, heart failure, and coronary heart disease.
Association between caffeinated coffee consumption and blood pressure
Geleijnse concluded that there is a U-shaped or linear relationship between habitual coffee intake and blood pressure in diverse populations. Although further investigation is needed to confirm the results, it is suggested that coffee intake has protective effects against hypertension, especially in women who take at least four cups per day. It is not yet clear whether abstainers have a higher or lower risk of hypertension than people who take one or two cups per day. Randomized controlled studies concluded that those who take about five cups every day may experience a slight increase in blood pressure compared with those who take decaffeinated coffee or who abstain from coffee. Intake of 200-300 mg of caffeine is said to lead to an 8.1 mmHg increase in systolic blood pressure and a 5.7 mmHg increase in diastolic blood pressure. The increase is observed within one hour of caffeine intake and lasts for about 3 hours. Caffeine increases coronary blood flow in the heart and provides antiasthmatic effects by relaxing the smooth muscles of the lungs and dilating the bronchi. Caffeine is a well-known natural methylxanthine alkaloid. Methylxanthines stimulate heart and kidney function, excite the central nervous system, act as bronchodilators, and promote the psychical and physical activities of organisms. Around 99% of the caffeine in coffee is absorbed after ingestion, and the blood concentration peaks after 60 minutes. The half-life of caffeine in human adults is 3-6 hours. Studies following participants for at least two weeks found no significant increase in blood pressure. Caffeine increases blood flow and renin secretion, and the renin hormone increases blood pressure. The conclusion was that coffee intake can cause a temporary increase in the blood pressure of hypertensive individuals. Besides, caffeinated coffee is associated with a greater increase in blood pressure than decaffeinated coffee, as it induces a higher concentration of adrenaline; however, this effect is noted only in people who do not take coffee regularly, not in habitual coffee consumers.
Future directions
There is a lot of information in the public domain about the possible association of specific factors with the risk of CVD; however, some of it is not grounded in any scientific study and should therefore be treated with caution. Most studies have associated tobacco intake, lack of regular exercise, and high cholesterol levels with increased CVD risk. Thus, it is necessary to consider such predisposing factors when making efforts to reduce the risk of CVD. It should be noted that research on the link between these factors and CVD is still ongoing.
Conclusion
Recent studies have indicated that caffeinated coffee consumption is associated with a lower risk of CVD, especially for those consuming about 5 cups per day. However, the association between caffeinated coffee intake and blood pressure needs further study because current studies report unclear associations and conflicting results. As one of the most popular drinks in the modern world, coffee plays an important role in regulating the risk of CVD, which is one of the leading causes of death in China, as well as blood pressure. Results from the consulted studies are not conclusive; hence, further studies are needed to explore the effects of caffeinated coffee consumption on blood pressure and CVD.
|
2022-07-09T15:32:44.056Z
|
2022-06-22T00:00:00.000
|
{
"year": 2022,
"sha1": "21bef335bf5a20e7c9f3005cc3bdc1c9059f3f9d",
"oa_license": "CCBYNC",
"oa_url": "https://drpress.org/ojs/index.php/HSET/article/download/559/497",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "d1fc79220d169cb2b6f61298ebc199f14fe87f0b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
}
|
12949014
|
pes2o/s2orc
|
v3-fos-license
|
Steroid responsive encephalopathy associated with autoimmune thyroiditis (SREAT) presenting as major depression
Background: Hashimoto's encephalopathy is a neuropsychiatric disease with symptoms of cognitive impairment, stroke-like episodes, seizures, and psychotic or affective symptoms associated with autoimmune thyroiditis and excellent steroid responsiveness; therefore, it is also called "steroid responsive encephalopathy associated with autoimmune thyroiditis" (SREAT). Case presentation: We present the case of a 50-year-old woman who developed a first-onset depressive syndrome with predominant cognitive impairment and inability to work. Antidepressive treatment and cognitive behavioral therapy over two years were unsuccessful. Neurological examination was unremarkable. Serum analysis showed increased thyroid peroxidase and thyroglobulin antibodies. Cerebrospinal fluid protein and albumin quotient were increased. Magnetic resonance imaging depicted unspecific, supratentorial white matter lesions and frontally accentuated brain atrophy. Electroencephalography was normal. Neuropsychological testing for attentional performance was below average. High-dose intravenous treatment with methylprednisolone over 5 days and oral dose reduction over 3 weeks led to the sustained improvement of clinical symptoms. Following discharge from the hospital, the patient returned to work, and 6.5 months after the start of therapy, no neuropsychological deficit remained. Conclusion: This case report illustrates that SREAT might present with purely depressive symptoms, thus mimicking classical major depression. In such cases, corticosteroid therapy may be an effective treatment option.
Background
Hashimoto's encephalopathy is a neuropsychiatric disease with symptoms of cognitive impairment, stroke-like episodes including transient aphasia, tremor, myoclonus, gait disorders, or seizures [1]. It is associated with autoimmune thyroiditis and excellent steroid responsiveness and is therefore also called "steroid responsive encephalopathy associated with autoimmune thyroiditis" (SREAT). Usually, thyroid hormone abnormalities and, in particular, anti-thyroid peroxidase (TPO) and thyroglobulin (TG) antibodies are found. Here, we present the case of a patient with SREAT suffering from typical psychomotor and neurocognitive symptoms of a major depression, while neurological sequelae were absent. This case illustrates that SREAT without neurological deficits and with normal thyroid hormones can mimic major depression. It is important to be aware of this possible association between symptoms of the most common psychiatric disease (major depression) and the most frequent autoimmune disease (Hashimoto thyroiditis), because it implies specific and more causal treatment options, such as corticosteroids.
Case presentation
We present the case of a 50-year-old female receptionist who, early in 2011, developed loss of energy and feelings of exhaustion but without any identified psychosocial stressors. At the end of the same year, she presented with a classical depressive syndrome (suffering from impaired concentration, slowed thinking processes, disturbed memory, low mood, decreased activity, reduced energy, fearfulness, symptoms of demoralization with hopelessness, reduced self-awareness, excessive guilt, sleep disturbance, inability to work, and social withdrawal). For the patient, the adynamia and cognitive impairment were the most debilitating symptoms. Hence, a major depression was diagnosed. Treatment with venlafaxine (up to 112.5 mg; higher doses were not tolerated) plus agomelatine (25 mg) together with cognitive-behavioral therapy was started without sufficient therapeutic success. In the autumn of 2012, occupational reintegration with reduced working hours was initiated, but it soon had to be stopped because of cognitive and also physical exhaustion. The patient would forget things and fail to understand more complex tasks. Reaction times were described as extended; memory was still disturbed, mood depressed, and energy level reduced.
In January 2014, the patient was admitted to our hospital. Her somatic history only showed high blood pressure, which was treated with telmisartan (80 mg), and a history of a clinically remitted lumbar disc prolapse. Thyroxine (T4) substitution had been started at the beginning of 2012 (because TPO and TG antibodies were elevated in an initial outpatient examination, whereas thyroid hormones were normal). On admission to our hospital, 75 μg T4 were taken. The patient had no history of psychiatric disorders prior to 2011, and her familial history was negative for psychiatric, neurological, or autoimmune disorders.
Differential diagnosis
The most important differential diagnosis was major depression, because its main symptoms were present (lowering of mood, reduction of energy, and decrease in activity [www.dsm5.org]). The leading symptoms were cognitive impairments, including forgetfulness (in particular for short-term memory), so pre-senile dementia also had to be considered. Other causes, such as inflammatory brain diseases (e.g., limbic encephalitis) and metabolic disorders, were excluded [2].
Treatment
Intravenous treatment with high-dose methylprednisolone (1000 mg daily over 3 days and 500 mg daily over 2 days) was provided and well tolerated. Methylprednisolone treatment was continued with 40 mg orally and tapered by halving the daily dose every fifth day. Treatment with venlafaxine (112.5 mg), agomelatine (25 mg), and T4 (75 μg) was continued unchanged.
Outcome and follow-up
Directly after the high-dose intravenous steroid treatment, the patient reported reduced cognitive impairment and improved alertness. Neuropsychological testing confirmed this rapid improvement, with reduced response latencies in all attention tasks compared with the measurement before corticosteroid treatment. Basal alertness and processing speed were improved but still below average. After around five weeks, however, mood and energy levels had normalized and the cognitive impairment had disappeared. After three months, the patient was fully reintegrated at work without cognitive deficits. In the follow-up testing of attentional performance (6.5 months after therapy onset), no relevant neuropsychological deficit remained (Fig. 2). No changes were detected in the follow-up MRI (Fig. 1). TPO (440.6 IU/mL; initially 804 IU/mL) and TG antibodies (581.7 IU/mL; initially 661.4 IU/mL) had decreased but still exceeded the reference limit.
Conclusions
SREAT was first described in 1966 [3], and several case reports and series have been described since [1,4,5]. To date, no definite diagnostic criteria have been established for SREAT [6]. Preliminary suggestions remain very vague with respect to the question of which precise psychiatric symptoms and syndromes should lead to the diagnosis when thyroid antibodies are present [7]. Typical SREAT symptoms include not only neuropsychiatric syndromes with behavioral or cognitive abnormalities (in nearly all cases, including ours), but also tremor (80 %), stroke-like episodes including transient aphasia (80 %), myoclonus (65 %), gait ataxia (65 %), seizures (60 %), and sleep abnormalities (55 %) [1]. Only a few case reports have documented predominant depressive symptoms [8][9][10][11][12]. In an overview of five such cases until 2013, only one presented with isolated affective symptoms [8]. A case report from our clinic described a patient with predominant depressive symptoms as well as an epileptic seizure [9]. Laske et al. described a 74-year-old female patient with depression and EEG slowing; after steroid treatment, the affective symptoms normalized in parallel with the EEG [10]. Other psychiatric cases have been associated with neurological signs, such as myoclonic jerks or ataxia [11]. SREAT can also be found in children, often associated with epilepsy, but in single cases also with primarily behavioral presentations [12].
Here we present a unique SREAT case, one that presented clinically only with typical symptoms of a major depression. Remarkably, the levels of T3, T4, and TSH were normal, as in some earlier reports [1,2]. Thus, as pointed out by Castillo et al., it is very unlikely that T3, T4, and TSH are directly involved in the pathophysiology of SREAT. By definition of SREAT, TPO and also TG antibodies are elevated; however, it is known that these levels do not correlate with the relevant symptoms [1]. Antibodies are not considered to be directly involved in the pathophysiology of SREAT, because they can also be found elevated in healthy subjects or in patients with other autoimmune diseases [2]; in line with this, antibodies were still elevated after remission of the depressive symptoms in our patient. The CSF protein level and albumin quotient in our subject were increased, as generally described in earlier reports [1]. EEG findings were normal in our subject. This contrasts with the approximately 95 % of SREAT patients with pathological EEGs in earlier case summaries, in which generalized slowing, indicating diffuse brain dysfunction, was the most common finding [1]. The normal EEG, along with the absence of other neurological deficits in our case, might be a marker of a less severe variant of SREAT or, alternatively, the consequence of an early initial diagnosis. The MRI indicated unspecific changes, as noted in the majority of published cases [1]. In our patient, the presence of TPO and TG antibodies in combination with an excellent response to steroid treatment helped to establish the correct diagnosis after careful exclusion of other relevant disorders.
Fig. 2 Tests for attentional performance measuring reaction time, divided attention, mental flexibility and working memory before (black bar), directly after (grey bar) and 6.5 months after cortisone therapy (shaded bar)
In our view, therefore, this case and literature review illustrate that TPO and TG antibody testing should be performed routinely in depressive patients with atypical symptoms (discrete neurological symptoms such as tremor or myoclonic jerks) and predominant cognitive impairments, because SREAT may manifest with just the symptoms of major depression plus increased thyroid antibodies. Moreover, CSF analysis, EEG, and MRI might be helpful in establishing the correct diagnosis. This is particularly important due to the high association between TPO antibody levels and depressive episodes [13]. In a suspected case of SREAT, a probatory course of steroid therapy is the only way to clarify the diagnosis.
The Students’ Use of Google Classroom in Learning English
This study investigates the students' use of Google Classroom in English language learning. The data were derived from Likert-scale questionnaires, including open-ended questions, distributed to 119 English Education students. Five aspects were covered in the questionnaire: access to Google Classroom, perceived usefulness, communication and interaction, instructional delivery, and students' satisfaction. Meanwhile, the open-ended questions inquired into the students' real experiences. The results showed mean scores with the following distribution: 4.49 for easy access to GC, 3.93 for perceived usefulness, 3.63 for communication and interaction, 4.10 for instructional delivery, and 3.82 for students' satisfaction. Some students shared their experiences in using Google Classroom. Some of them said that Google Classroom brought their courses to them so that they could participate in and continue working on their classes beyond the usual working hours. Many of them even kept working and uploading their assignments until midnight. In spite of these positive findings, the study revealed that some students fell into serious addiction to social media technology.
Introduction
Much of the technological advancement we experience today, if not all of it, occurs in the developed world. Many studies have summarized the benefits of digital technologies in supporting students' learning activities (Martono and Salam, 2017). Digital technology creates rich learning environments that not only allow students to access the information and communication they need, but also provide venues in which they can exercise inquiry learning, critical thinking, creativity, and collaboration (Akcan, 2018; Bond, 2020; Lin et al., 2020). Meanwhile, in developing countries, with wide gaps among them, educational institutions still struggle to embrace technology with full acceptance. Among the reasons are inadequate infrastructure, teachers' acceptance, and students' skills in using technology for learning purposes (Ifinedo, Rikala, & Hämäläinen, 2020; Lucas, 2020).
The students whom instructors teach today belong to the Net generation. It goes without saying that they are very familiar with the use of networked technology that allows them to access information and communication services such as Instagram, WhatsApp, Facebook, Line, and so forth, as well as games and entertainment applications. Being enriched by and familiar with such technology, the students should have extended their learning experiences (Gan, Menkhoff, & Smith, 2015). However, such privileges do not guarantee that learning happens. Educators are urged to compete for the attention of students who are skillful multitaskers leading multiple lives at once (Conrad & Dunek, 2020). Crittenden, Biel, and Lovely (2019) argue that students who use technological devices during teaching and learning in the classroom tend to pay less attention to their teacher's explanation than those who take notes by hand; they are distracted by what appears on their devices. This phenomenon, in fact, indicates the students' inability to use ICT with a learning intention.
The students' ability to use and accept such technologies for learning activities requires instructional design provided by their teachers. In developing countries, some higher education institutions are still working to introduce policies that oblige their educators to integrate technologies into their instructional delivery. In such a situation, both lecturers and students must transform their perception of technology in their educational practices.
On the one hand, lecturers need to upgrade their skills in technology use, particularly those in the group of digital immigrants (Alaniz & Wilson, 2015). Digital immigrants, according to Alaniz and Wilson, are lecturers who are not comfortable employing technology in their instruction. These lecturers need convincing arguments to encourage them to utilize technology. This is in line with Rushby and Surry's (2016) suggestion that investing in technology does not guarantee technology acceptance. They further argue that, particularly in developing countries where technology is not equally installed, there has been little change in traditional educational practices despite great investment in technological infrastructure. That is why it is important to take the initiative to help digital immigrants gain sufficient confidence to integrate technology into their teaching practices.
In a similar vein, the students, mostly digital natives, if not all, are greatly familiar with current technology. They, as described by Mudrikah et al. (2019) and Turkle (2005), are equipped with wearable technological devices. For them, technologies are not simply tools but constitute an environment from which a new culture emerges; for the young generation, living on the screen is inseparable from living in the real world (Turkle, 2011). Situated in technology-rich environments, the students are very much attached to digital devices. The young generation is always on; they are busy with social networking, video entertainment, and online gaming. The challenge is whether the students are skillful in employing technology for learning endeavors.
Their reluctance to use technology for learning purposes could be caused by the insufficient and scarce introduction of technology to drive learning, so that they are not accustomed to using technology in their learning processes. When they are encouraged by their universities to use technology in the learning process, the students need to adapt to that specific use of technology. In other words, it is not that they are unfamiliar with the technology, but rather that they are unfamiliar with utilizing technology for learning purposes.
Google Classroom has nowadays become one of the most popular teaching platforms used by teachers and lecturers. Google Classroom has managed to host over 30 million assignments uploaded by teachers and students. This indicates that the application is an interesting medium for the teaching-learning process that can be brought into education (Iftakhar, 2016).
Google Classroom is a web-based course management system (CMS). It provides a venue for instructional delivery and learning processes in which students obtain education through communication, interaction, and discussion. It is also a platform for teachers to deliver their courses; they can require students to upload work and other assignments. In this way, Google Classroom facilitates teachers and lecturers in creating and organizing assignments, feedback, and communication with their classes (Shaharanee, Jamil, & Rodzi, 2016b). They believe that Google Classroom is a good innovation in teaching because, using this application, educators and students can obtain many benefits, including easy access wherever and whenever they want as long as they have an internet connection. In this way, the students feel a sense of belonging to their courses, as the platform blends with their social engagement (Coffman & Klinger, 2016).
A study found that Google Classroom is an appropriate application for an LMS because it is already linked to university and school systems and it seemingly meets the students' request for a simpler interface allowing more interaction (Heggart & Yoo, 2018). The students just need to obtain the class code from their teachers or lecturers. After receiving the code, they can open Google Classroom by clicking "Join Class", entering the given code, and finally clicking "enter" or "ok". Afterward, the students are successfully enrolled in the class. In addition, educators and students can also interact actively in the application. Google Classroom has features that enable both educators and students to communicate in groups or privately about every task posted by the educators.
Furthermore, Google Classroom can create a new learning environment for the students (Shaharanee et al., 2016a). The students and the educator do not see each other directly, which can encourage the students to ask more questions about the lesson learned in the application. They can also discuss the answers to questions from their friends. The educator controls the questions and answers from the students; when the students have gone off track, the educator can redirect them back to the right path. It goes without saying that, by implementing Google Classroom in English teaching, learners have more space and time to work at an asynchronous pace.
A study showed that the nature of asynchronous technology enables students' interaction and collaboration (O'Rourke & Stickler, 2017). By this means, they obtain socio-affective advantages, including sharing opinions, insights, feelings, and work; establishing networks; helping and motivating each other; and providing scaffolding for each other. The significant conclusion from this study is that asynchronous technology democratizes the classroom process, in which every student can participate, including less participative, unconfident, and shy students. For foreign language students, the studies by Satar and Akcan (2018) and Hew and Cheung (2008) suggested that the implementation of online discussions improved English language development and also increased engagement compared with face-to-face class situations. In other words, the use of asynchronous technology can be a better choice for learning English, as it not only allows students to reach more English-speaking people around the world, but also provides scaffolding for those with linguistic insufficiency (Osborne, Byrne, Massey, & Johnston, 2018).
In terms of learning English as a foreign language, using an asynchronous communication platform like Google Classroom imposes less pressure, thereby encouraging more participation, more collaboration, and more confidence (Satar & Akcan, 2018). Those studies found that student participation increased exponentially in asynchronous environments. The absence of physical attendance reduces the anxiety that usually hinders their participation. In such a situation, applying a course management system in the teaching process is a realistic alternative, and Google Classroom can be the solution to this issue. The current study investigated the students' use of Google Classroom and their experiences of how the application has helped them maintain their English learning.
Method
This study employed a descriptive design. In this design, the investigator does not interfere in the situation from which the data come, as interference could damage the natural process of the phenomena being captured (Cohen et al., 2007). In this way, the design "describes and interprets what is" (Best & Kahn, 2006). They explain that this design can be used to explore students' opinions about a phenomenon, existing processes, and current trends that develop among populations. The current study investigated the students' use of Google Classroom (GC) in their English learning activities as well as their experiences of how it became part of their learning endeavors.
The participants of this study were 119 college students from the English major. They had used Google Classroom for almost every subject they pursued over the last two semesters. The activities in GC varied from subject to subject. In addition, lecturers used GC at different levels of activity, from simple posting of announcements, to uploading materials and assignments, to full blended learning; some used Google Classroom extensively in their course delivery.
The data of this study were derived from two sources: five-point Likert-scale questionnaires (Table 1) and open-ended questionnaires. The former were in five parts with 29 items, developed to reveal the students' access to Google Classroom and its perceived usefulness (Davis, 1989). The questionnaires also included perceptions of the quality of communication and interaction, instructional delivery, and students' satisfaction (Shaharanee et al., 2016a). Meanwhile, the latter were intended to discover the students' meaningful experiences during their study. Using the open-ended questionnaire, the students were required to write a short description of how Google Classroom had helped them maintain their English learning as well as its challenges. All questionnaires were validated by lecturers from the English Education Department who employed Google Classroom to some degree. The data were analyzed to determine the mean score of each item. Then, to identify the verbal interpretation of the range of mean scores, the writers used Bringula's (2012) interval for a 5-point scale (Table 2). The open-ended questionnaires were analyzed using thematic analysis (Creswell, 2012).
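As an illustration of this scoring procedure, the short Python sketch below computes an item's mean score and maps it to a verbal interpretation. Because Table 2 is not reproduced here, the interval cut points are assumed to be the conventional equal-width boundaries for a 5-point scale, and the item responses are invented for demonstration only.

```python
# Minimal sketch of the Likert-scale scoring used in this study.
# The cut points below are ASSUMED equal-width intervals for a 5-point
# scale (Bringula's Table 2 is not reproduced here), and the response
# data are invented for demonstration only.

def mean_score(responses):
    """Mean of integer Likert responses (1 = strongly disagree ... 5 = strongly agree)."""
    return sum(responses) / len(responses)

def verbal_interpretation(score):
    """Map a mean score to a verbal interpretation (assumed intervals)."""
    if score >= 4.20:
        return "Strongly agree"
    if score >= 3.40:
        return "Agree"
    if score >= 2.60:
        return "Neutral"
    if score >= 1.80:
        return "Disagree"
    return "Strongly disagree"

# Hypothetical responses from 10 students for one item, e.g. "Easy to sign in"
item_responses = [5, 5, 4, 5, 4, 5, 5, 4, 5, 3]
m = mean_score(item_responses)
print(f"Mean = {m:.2f} -> {verbal_interpretation(m)}")  # Mean = 4.50 -> Strongly agree
```

In the study itself, the same procedure would be repeated for each of the 29 items and averaged within each of the five questionnaire parts.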
Results
The findings are presented in two parts following the research questions. The first part reports the students' use of the Google Classroom application in learning English. Meanwhile, the second part elucidates the students' experiences in using Google Classroom.
The Students' Use of Google Classroom
Table 3 reports the students' easy access to Google Classroom. The data indicate that almost all students responded "strongly agree" regarding the ease of access to GC. There are five aspects in this part of the questionnaire, with an average score of 4.49. Two aspects received the highest scores, Easy to sign in (4.80) and Easy to send and receive assignments (4.60), while the lowest score fell to Easy to understand the system (4.20). This indicates that some students might still feel unfamiliar with Google Classroom; it could be their first experience with it. The complete scores can be seen in Table 3. Overall, the data reveal averages between agree and strongly agree. From this it can be inferred that the Google Classroom application for learning English was easily accessed by the students, including for accessing course materials, submitting assignments, and receiving tasks from lecturers.
The next table, Table 4, exhibits the perceived usefulness of Google Classroom; that is, the scores indicate the students' perception of whether GC facilitates their English learning. The questions cover the usefulness of Google Classroom in terms of the quality of learning, interaction, submitting assignments, engaging activities, receiving feedback, the grading system, and consistency in course structures. The mean score across all these aspects was 3.93, verbally interpreted as agree. As can be seen in Table 4, the highest mean goes to Punctual assignment submission (4.53). This is plausible, as Google Classroom sends notifications for assignments that are almost due. Meanwhile, the lowest mean goes to Excellent medium for social interaction (3.56). This implies that the students preferred other social media, such as WhatsApp, Instagram, and Facebook, as quick communication channels; in other words, they did not use GC for communication. In spite of these conditions, the overall results indicate that the students agreed that Google Classroom brought benefits to their study.
The next section, Table 5, shows the results for communication and interaction. In fact, this table provides a detailed exploration of the second question of Table 4, i.e., Excellent medium for social interaction; therefore, the average mean score of Table 5 is similar to that of the second question of Table 4. Even though the average mean score was 3.63, verbally interpreted as agree, all scores in Table 5 are below 4, i.e., below agree on the Likert scale. This means that Google Classroom was perceived as a less preferable communication medium, much as indicated by question two of Table 4. Within Table 5, the highest mean goes to Comfortable communication channel with a mean score of 3.86, and the lowest belongs to Enthusiastic lecturers in teaching with a mean score of 3.33.
The following section displays the mean scores of students' opinions on Instructional Delivery. The data indicate the level of students' opinions about how Google Classroom had been used by their lecturers to deliver instruction. Instructional Delivery includes clear instructions, deadlines, course topic updates, participation rules, and giving feedback. The data show that all aspects of Instructional Delivery received scores that are verbally interpreted as agree. This indicates that the students affirmed that their lecturers used Google Classroom for instructional delivery and carried out the necessary activities. For example, the students responded positively to Showing due dates or duration for some activities (4.36), Keeping the participants on course tasks (4.06), Clear course topics (4.00), and Clear instructions for course learning activities (3.93). All of these evidence the use of GC for instructional delivery.
The last part of the questionnaire results reveals the students' satisfaction with the use of Google Classroom in learning English. As presented in Table 7, the students' satisfaction mean score was 3.82, interpreted as agree. In other words, the students were satisfied with using Google Classroom for their English learning.
The Students' Experiences in using Google Classroom for Learning English
Even though not all participants in this study responded to the open-ended questionnaire, some did share experiences of how using Google Classroom intensified activities contributing to their learning. This section presents three main findings from the qualitative data.
In the first place, the participants claimed that Google Classroom has eliminated the borders of the classroom. The sense of anytime and anyplace really materialized when their lecturers used GC for some courses. The students' claims, as can be seen in Figure 1, suggest that their social life in social media and their learning activities in GC are glued together. We have GC chat, IG chat, WA chat, and FB chat. We spent lots of time busy chatting about everything, but GC only for our school.
Now I can't separate between my school and my social life; I do them all at once.
Their interaction continued even after the class had ended and the classroom had closed. The students kept interacting as if it were a normal official classroom process, not as extra hours in which they worked on their usual homework. The interesting part is that they were so attached to their mobile phones that no moment passed without checking them, engaging in their Google Classrooms as well as their social congregations.
In the second place, as can be seen in Figure 2, integrating Google Classroom into classroom processes intensified the students' awareness of their study. In fact, with the technology, they felt they were compelled to focus on their schoolwork.
Figure 2. Excerpt 2
These quotes indicate that the students became more attached to their classroom. The flexibility even required them to increase their commitment to keep returning to their school-related activities in the middle of their social media routines.
Unlike the first two findings, the third point (Figure 3, Excerpt 3) indicates that the students were troubled by this new life and these new habits. They were overwhelmed by chains of attractive information and entertainment on their gadgets and found themselves wasting time on unproductive activities. They were actually aware that they were procrastinating on things they should have done; in most cases, they did not know what exactly they were doing or why. These quotes imply that the students developed an addiction to their gadgets and so kept being distracted from their study.
Discussion
The utilization of various course management systems or 'platforms' in universities is growing as blended and online learning gain popularity. Various studies have reported how instructors used different platforms, including Moodle (Eskandari & Soleimani, 2016; Khabbaz & Najjar, 2015), Edmodo (Balasubramanian, Jaykumar, & Fukey, 2014; Purnawarman, Susilawati, & Sundayana, 2016), Google Classroom (Kumar & Bervell, 2019), WebCT (Salam, 2009), and other online or cloud-based tools. The reason for their popularity is that they promote intensive engagement in learning activities, evidenced by greater interaction among students and instructors. Several studies suggest that the use of course management systems could alter instructional delivery in tertiary education (Kumar & Bervell, 2019; Tao et al., 2018; Yulian & Salam, 2014; Borup et al., 2020).
All of this is in line with the current study's results. The easy access and perceived usefulness of Google Classroom (GC) suggested that the students' daily routine of interacting in social media was glued together with the academic activities facilitated by GC. In this way, the students boosted their commitment to their study and enhanced their learning. For example, the results indicate that, while using WhatsApp, Instagram, Facebook, and other social media accounts to interact with their networks, the students simultaneously participated in GC. They claimed that Google Classroom had brought their course "in front of their face", meaning a close proximity between social and academic intercourse. This reinforces the study of Sulissusiawan and Salam (2017), which claimed that flexible time and environments advanced students' commitment to their learning.
Furthermore, Mercer and Dörnyei (2020) argue that learning commitment is critical for successful learning in the fast-paced reality of the twenty-first century; commitment saves students from the irrelevant activities and unproductive exploration intensified by social media. The introduction of technology does not automatically manifest in task pursuit, because even a technology-savvy student will not always cope well with the various distractions. Students must struggle to stay with school-related activities and academic tasks amid multiple channels of temptation. There are simply too many competing activities on a student's mind in the middle of technology-rich environments. We must ensure that the students' positive commitment wins in the end.
In addition, the current study showed that Google Classroom provided extended opportunities for students to congregate with peers as well as to continue their schoolwork. The data from the questionnaire suggested that GC had helped them submit their assignments on time (4.53). They also claimed that flexibility meant not only being allowed to customize their study hours, but also increasing their commitment to work until midnight. The students' ability to maintain their engagement and work so hard indicates a positive disposition that was not ruined by the abundance of other pressing and ever-salient distractions.
The students in Satar and Akcan's (2018) study held that Google Classroom had a constructive effect on their learning endeavours as well as on social participation with their networks, and they therefore developed positive attitudes toward this kind of technology. Furthermore, the study by Amadin et al. (2018) found that Google Classroom enabled students to achieve better schoolwork and increased learning productivity.
Despite these benefits of having access to information and communication technology, some students still failed to gain the merits of the technology. It is a pity to find students distracted and even harmed by technology. Technology is genuinely created to help humankind, but those who fail to self-regulate will be disadvantaged. This study exemplifies students being trapped by technology due to their inability to prioritize their study and their life. They failed to understand the use of technology for learning. They indeed need help to get back to their Google Classroom and stop procrastinating.
Numerous studies in fact show similar findings: students have mixed experiences and feelings, being motivated and optimistic with access to rich perspectives of information, but at the same time being overloaded with distraction (Sulissusiawan & Salam, 2017). Another study (Conard & Marsh, 2014) found that students, even though they were interrupted by simultaneous information access, managed to accomplish their work; Conard and Marsh argued that their students used some advantageous multitasking strategies. In a similar vein, Judd (2015) investigated the track records of computer logs during students' independent learning sessions. The study found that the students switched tasks frequently, on average every 31 seconds, across Academic, Communication, Information, Recreation, and Applications tasks. However, the students managed to keep almost half of their activities related to their studies.
Conclusion
The current study has provided significant findings about how a course management system, Google Classroom (GC), was experienced by students in their English language learning activities.
Increased shedding of HU177 correlates with worse prognosis in primary melanoma
Background Increased levels of cryptic collagen epitope HU177 in the sera of melanoma patients have been shown to be associated with thicker primary melanomas and with the nodular histologic subtype. In this study, we investigate the association between HU177 shedding in the sera and clinical outcome in terms of disease-free survival (DFS) and overall survival (OS). Methods Serum samples from 209 patients with primary melanoma prospectively enrolled in the Interdisciplinary Melanoma Cooperative Group at the New York University Langone Medical Center (mean age = 58, mean thickness = 2.09 mm, stage I = 136, stage II = 41, stage III = 32, median follow-up = 54.9 months) were analyzed for HU177 concentration using a validated ELISA assay. HU177 serum levels at the time of diagnosis were used to divide the study cohort into two groups: low and high HU177. DFS and OS were estimated by Kaplan-Meier survival analysis, and the log-rank test was used to compare DFS and OS between the two HU177 groups. Multivariate Cox proportional hazards regression models were employed to examine the independent effect of HU177 category on DFS and OS. Results HU177 sera concentrations ranged from 0-139.8 ng/ml (mean and median of 6.2 ng/ml and 3.7 ng/ml, respectively). Thirty-eight of the 209 (18%) patients developed recurrences, and 34 of the 209 (16%) patients died during follow-up. Higher HU177 serum level was associated with an increased rate of melanoma recurrence (p = 0.04) and with increasing mortality (p = 0.01). The association with overall survival remained statistically significant after controlling for thickness and histologic subtype in a multivariate model (p = 0.035). Conclusions Increased shedding of HU177 in the serum of primary melanoma patients is associated with poor prognosis. Further studies are warranted to determine the clinical utility of HU177 in risk stratification compared to the current standard of care.
Background
Limitations of the current melanoma staging paradigm beget limitations in our ability to determine the most appropriate treatment for primary melanoma patients with regard to maximizing therapeutic benefit and minimizing morbidity. Well-characterized clinical prognostic markers such as tumor thickness and ulceration only partly explain the variability in the clinical course of melanoma. Patients with thin melanoma <1 mm, characterized as having a favorable prognosis, have reported rates of metastasis ranging from 3-22% [1]. Conversely, patients with thicker lesions not uncommonly have extended periods of disease-free survival. Although sentinel lymph node biopsy has improved our ability to predict prognosis for patients with intermediate thickness lesions, further markers are needed to determine which of these patients are most likely to develop metastases and thus are most likely to benefit from post-surgical adjuvant therapy.
There is a need for the development of new biomarkers that reflect the underlying melanoma biology. Mitotic rate has recently become part of the American Joint Committee on Cancer staging criteria based on studies demonstrating that its addition to a morphologically based classification system improved risk stratification for patients with thin primary melanoma [2]. Advances in the understanding of melanoma biology have resulted in the discovery of other promising protein biomarkers that are predictive of melanoma-specific mortality and reflective of varying aspects of tumorigenesis, including resistance to antigrowth signals (p16/INK4a), limitless replicative potential (Ki-67), tissue invasion (matrix metalloproteinase-2), and sustained angiogenesis (iNOS) [3]. None of these biomarkers, however, have been adopted into clinical practice, which may be attributable to several reasons including lack of multivariate analyses with subsequent overestimation of prognostic utility [3].
Recent efforts in genomics research have focused on the development of tumor specific and patient specific gene expression signatures that are predictive of clinical outcome or response to treatment. Even in large scale studies, however, the prognostic accuracy of gene classifiers has not yet proven to be superior to thickness and ulceration in predicting metastasis [4]. Furthermore, gene expression profiling typically requires fresh frozen tissue from the surgical resection, and studies of the effect of sampling melanocytic lesions for research have raised concerns about the possibility of compromising the accuracy of the pathologic diagnosis and subsequent staging [5]. At present, the emerging technology is labor-intensive and likely prohibitively expensive for integration into the common clinical practice for melanoma patients. Immunohistochemistry-based biomarkers are also limited by experimental variability, lack of reproducibility, and inter-observer variation in the classification of staining intensities [6]. By contrast, serum-based biomarkers are non-invasive, relatively low cost, and can easily be incorporated into clinical practice as a way to monitor disease progression over time.
It is known that cellular interactions with the extracellular matrix (ECM) can regulate a wide range of biologic functions including adhesion, migration, proliferation, and angiogenesis [7]. Previous studies have identified cryptic regulatory epitopes that, under normal physiologic conditions, are hidden within the 3-dimensional structure of the ECM protein collagen [8,9]. Following proteolytic remodeling of the collagenous ECM during tumor growth and invasion, however, these unique cryptic epitopes are exposed and shed into the serum. Cryptic collagen epitope HU177 has been specifically associated with increased angiogenesis and tumor growth in vivo [9]. We have successfully developed an ELISA assay to detect and quantify levels of cryptic epitope HU177 in the serum of melanoma patients and demonstrated that the level of HU177 correlated with tumor thickness and with the nodular histologic subtype [10]. In the current study, we sought to determine the prognostic relevance of HU177 serum levels. We demonstrate that HU177 shedding in the sera is associated with increased recurrence and decreased overall survival independent of tumor thickness suggesting that it may have potential as a biomarker of aggressive disease in primary melanoma. Additionally, HU177 serum levels may be useful in the stratification of patients for inclusion in clinical trials of anti-angiogenesis based chemotherapeutics.
Methods
The study cohort consisted of 209 primary melanoma patients prospectively enrolled in the Interdisciplinary Melanoma Cooperative Group (IMCG) at the New York University (NYU) Langone Medical Center between September 2002 and November 2006. Demographic and clinicopathologic data were recorded prospectively for all patients, and patients were followed through July 2008. Follow-up ended in July 2008 to allow sufficient time for data auditing, which was completed by December 2008. The NYU Institutional Review Board approved this study and informed consent was obtained from all patients at the time of enrollment.
All blood samples were collected at the time of primary melanoma diagnosis in 10 ml BD serum tubes, stored immediately at 4°C, and then centrifuged at 10°C for 10 minutes at 1,500 × g. In 178 patients, serum was collected after surgery. In 29 patients, serum was collected on the day of surgery, and in 2 patients, serum was collected before surgery. Previously published results demonstrated that time of collection does not influence the relationship between HU177 level and tumor characteristics [10]. The supernatant serum was aliquoted into 1.5 ml cryovials and stored at -80°C until further use. All samples studied with the ELISA assay were subjected to only one freeze-thaw cycle.
HU177 cryptic epitope concentration (ng/ml) was quantified by a capture assay described in detail previously [10]. Briefly, 96-well microtiter plates were coated with a monoclonal antibody to HU177. Patient samples and denatured collagen IV standards were incubated in each well in triplicate, followed by incubation with biotinylated anti-collagen IV antibody (Southern Biotech, Birmingham, Alabama), subsequently with anti-biotin monoclonal antibody conjugated to horseradish peroxidase (Sigma Aldrich, St. Louis, Missouri), and lastly with 3,3',5,5'-tetramethylbenzidine (TMB) substrate. Substrate absorbance was measured at 400 nm using a model 680 Bio-Rad microplate reader (Bio-Rad Laboratories, Hercules, California). Although there is no true positive or negative with which to determine the sensitivity and specificity of the assay, the accuracy of the levels was determined using a standard curve of known concentrations of denatured collagen that ranged from 0 to 40 ng/ml and was fit with either a linear or a second-degree polynomial equation (r² ≥ 0.993), from which the concentration of cryptic epitope in patient samples was extrapolated [10]. Random samples were also subjected to additions of 100 ng denatured collagen, and recoveries were equal to the endogenous level plus the external spike. Investigators performing the HU177 ELISA assay were blinded to clinicopathologic data.
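To make the quantification step concrete, the sketch below fits a second-degree polynomial standard curve and inverts it to estimate the epitope concentration in a patient well. The standard concentrations and all absorbance values are invented for illustration; the study reports only that linear or second-degree polynomial fits with r² ≥ 0.993 were used.

```python
# Illustrative standard-curve quantification for the HU177 ELISA.
# Standards and absorbance values are INVENTED; the study reports only
# that curves were fit with a linear or second-degree polynomial (r^2 >= 0.993).
import numpy as np

# Known denatured collagen IV standards (ng/ml) and hypothetical absorbances
std_conc = np.array([0.0, 2.5, 5.0, 10.0, 20.0, 40.0])
std_abs = np.array([0.05, 0.17, 0.30, 0.53, 0.97, 1.73])

# Fit absorbance as a second-degree polynomial of concentration
coeffs = np.polyfit(std_conc, std_abs, deg=2)
fit = np.poly1d(coeffs)

# Goodness of fit (the study required r^2 >= 0.993)
ss_res = np.sum((std_abs - fit(std_conc)) ** 2)
ss_tot = np.sum((std_abs - std_abs.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
assert r2 >= 0.993, "standard curve too poor to use"

def concentration_from_absorbance(a):
    """Invert the standard curve: solve fit(c) = a for the smallest non-negative root."""
    roots = (fit - a).roots
    real = [r.real for r in roots if abs(r.imag) < 1e-9 and r.real >= 0]
    return min(real) if real else float("nan")

sample_abs = 0.42  # hypothetical patient-well absorbance
print(f"Estimated HU177: {concentration_from_absorbance(sample_abs):.2f} ng/ml")
```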
Descriptive statistics were calculated for baseline demographic and clinicopathologic characteristics. HU177 values were dichotomized into two groups using the mean (6.2 ng/ml) and median (3.7 ng/ml) values determined previously in this cohort [10]. The chi-square test or Fisher's exact test, as appropriate, was used to compare recurrence and mortality proportions between the two HU177 categories. Disease-free survival (DFS) and overall survival (OS) were estimated by Kaplan-Meier survival analysis, and the log-rank test was used to compare DFS and OS between the two HU177 groups. Multivariate Cox proportional hazards regression models were employed to examine the effect of HU177 category (e.g., ≤ 3.7 ng/ml vs. > 3.7 ng/ml) on DFS and OS, adjusting for tumor thickness (continuous), histologic subtype (nodular/other melanoma vs. superficial spreading melanoma), and ulceration status. The proportional hazards assumption was evaluated by statistically assessing the interaction of each predictor variable with time in the model. In addition, Schoenfeld residuals for each predictor variable in the model were examined when evaluating the proportional hazards assumption. All p-values were two-sided, with statistical significance evaluated at the 0.05 alpha level. Ninety-five percent confidence intervals (95% CI) were calculated to assess the precision of the obtained estimates. All analyses were performed in SAS Version 9.1 (SAS Institute Inc., Cary, North Carolina) and Stata Version 10.0 (Stata Corporation, College Station, Texas).
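The analyses above were run in SAS and Stata; purely as an illustration of the workflow, the sketch below re-expresses the key steps (dichotomizing HU177 at the median, Kaplan-Meier estimation with a log-rank test, and a Cox model adjusted for thickness and histologic subtype) in Python with the lifelines library, using invented data.

```python
# Illustrative re-expression of the study's survival workflow in Python
# (the actual analyses used SAS 9.1 and Stata 10.0). All data are INVENTED.
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
n = 209
df = pd.DataFrame({
    "hu177": rng.lognormal(mean=1.3, sigma=1.0, size=n),   # ng/ml, skewed like the cohort
    "thickness": rng.gamma(shape=2.0, scale=1.0, size=n),  # mm, continuous
    "nodular": rng.integers(0, 2, size=n),                 # 1 = nodular/other, 0 = SSM
    "os_months": rng.exponential(scale=60.0, size=n),
    "died": rng.integers(0, 2, size=n),
})

# Dichotomize HU177 at the cohort median, as in the paper
df["hu177_high"] = (df["hu177"] > df["hu177"].median()).astype(int)

# Kaplan-Meier curves and log-rank test between the two HU177 groups
low, high = df[df.hu177_high == 0], df[df.hu177_high == 1]
km_low = KaplanMeierFitter().fit(low["os_months"], low["died"], label="HU177 low")
km_high = KaplanMeierFitter().fit(high["os_months"], high["died"], label="HU177 high")
lr = logrank_test(low["os_months"], high["os_months"], low["died"], high["died"])
print(f"log-rank p = {lr.p_value:.3f}")

# Cox proportional hazards model adjusting for thickness and histologic subtype
cph = CoxPHFitter()
cph.fit(df[["os_months", "died", "hu177_high", "thickness", "nodular"]],
        duration_col="os_months", event_col="died")
cph.print_summary()
```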
Results
Clinical and pathologic characteristics of the 209 patients in the study population are presented in Table 1. The median follow-up time for survivors was 54.9 months. Follow-up ranged from 2 months to 81 months, with the lower end resulting from loss to follow-up or study withdrawal prior to the end of the study period. Thirty-eight of the 209 (18%) patients developed recurrences and/or metastases (13 skin, 8 lymph node, 17 visceral), and 34 of the 209 (16%) patients died during follow-up. The mean and median HU177 levels (ng/ml) for the entire cohort were 6.2 and 3.7 (range 0.003-139.8), respectively. The number of recurrences, deaths, and median HU177 levels by melanoma stage are displayed in Table 2.
The HU177 level was greater than the mean HU177 level of the cohort (6.2 ng/ml) in 59 patients (28%) and greater than the median concentration (3.7 ng/ml) in 106 patients (51%) (Figure 1). Because the distribution of HU177 levels was positively skewed, we analyzed the data using the median in addition to the mean. Analyses based on both the mean and the median HU177 concentration are provided to allow a comparison of the two distinct cut points; however, the use of the median HU177 value as a categorical cut point is emphasized in our results.
Elevated HU177 concentration is associated with increased melanoma recurrence
HU177 sera concentration greater than the median (3.7 ng/ml) was associated with a higher recurrence rate than HU177 sera concentration less than or equal to the median (23.6% vs. 12.6%; p = 0.04). This association remained statistically significant when the mean (6.2 ng/ml) was used to dichotomize the HU177 distribution (27.1% vs. 14.7%; p = 0.04). Kaplan-Meier survival analysis demonstrated improved DFS for patients with HU177 sera concentration less than or equal to the median compared to patients with sera concentration greater than the median (p = 0.04 by log-rank test) (Figure 2).
Elevated HU177 concentration is associated with increasing mortality
HU177 sera concentration greater than the median (3.7 ng/ml) was associated with a higher mortality rate than HU177 sera concentration less than or equal to the median (22.6% vs. 9.7%; p = 0.01). The observed association remained statistically significant when the mean HU177 level was used to dichotomize the HU177 distribution (28.8% vs. 11.3%; p = 0.002). Kaplan-Meier survival analysis demonstrated improved OS for patients with HU177 sera concentration less than or equal to the median compared to patients with sera concentration greater than the median (p = 0.01 by log-rank test) (Figure 3).
HU177 concentration is associated with disease-free and overall survival after adjustment for tumor thickness and histologic subtype
Because the number of recurrences in the cohort was relatively low (n = 38), the most balanced multivariate model included 3 variables inclusive of the epitope concentration. The variables most strongly correlated with epitope concentration in the univariate analyses (histologic subtype and thickness) were included in the multivariate model. High levels of HU177 remained an independent prognostic factor for DFS and OS when controlling for tumor thickness and histologic subtype (Table 3). The proportional hazards assumption was not violated for any of the predictor variables in the DFS and OS models. If ulceration is included in the multivariate model (instead of histologic subtype), the independent prognostic value of the HU177 level remains statistically significant (DFS, p = 0.048; OS, p = 0.048), and tumor thickness loses its predictive significance (DFS, p = 0.257; OS, p = 0.199) (not shown). This suggests that these variables are collinear and thus only one should be added to the model. Because thickness was more closely associated with epitope concentration than ulceration in the univariate analysis, it was entered into the multivariate model along with histologic subtype.
Regarding the impact of sentinel lymph node (SLN) data, only 100/209 (48%) patients had SLN biopsies performed, thus its influence on survival could only be meaningfully assessed on a univariate analysis. A subset analysis, however, of the 100 patients who underwent SLN biopsy showed that SLN status was a significant predictor of both DFS (HR 3.73, 95% CI = 1.75-7.94; p = 0.0006) and OS (HR 2.58, 95% CI = 1.00-6.68; p = 0.05) on univariate analysis.
Discussion
Our results suggest that pro-angiogenic cryptic collagen epitope HU177 may have prognostic significance as a biomarker of poor outcome in primary melanoma. Higher levels of HU177 were associated with an increased rate of recurrence and increasing mortality.
Clinical decision making in the care of melanoma patients is based primarily on tumor morphology as thickness and ulceration consistently prove to be the most accurate predictors of survival [11]. Sentinel lymph node biopsy has been shown to be predictive of recurrence, but it is typically only considered standard of care for patients with intermediate thickness lesions. Our previously reported meta-analysis demonstrated that few patients with thin melanoma have a positive SLN, and there are no clinical or histopathologic criteria that can reliably identify thin melanoma patients who might benefit from this intervention [12]. As reflected in our cohort in which 51% of patients have melanomas <1 mm thick, trends in downward stage migration mean that a larger percentage of newly diagnosed melanoma patients will not be considered for SLN biopsy but could nonetheless benefit from non-invasive serologic prognostic markers.
A number of sera markers have been evaluated for their prognostic significance in primary melanoma with limited success. For example, the angiogenic factors vascular endothelial growth factor (VEGF), basic fibroblast growth factor (bFGF), interleukin-8 (IL-8), and angiogenin have been studied for their value in predicting outcome. One study reported that elevated concentrations of VEGF independently correlated with poor overall survival [13]. The results, however, have not been replicated by other investigators [14,15]. Similarly, IL-8 and bFGF were found to be independent predictors of overall survival [13], but additional studies to validate these findings are pending. Angiogenin showed less promise: serum levels were not found to correlate with outcome [13]. Other candidates such as S-100 beta, a well-established diagnostic marker for melanoma by immunohistochemistry, have been found to have limited prognostic relevance in early stage melanoma [16]. A serum-based marker of aggressive biology such as HU177 has the potential to identify primary melanoma patients at high risk for the development of distant metastases who should be treated in the post-surgical adjuvant period. Even if the appropriate risk stratification tools were developed, however, current data suggest that adjuvant therapy with interferon fails to confer a survival advantage [17]. Thus, it is imperative that the development of prognostic biomarkers and the development of novel molecularly targeted therapy occur simultaneously. Our results showing a correlation between pro-angiogenic collagen epitope HU177 and worse overall survival suggest that targeting angiogenesis in the post-surgical adjuvant period may be a rational approach for patients with primary melanoma. A shift in the balance between pro- and anti-angiogenic peptides towards angiogenesis promotes neovascularization, which is essential for tumor progression among other processes. Angiogenesis has been successfully targeted in other malignancies, resulting in the FDA approval of the anti-VEGF agent bevacizumab for use as combination therapy in the treatment of metastatic colorectal and non-small cell lung cancer [18,19]. The utility of anti-angiogenic therapy in melanoma, however, has not been clearly defined. Since metastatic melanoma has a poor prognosis, anti-angiogenic treatments that delay melanoma progression would have a great impact on cancer-specific mortality. We have already shown the potential utility of HU177 in prognosis, but it may also serve as a therapeutic target, similar to bevacizumab but with its effect prior to metastasis. Metastasis requires changes in the vascular basement membrane, of which type IV collagen is a part. Both the pro-angiogenic factor HU177 and the angiogenesis inhibitor tumstatin are type IV collagen cleavage products. Disruption of the balance between pro- and anti-angiogenic peptides promotes neovascularization. Treatments targeting pro-angiogenic factors, such as HU177, appear to be more clinically relevant.
(Figure 3 caption: Kaplan-Meier analysis for overall survival by median epitope concentration. Patients with elevated HU177 concentrations above the median value demonstrated a reduced overall survival probability compared to patients with HU177 concentrations below the median (HU177 > 3.7 ng/ml: n = 106 patients, 24 deaths; HU177 ≤ 3.7 ng/ml: n = 103 patients, 10 deaths; p = 0.01 by log-rank test).)
A recent study demonstrated that tumstatin slows tumor growth in renal cell carcinoma and colorectal cancer cell lines, but all tumors eventually escaped tumstatin-induced growth inhibition and entered into an exponential growth phase. This rapid growth was shown to result from an up-regulation of genes encoding pro-angiogenic peptides, possibly in response to hypoxic conditions. Genes encoding anti-angiogenic factors were not silenced [20]. Another study investigating carboplatin/paclitaxel/bevacizumab combination therapy in stage IV melanoma demonstrated that the addition of bevacizumab was well tolerated and the median overall survival was higher than in previous reports of single agent treatment with dacarbazine (52 weeks vs. 25.6 weeks) [21]. Although limited conclusions can be drawn from this uncontrolled trial, the results do suggest that targeting angiogenesis, in particular pro-angiogenic factors, as part of a combination chemotherapy regimen may be a useful strategy.
The association between pro-angiogenic HU177 and poor prognosis in our study is consistent with other serum biomarker studies that have identified VEGF and serum angiopoietin-2 (sAng-2) as useful predictors of response to therapy. In a study of 59 patients with metastatic melanoma or renal cell carcinoma receiving high dose recombinant interleukin-2 (IL-2), serum was collected and analyzed for potential biomarkers of response using a customized protein array platform. Serum VEGF and fibronectin were shown to be independently predictive of response to IL-2 [22]. Another serum biomarker study of 98 patients with stage I-IV melanoma identified an increase in sAng-2 levels by 50-400% in 90% of patients during progression from stage III to IV melanoma leading authors to conclude that sAng-2 levels are associated with disease progression in metastatic melanoma [23]. Both of these studies, however, are focused on biomarkers of advanced disease. A notable advantage of our study is that 65% of patients included had stage I melanoma, and the level of HU177 shedding in the serum was predictive of decreased overall survival independent of tumor thickness. Because HU177 has potential as a biomarker that can be utilized early in the disease course, there is perhaps a greater chance that it will influence the clinical decisions that alter the disease course and ultimately impact outcomes.
Our findings emphasize the role of interactions with the cellular microenvironment as potential targets for therapy and biomarker development. A key limitation of current in vitro and in vivo models is that they often overlook the contribution of the ECM and the tumor microenvironment toward the initiation and progression of tumorigenesis. Increasing evidence, however, supports the notion that melanoma cells interact with the adjacent microenvironment in a bi-directional manner through molecular signals that can modulate the malignant phenotype [24]. Previous in vivo studies of HU177 demonstrated that cleavage of type IV collagen during ECM remodeling led to exposure of cryptic regulatory sites, such as HU177, and that an antibody directed at the HU177 cryptic site inhibited cell adhesion, migration, and proliferation on denatured collagen type IV [25]. It is thought that the HU177 measured in sera is shed not from the tumor but from the tumor microenvironment. Thus, while current efforts to target VEGF and other pro-angiogenic factors whose expression is regulated by the melanoma cell have thus far been unsuccessful, our approach focused on non-cellular epitopes as new targets for biomarkers and treatment is novel and highly selective. Preliminary data from preclinical trials demonstrate that anti-HU177 mAB TRC093 significantly enhances the anti-tumor activity of bevacizumab in a melanoma mouse xenograft model demonstrating the potential utility of monitoring HU177 as part of an anti-angiogenic therapeutic strategy [26].
We demonstrate that HU177 levels are associated with worse outcome independent of tumor thickness. These results emphasize that, while the shedding of HU177 is associated with tumor remodeling and invasion, it is not merely a surrogate read-out of thickness. In the multivariate model, although the p-value for tumor thickness is lower than that for epitope concentration, the hazard ratio for the epitope concentration is more than double that of thickness (Table 3). Because thickness was analyzed as a continuous variable and HU177 epitope concentration was evaluated as a categorical variable (high vs. low), a true comparison between the strength of these two prognostic factors cannot be undertaken. The analysis demonstrates, however, that HU177 maintains its prognostic value independent of well-characterized prognostic variables that constitute the current standard of care.
Conclusions
High levels of cryptic collagen epitope HU177 are associated with higher recurrence rates and increasing mortality. HU177 shows promise as a serum biomarker that is reflective of melanoma biology, that can be easily integrated into the clinical management of melanoma patients, and which may have potential as a molecular target for adjuvant therapy. These data justify further validation studies in a larger, independent cohort.
Making Environmental Biology Central to a Course in Biology for Engineers
Engineers dealing with biological systems need to know how these systems interact with their physical, chemical, and biological environments when they propose solutions to problems involving living things. This awareness should start with their undergraduate education that includes an introduction to biological science. Such a course, developed and taught at the University of Maryland, is described in this paper.
Introduction
The opportunities in biology these days are vast, and all engineers should learn about and appreciate them. This extends even to all members of the general population, because there are and will be increasing numbers of ethical and societal issues involving modern biology. Engineers and other technologists, in particular, need to know how living things are and can be used to produce solutions to problems of human concern. Introductory biological science courses are, therefore, being required for all engineers at an increasing number of academic institutions.
Relying on traditional introductory biology courses taught to engineers does not generally satisfy engineering needs. This is because engineers are educated to become designers and problem solvers, and have a different outlook from traditional biologists. Especially in environmental biology, engineers are tinkerers and controllers rather than passionate observers. A different approach to teaching biology to engineers is needed.
The biological system
A main requirement for a course in biology for engineers is a comprehensive systems approach emphasizing environmental effects. Living things are influenced by, and themselves influence, their surroundings. At all hierarchical levels, hereby designated Biological Units (BU), these units interact with their entire physical, chemical, and biological environments. These relationships are diagrammed simply in Figure 1, with the arrows on the lines connecting a BU to its environmental elements pointing in both directions. This means that BU are affected by their environments, but they also affect these environments and make them different.
This simple diagram is central to understanding of biology. Engineers and others need to realize that living things are not passive and compliant. Whenever living things are included in an engineering solution, they move, they change, and they affect the rest of the solution, sometimes so much that they may cause additional problems larger than the one that originally existed. With this appreciation, engineers can, at least, be aware of the challenges that they face when trying to fit living things into their designs.
Expectations
This brings us to the three major expectations for biological engineers [1]:
1. Possess the knowledge of biological principles and generalizations that can lead to useful products and processes.
2. Have the ability to transfer information known about familiar living systems to those unfamiliar.
3. Know enough to avoid or mitigate the unintended consequences of dealing with any living system.
All engineers who deal with biological systems should, at least, have a level of awareness that, in biology, they are dealing with something more complex than steel, mechanical widgets, or computer programs.
Environmental components
The three environmental elements -physical, chemical, and biological -deserve explanation. All engineering is based on physics, so engineering students have always had one or more basic physics courses. In these courses are taught such things as mechanics, electricity, and optics. When considering physics as it relates to biology, however, the emphasis must be on those physical principles that apply particularly to living things. There are physical limitations due to fluidics, energetics, and mechanics that should be emphasized. Living things do not violate physical principles; they conform to them.
Chemistry courses are also a normal part of an engineering curriculum. These can include general chemistry, physical chemistry, and organic or biochemistry, depending on the engineering discipline. All of these have something to contribute to biological understanding. Knowing, for instance, that protein charges can change from positive to negative as solution pH changes can be important. The additional effects of temperature, ionic constitution, enzyme availability, mass action, surface configuration, toxicity, surface energy, and others must be familiar to the biological engineer.
When considering the biological environment, there is a large gap between information taught in traditional introductory biology courses and the information needed by engineers. At low hierarchical levels, there are many chemical interactions, as in quorum sensing by microbes. At higher levels, there are behavioral and psychological contributions to the biological environment. Knowledge is passed from one higher level animal to another (called memes) and changes the biological environment for them both. This range of communication types is hardly ever touched upon in traditional courses, but needs to be appreciated by the engineer designing animal confinement facilities or automobile dashboards, for instance.
Course structure
The course taught at the University of Maryland has five basic units:
1. Comparison between biological science and biological engineering.
2. Basic sciences related to biology.
3. Biological responses to environmental stimuli.
4. Scaling.
5. Biological engineering applications.
In the first unit, students are introduced to differences in philosophy, motivation, and methods used by scientists and engineers dealing with biology. The scientific method, central to science, and modeling, central to engineering, are covered. Scientific predictions are largely hypotheses, but engineering predictions are designs.
Once students appreciate that this biology course is to be different from a traditional biology course, they are taught topics in physics, chemistry, biology, mathematics, and engineering sciences that help to define the physical, chemical, and biological environments surrounding living things. This section is broad and foundational, and serves as a background for the next unit detailing typical biological responses to environmental stimuli.
The next section deals with many environmental factors, including the presence of oxygen, water, wastes, toxins, heat, mechanical stresses, and other living things. Included in this section are communications, optimization, redundancies, antagonistic actions, sensing and control, cycling, competition, cooperation, and reproduction. Death is considered in the context of product reliability, a topic all engineers should know about. It is in this unit that students are particularly primed to consider environmental factors important to biological responses.
Scaling is the ability to project the magnitude of a trait known for one type of organism to that for another, unfamiliar type. It allows biological engineers to make quantitative estimates for their designs. It is, by far, the most quantitative unit of the course. Students are introduced to many allometric relationships, usually based on power-law principles, of the kind sketched below. Students are not expected to memorize these equations, but, instead, to be aware of general trends.
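A minimal sketch of such a power-law projection, using Kleiber's classic metabolic-rate exponent of roughly 0.75; the human reference values are rough textbook figures and the function name is ours, not material from the course:

```python
# Allometric scaling sketch: trait = a * mass**b implies a trait known at
# one body mass can be projected to another mass by a power of the ratio.

def scale_trait(known_trait, known_mass, new_mass, exponent):
    """Project a trait to a new body mass, assuming trait ~ mass**exponent."""
    return known_trait * (new_mass / known_mass) ** exponent

# A ~70 kg human has a basal metabolic rate of roughly 80 W (Kleiber: b ~ 0.75).
for mass_kg in (5000.0, 0.02):   # an elephant and a mouse
    watts = scale_trait(80.0, 70.0, mass_kg, 0.75)
    print(f"{mass_kg} kg -> ~{watts:.2f} W")
# Prints roughly 2000 W for the elephant and 0.2 W for the mouse -- the
# general trend matters here, not the exact coefficients.
```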
The last section deals with applications drawn from biotechnology, biomedicine, and bioenvironmental engineering. The range of applications is as comprehensive as possible so that students can see connections and discern principles among all possible applications.
Examples introduced in the course are meant not to emphasize one possible application over another, but to be balanced among all possible application types. There are examples from environment, food, biotechnology, medicine, psychology, ecology, physiology, human factors, and imaging, as well as others. In this way, students can see biology as a wonderfully broad opportunity for engineering activity. When the fundamental nature of environmental-biological interactions is emphasized, real-life designs have a greater chance of success.
Textbook and teaching materials
This course is, unfortunately, unique at this time. Hence, there were initially no textbooks or teaching materials to support it. However, a book, Biology for Engineers [2], has been written and newly published following the course structure described above. The book had been in development for ten years and, as a result, has been thoroughly student-tested. Reviews of the book have been positive [3,4], and, with the availability of this textbook, courses covering biology in this way should become more common [5].
Other supporting materials are being developed. Multiple-choice examination questions are available from the author, and letting students see these questions without the answers has been used successfully to help them learn the vast amount of material in this course. An extensive addendum covering recent advances in biology is also available [6]; this addendum is updated frequently. PowerPoint slides may become available from instructors teaching similar courses at other institutions.
Outcomes
Student responses have been positive. One student remarked that this course is appropriate for those contemplating which area of biological engineering specialty to choose. Another, a biology student, used the course to explore what biological engineering is about and decided to choose engineering as a major. All students have been made aware of the intricate interplays between living things and their environments.
An integrable generalization of the sine-Gordon equation on the half-line
We analyze a generalization of the sine-Gordon equation in laboratory coordinates on the half-line. Using the Fokas transform method for the analysis of initial-boundary value problems for integrable PDEs, we show that the solution $u(x,t)$ can be constructed from the initial and boundary values via the solution of a $2\times 2$-matrix Riemann-Hilbert problem.
Introduction
We consider the following integrable generalization of the sine-Gordon equation:

\[
u_{xx} - u_{tt} = \bigl[1 + \nu(\partial_x + \partial_t)^2\bigr]\sin u(x,t), \qquad x, t \in \mathbb{R}, \tag{1.1}
\]

where u(x, t) is a real-valued function and ν ∈ ℝ is a parameter; note that (1.1) reduces to the sine-Gordon equation in laboratory coordinates when ν = 0. In terms of the 'light-cone' coordinates (ξ, η) defined by (1.2), equation (1.1) takes the form (1.3) introduced by Fokas (1995). It is related to the sine-Gordon equation in the same way that the Camassa-Holm equation is related to the KdV equation.
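Under the common normalization ξ = (x + t)/2, η = (x − t)/2 — an assumed convention on our part, since the display (1.2) is not reproduced here — the light-cone computation would read:

```latex
% Assumed convention for (1.2): xi = (x+t)/2, eta = (x-t)/2,
% so the derivatives transform as
\[
\partial_\xi = \partial_x + \partial_t, \qquad
\partial_\eta = \partial_x - \partial_t, \qquad
\partial_x^2 - \partial_t^2 = \partial_\xi\,\partial_\eta .
\]
% Substituting into (1.1) gives the light-cone form (a candidate for (1.3)):
\[
u_{\xi\eta} = \bigl(1 + \nu\,\partial_\xi^{2}\bigr)\sin u .
\]
```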
In this paper, we will assume that ν < 0 and for simplicity set ν = −1. For this value of ν, equation (1.3) appeared in Sakovich and Sakovich (2007), where it was shown to be related, via certain transformations, to an integrable equation which describes pseudospherical surfaces introduced in Rabelo (1989). The inverse scattering transform (IST) formalism on the line for equation (1.3) with ν = −1 was implemented in Lenells and Fokas (2010).
A method for the analysis of initial-boundary value (IBV) problems for nonlinear integrable PDEs was announced in Fokas (1997) and subsequently developed further by several authors; see Fokas (2008).

Figure 1. The half-line domain Ω with respect to the laboratory and light-cone coordinates.

Here, we use this method to study equation (1.1) in the half-line domain Ω = {0 < x < ∞, 0 < t < T}, where T ≤ ∞ is a given final time; see Figure 1. Given initial values at t = 0 and boundary values at x = 0 such that the corresponding IBV problem for (1.1) in the domain Ω has a solution u(x, t), we show that u(x, t) can be constructed via the solution of a 2×2-matrix Riemann-Hilbert (RH) problem. The main notable features, as compared with other applications of the methodology of Fokas (1997), are the following. (1) The formulation of the RH problem suggested by the above methodology depends, in addition to the variables (x, t), on a function p(x, t) which is unknown from the point of view of the inverse problem. In order to formulate a RH problem whose jump matrix involves only known quantities, we have to reparametrize the x and t variables. A similar situation occurs in the analysis of the half-line problem for the Camassa-Holm equation, where, however, only the x-variable has to be reparametrized; see Boutet de Monvel and Shepelsky (2008). (2) Only certain combinations of u and its derivatives can be recovered from the RH problem. Therefore, in addition to solving the RH problem, the reconstruction of u(x, t) involves finding the solution of a nonlinear ODE. Following the ideas of Lenells and Fokas (2010), we show that this ODE can be reduced to a Riccati equation.
(3) The adopted Lax pair for equation (1.1) has singularities at λ = ∞ and λ = 0, where λ ∈ Ĉ = ℂ ∪ {∞} denotes the spectral parameter. In order to define eigenfunctions which are bounded on the whole Riemann λ-sphere, we will use two different representations of the Lax pair. These representations are suitable for the definition of eigenfunctions which are bounded near λ = ∞ and λ = 0, respectively. The analogous problem for the sine-Gordon equation on the half-line (i.e. for the equation obtained from (1.1) by letting ν = 0) was investigated in Fokas (2004, 2008) and Fokas and Its (1992). We emphasize that although there exists (when ν < 0) a Liouville-type transformation relating equation (1.1) to the sine-Gordon equation (see Sakovich and Sakovich (2007); Lenells and Fokas (2010)), the half-line problems for these two equations are not equivalent, since the Liouville transformation distorts the shape of the domain Ω.
In section 2 we introduce a Lax pair for equation (1.1) and define bounded and analytic eigenfunctions which are suitable for the formulation of a RH problem. The jump matrix of this RH problem can be expressed in terms of certain spectral functions, which are introduced in section 3. Finally, the main result is stated in section 4.
Bounded and analytic eigenfunctions
The Riemann-Hilbert formalism for integrating a nonlinear evolution equation is based on the construction of eigenfunctions of the associated Lax pair. These eigenfunctions are joined together into a bounded and sectionally analytic function on the Riemann sphere of the spectral parameter λ ∈ Ĉ = ℂ ∪ {∞}. A Lax pair suitable for the construction of eigenfunctions which are bounded near λ = ∞ was derived in Lenells and Fokas (2010). For the problem on the line, this Lax pair representation alone was sufficient for the formulation of a RH problem, since the eigenfunctions could be constructed using only the x-part of the Lax pair. For the problem on the half-line, the construction of eigenfunctions involves also the t-part of the Lax pair, which has a singularity at λ = 0. We will therefore introduce another representation of the Lax pair suitable for the construction of eigenfunctions which are bounded near λ = 0. Then, according to the methodology of Fokas (1997), we will define solutions of these Lax pair representations by integration from three different corners of the spatial domain Ω. The eigenfunctions which are bounded near λ = 0 and λ = ∞ will be denoted by $\{\mu_j\}_1^3$ and $\{\Phi_j\}_2^3$, respectively. Together, the µj's and the Φj's can be used to formulate a 2×2-matrix RH problem.
Lax pair representations
Applying the change of variables (1.2) to the Lax pair of equation (1.3) derived in Lenells and Fokas (2010), we find a Lax pair (2.2) for equation (1.1), where φ(x, t, λ) is a 2×2-matrix-valued eigenfunction, λ ∈ Ĉ = ℂ ∪ {∞} is a spectral parameter, the functions W±(x, t, λ) are defined by (2.3), and p(x, t) is a real-valued function constrained by (2.4). The equations in (2.4) are compatible since equation (1.1) admits a conservation law. We choose p(x, t) so that p(0, 0) = 0; this is the normalization (2.6).
Despite the form of the denominator of the last term in (2.3), the functions W± do not have singularities at points where 1 ± cos u = 0. Indeed, using equation (1.1), the last term on the right-hand side of (2.3) can be rewritten in a form that is manifestly nonsingular.
The functions W± have the following properties: tr(W±(x, t, λ)) = 0, together with symmetry relations formulated in terms of A†, where A† denotes the complex-conjugate transpose of a matrix A. The last two of these properties ensure that the eigenfunction φ(x, t, λ) can be normalized appropriately. The Lax pair (2.2) is convenient for the definition of eigenfunctions which are bounded near λ = ∞. In order to define eigenfunctions which are bounded near λ = 0, we transform the Lax pair as follows. Let I denote the 2×2 identity matrix. The gauge transformation φ(x, t, λ) = g(x, t)ψ(x, t, λ), with g(x, t) as in (2.9), yields a new Lax pair (2.10) whose coefficient matrices are denoted V_j = V_j(x, t, λ), j = 1, 2.
The form (2.9) of g is motivated by the fact that it diagonalizes the highest-order terms of the Lax pair (2.2) as λ → 0 and that it satisfies the relations (2.12). The relations (2.12) ensure that the gauge transformation (2.8) preserves the symmetry properties listed above. The function g(x, t) is nonsingular as u_x + u_t → 0, despite the form of the right-hand side of (2.9). The functions V_1 and V_2 enjoy properties analogous to those of W±.
Eigenfunctions bounded near λ = 0
In this subsection, we define solutions of (2.10) which are well-behaved near λ = 0. Introducing an eigenfunction µ by (2.14), we find that the Lax pair (2.10) becomes (2.15). This can be written in differential form as (2.16), where σ̂₃ acts on a 2×2 matrix A by σ̂₃A = [σ₃, A], and the closed one-form W(x, t, λ) is defined by (2.17).

Figure 3. The sets $\{D_j\}_1^4$ in the complex λ-plane.
We define three eigenfunctions $\{\mu_j\}_1^3$ of (2.16) by the integral equations (2.18). Since the one-form W is exact, the integral on the right-hand side of (2.18) is independent of the path of integration. We choose the particular contours shown in Figure 2; this choice implies the relations (2.19) along the contours. Consequently, the second column vectors of µ₁, µ₂, µ₃ are analytic for λ ∈ Ĉ such that λ belongs to D₃, D₄, and D₁ ∪ D₂, respectively; see Figure 3. Moreover, away from λ = ∞, where the Lax pair is singular, they have continuous and bounded extensions to the closures of these sets. We will denote these vectors with the superscripts (3), (4), and (12) to indicate these boundedness properties. Similar conditions are valid for the first column vectors. The µj's are suitable for the formulation of a RH problem, except that they have singularities at λ = ∞. Our strategy is therefore to cut out a neighborhood of λ = ∞ and use the Lax pair (2.2) to define eigenfunctions which are bounded in this neighborhood.
Eigenfunctions bounded near λ = ∞
The form of the Lax pair (2.2) is convenient for the definition of eigenfunctions which are well-behaved near λ = ∞. Introducing an eigenfunction Φ by (2.23), we find that the Lax pair takes the form (2.24); this can be written in differential form as (2.25). We define two eigenfunctions Φ₂ and Φ₃ of (2.25) by the integral equations (2.27), where (x₂, t₂) = (0, 0) and (x₃, t₃) = (∞, t). The functions Φ₂ and Φ₃ are the analogs of µ₂ and µ₃ defined in (2.18); the analog of µ₁ is not needed, since we only consider a neighborhood of λ = ∞. Choosing the integration contours in Figure 2, the integral equations (2.27) defining Φ₂ and Φ₃ become explicit. The second column of the integral equation for Φ₂ involves exponentials whose exponents are controlled because the functions p(·, t) and p(x, ·) are nondecreasing. Define R > 0 by (2.29); we will henceforth assume conditions on the data ensuring that R is finite. The relations (2.19), together with the definition of R, imply the inequalities (2.31) on the contour (x₂, t₂) → (x, t); the first of these inequalities is a consequence of estimates which hold on this contour.
The inequalities (2.31) imply that [Φ₂]₂ is bounded and analytic for λ ∈ D₆. Similar considerations apply to the other column vectors of Φ₂ and Φ₃, and we deduce that Φ₂ and Φ₃ have the corresponding boundedness properties.
Proposition 3.1 The spectral functions a(λ) and b(λ) have the following properties: (i) a(λ) and b(λ) are analytic for λ ∈ D₁ ∪ D₂ and continuous and bounded for λ ∈ D̄₁ ∪ D̄₂.
Proof. The properties denoted by (i) and (ii) follow from the discussion in subsection 2.2 and the observation that the definition of µ₁(0, t, λ) implies that this function has the enlarged domain of boundedness (3.5). The properties denoted by (iii) follow from (3.3). □
We will also need the spectral function s∞(λ) defined by (3.6). Evaluation of (3.6) at (x, t) = (0, 0) then yields s∞(λ) explicitly. Just like s(λ), the function s∞(λ) is defined in terms of the initial data u(x, 0) and u_t(x, 0). Moreover, s∞ satisfies det s∞(λ) = 1 (3.8) and can be written in terms of two complex-valued functions a∞(λ) and b∞(λ).
Proposition 3.2 The spectral functions a∞(λ) and b∞(λ) have the following properties: (i) a∞(λ) and b∞(λ) are analytic in D₅ with continuous and bounded extensions to λ ∈ D̄₅.
Proof. Property (i) follows from the discussion in subsection 2.3. Property (ii) follows from (3.8). □
The Riemann-Hilbert problem
In this section we use the eigenfunctions $\{\mu_j\}_1^3$ and $\{\Phi_j\}_2^3$ to formulate a RH problem for a 2×2-matrix-valued function with jump contour shown in Figure 5. We will first formulate a RH problem for a 2×2-matrix-valued function M̃(x, t, λ), whose form is suggested by the methodology of Fokas (1997). However, it turns out that the jump matrix for this RH problem depends on the function p(x, t) which occurs in the Lax pair (2.24). The function p(x, t) is unknown from the point of view of the inverse problem, and thus the solution is not yet effective. We will overcome this problem by introducing new variables (y, η) and formulating a modified RH problem for a 2×2-matrix-valued function M(y, η, λ), whose jump condition is given explicitly in terms of y, η, and λ. The solution u(x, t) of (1.1) can be recovered in parametric form from the asymptotics of M(y, η, λ). A similar reparametrization of the RH problem occurs also in the analysis of other equations such as the Camassa-Holm equation and equation (1.3), although in those cases only one of the variables (x, t) has to be reparametrized; cf. Boutet de Monvel and Shepelsky (2008); Lenells and Fokas (2010).
RH problem for M̃(x, t, λ)
We seek a bounded and sectionally analytic 2×2-matrix-valued function M̃(x, t, λ) which satisfies a jump condition of the form (4.1), where J̃(x, t, λ) is a 2×2-matrix-valued 'jump matrix'. Since the µj's and the Φj's are well-behaved near λ = 0 and λ = ∞, respectively, we define M̃ in terms of the µj's in the regions D₁, D₂, D₃, and D₄, and in terms of Φ₂ and Φ₃ in the regions D₅ and D₆; the methodology of Fokas (1997) suggests the ansatz (4.3) for M̃. The definition of M̃ in D₁ ∪ D₂ ∪ D₃ ∪ D₄, which involves the µj's, includes the prefactor g(x, t). This prefactor is suggested by the relationship (2.8) between eigenfunctions of the Lax pairs (2.15) and (2.24), and its inclusion implies that there exists a jump matrix J̃ such that M̃₊ and M̃₋ are related as in (4.1). With the notation of (4.5), the jump matrices $\{\tilde J_n\}_1^7$ can be determined from the various relations between the eigenfunctions. Indeed, algebraic manipulation of the equations (3.1) leads to expressions for the jump matrices $\{\tilde J_n\}_1^4$ in terms of the spectral functions s(λ) and S(λ). Similarly, algebraic manipulation of equation (3.6) leads to an expression for the jump matrix J̃₆ in terms of the spectral function s∞(λ). To find an expression for the jump matrix J̃₅, we note that the relations (2.8), (2.14), and (2.23) imply that two solutions µ and Φ of (2.15) and (2.24), respectively, satisfy a relation of the form (4.6), where C(λ) is a 2×2 matrix independent of (x, t) and the functions θ(x, t, λ) and θ∞(x, t, λ) are defined by (4.7). In the particular case of µ = µ₃ and Φ = Φ₂, equation (4.6) holds with C(λ) = g(0, 0)s(λ), i.e.

\[
g(x,t)\,\mu_3(x,t,\lambda) = \Phi_2(x,t,\lambda)\,e^{-i\theta_\infty(x,t,\lambda)\sigma_3}\,g(0,0)\,s(\lambda)\,e^{i\theta(x,t,\lambda)\sigma_3}. \tag{4.8}
\]

Equation (4.8), together with (3.1a) and (3.6), provides the required relations between the µj's and the Φj's needed for determining J̃₅. In summary, one arrives at explicit expressions for the J̃n's in terms of the spectral functions and the quantity Γ(λ) defined by (4.10).
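The display (4.1) is referenced but not reproduced above; the standard multiplicative form in this framework, which the surrounding text presupposes, would be (our reconstruction, not the original display):

```latex
% Standard multiplicative jump condition (assumed form of (4.1)), with
% \tilde M_+ and \tilde M_- the boundary values of \tilde M from either
% side of the jump contour of Figure 5:
\[
\tilde M_+(x,t,\lambda) \;=\; \tilde M_-(x,t,\lambda)\,\tilde J(x,t,\lambda),
\qquad \lambda \ \text{on the jump contour}.
\]
```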
RH problem for M (y, η, λ)
In the previous subsection we formulated a RH problem for M̃(x, t, λ) in the Riemann sphere of the spectral parameter λ. However, as noted above, this RH problem does not provide the solution of our initial-boundary value problem. Indeed, the jump matrices $\{\tilde J_n\}_5^7$ involve θ∞. The occurrence of the function p(x, t) in θ∞ implies that the RH problem cannot be formulated in terms of the initial and boundary data alone. To overcome this problem we make two important changes in the formulation of the RH problem: (a) we modify the jump matrix by adding appropriate exponential factors to the definition (4.3); (b) we introduce new variables (y, η) by (4.11), where p(x, t) was defined in (2.6). The jump matrix of the modified RH problem is explicitly given in terms of (y, η, λ) and can thus be formulated in terms of the initial and boundary data alone.
We can now prove the following theorem.
• None of the zeros of a(k) coincides with a zero of d(k).
Then the solution u(x, t) of equation (1.1) is given in parametric form in terms of two real parameters y and η, where ξ(y, η) is defined in terms of the solution of the RH problem, the function α(y, η) is the unique solution of the Riccati equation (4.21c) together with its initial conditions, and M(y, η, λ) is the unique solution of the following 2×2-matrix RH problem: M is meromorphic away from the contour D̄₊ ∩ D̄₋.
Proof. In the case when a(λ) and d(λ) have no zeros, the unique solvability is a consequence of a vanishing lemma. If a(λ) and d(λ) have zeros, the singular RH problem can be mapped to a regular one coupled with a system of algebraic equations; see Fokas and Its (1996). The residue conditions (4.22) can be proved as follows. The general approach of Fokas (2008) implies that M̃ as defined in (4.3) satisfies a residue condition at the points k_j, j = 1, . . . , n₁, where θ(x, t, λ) is given by (4.7). The definition (4.12) of M and the relations (4.16) imply that M satisfies (4.22a). The other residue conditions follow similarly from the corresponding residue conditions for M̃. In order to prove (4.21), we note that the change of variables (4.11) implies, in particular, that

\[
\partial_x + \partial_t = \sqrt{m}\,\partial_y. \tag{4.23}
\]

Moreover, because m = 1 + (u_x + u_t)² = 1 + m u_y², we find that

\[
m = \frac{1}{1 - u_y^2}.
\]

Thus, addition of the two equations in (2.24), together with (4.11), yields equation (4.24). Equation (4.24) implies that Φ admits an expansion whose coefficients Φ⁽¹⁾(x, t) and Φ⁽²⁾(x, t) are independent of λ. Substituting this expansion into (4.24) and considering the terms of O(1) yields the desired relations.
Salmonella Typhimurium gastroenteritis leading to chronic prosthetic vascular graft infection
Introduction. It is estimated that up to 6 % of prosthetic vascular grafts become infected. Staphylococcus aureus is predominant in early infections and coagulase-negative staphylococci are predominant in late infections. Enterobacteriaceae cause 14–40 % of prosthetic vascular graft infections. This is, to our knowledge, the first reported case of Salmonella gastroenteritis causing chronic prosthetic vascular graft infection (PVGI). Case presentation. A 57-year-old woman presented with signs and symptoms of prosthetic vascular graft infection. Three years earlier, she had undergone a prosthetic axillo-bifemoral bypass graft for critical limb ischaemia. The infected prosthetic vascular graft was removed and Salmonella Typhimurium was isolated on culture. In the intervening period, Salmonella Typhimurium had been isolated from a faecal specimen collected during an episode of acute gastroenteritis. Whole-genome sequencing (WGS) showed that the respective Salmonella Typhimurium isolates differed by only a single nucleotide polymorphism (SNP). Salmonella Typhimurium was not isolated on culture of a faecal specimen collected five days following cessation of antimicrobial therapy. Six months after removal of the prosthetic graft, the patient remains under follow-up for her peripheral vascular disease, which currently requires no further surgical intervention. Conclusion. This case has clear implications for the management of chronic PVGI. It is vital to collect high-quality surgical specimens for microbiological analysis, and empirical choices of antibiotics are unlikely to cover all potential pathogens. It may also be prudent to enquire about a history of acute gastroenteritis when assessing patients presenting with chronic PVGI.
INTRODUCTION
It is estimated that 0.5-6 % of prosthetic vascular grafts become infected [1]. Staphylococcus aureus is predominant in early infection and coagulase-negative staphylococci are predominant in late infections. Enterobacteriaceae cause 14-40 % of prosthetic vascular graft infections. To date, only seven cases of prosthetic vascular graft infection (PVGI) involving species of the genus Salmonella have been reported [2]. However, none of these cases presented with a history of gastroenteritis. This is, to our knowledge, the first reported case of Salmonella gastroenteritis causing chronic PVGI.
CASE REPORT
In December 2012, a 54-year-old female presented with critical limb ischaemia, and imaging revealed extensive aorto-iliac occlusive disease. Limb salvage was achieved by means of an extra-anatomic axillo-bifemoral bypass, as the extent of the disease was considered unsuitable for endovascular intervention. A Gelsoft gelatin-impregnated knitted vascular prosthesis was used, and she received clindamycin and gentamicin as prophylaxis for the procedure, as per local guidelines.
Underlying comorbidities included Type 2 diabetes (controlled with Metformin) and severe chronic obstructive pulmonary disease (COPD). In 2006, the patient suffered an out-of-hospital cardiac arrest due to an infective exacerbation of her COPD, and required a prolonged admission to intensive care. Medications prescribed to control her COPD included: a Budesonide and Formoterol combination inhaler; Tiotropium; Theophylline (Uniphyllin Continus); and Montelukast. The patient's history of severe COPD made her unsuitable for open aortic surgery.
In July 2014, the patient developed a diarrhoeal illness, which followed a week-long history of malaise, anorexia and feverish symptoms. She suffered watery diarrhoea four to five times a day, but with no associated abdominal pain and no blood in her stools. She was admitted to her local hospital, where she was treated with intravenous crystalloid until her diarrhoea resolved spontaneously. Culture of a faecal specimen led to the isolation of Salmonella Typhimurium. An environmental health assessment concluded that she had not visited any restaurants and that she always prepared her own food. She was unemployed and had no pets. The patient also denied having ever travelled abroad, and had not recently left the North East of England, where she lives in a coastal town.
In February 2015, the patient presented to her general practitioner (GP) with a five-month history of a painful lump in her left axilla. On examination, the lump was approximately 3 cm in diameter, erythematous, tender and fluctuant, and the GP diagnosed a simple abscess. The GP prescribed oral antibiotic therapy, but this had very limited clinical effect. Therefore, the decision was made to lance the presumed abscess at a local hospital. Only a minimal amount of serosanguinous fluid was evacuated, and no unequivocal pathogens were isolated from a wound swab sent for culture. Following this intervention, the presumed abscess was regularly dressed in the community by a team of district nurses. However, the patient reported a green discharge, sufficient to saturate the dressing on a daily basis. The lump became progressively more painful, despite opiate analgesia, and the patient was eventually unable to abduct her left arm.
In October 2015, the patient was referred to the vascular surgical outpatient clinic. On examination, there was a 0.5 cm ulcer over the lower margin of the left lateral thorax, although there was little in the way of surrounding inflammation. Computed tomography (CT) showed that the lesion on the chest wall clearly communicated with the underlying prosthetic axillo-bifemoral bypass graft. The graft remained fully patent, and the axillary and bilateral femoral anastomoses appeared intact. A diagnosis of low-grade PVGI was made. In view of the patient's multiple comorbidities and limited options for further revascularisation, surgical intervention with prosthetic graft removal was considered high-risk, and a decision was taken to continue with conservative management of the PVGI. At this stage, Staphylococcus aureus and Streptococcus dysgalactiae were cultured from a superficial wound swab. Both organisms were susceptible in vitro to flucloxacillin. The patient was commenced on oral flucloxacillin and, by February 2016, the patient's wound discharge was minimal, with full range of movement of her left arm.
The patient's clinical condition remained stable until September 2016, when she presented with acute-onset pain in the left groin. On examination, there was a large, discharging ulcer in her left groin, measuring approximately 4 cm by 2 cm, with prosthetic graft clearly visible in its base. White cell count (WCC) was 15.49 × 10⁹ l⁻¹ and her C-reactive protein (CRP) was 32 mg l⁻¹. CT angiogram showed an area of fluid or soft tissue attenuation around the graft in the left axilla, and confirmed that the graft was exposed in the left flank and groin (Fig. 1).
In September 2016, it was clear that there was no option other than to remove the graft, due to the risk of life-threatening haemorrhage, but consideration also needed to be given to maintaining lower limb perfusion. Via bilateral percutaneous femoral access, a 14 mm nitinol stent was placed in the aorta, and 'kissing' nitinol stents were then placed down to the left common iliac artery and the right external iliac artery. The efficacy of the stents was then confirmed by temporarily tying off the exposed inguinal graft. This caused no symptoms suggestive of ischaemia in her legs. Subsequently, the infected prosthetic axillo-bifemoral graft was completely removed, using clindamycin and gentamicin as surgical prophylaxis.
INVESTIGATIONS
Tissue and the explanted graft collected at the time of removal of the infected prosthetic axillo-bifemoral bypass graft were cultured on cysteine-lactose-electrolyte-deficient (CLED) agar for at least 18 h at 36 °C in ambient air. A non-fermenting coliform was identified as a species of Salmonella by matrix-assisted laser desorption/ionisation-time of flight mass spectrometry (MALDI-TOF). This was confirmed with an Analytical Profile Index 20E (API 20E), and serological agglutination testing identified it as Salmonella enterica serotype Typhimurium. This was confirmed by the Salmonella Reference Service, Colindale (SRS). Antimicrobial susceptibility testing using the EUCAST disk diffusion method demonstrated resistance to amoxicillin and cefuroxime. Salmonella is intrinsically resistant to clindamycin, and gentamicin is unlikely to be effective in vivo.
Whole-genome sequencing (WGS) showed that the Salmonella Typhimurium isolated from faeces, in 2014, and the infected prosthetic vascular graft, in 2016, differed by only one single nucleotide polymorphism (SNP). The SNP address profile has only been seen in these two isolates out of 5000 other genomes tested at the reference laboratory (Personal communication, T. Dallman, SRS, Colindale). The difference of one SNP between the isolates is well within what would be expected over this time frame. Salmonella Typhimurium mutation rates are estimated at 1-10 SNPs per year [3].
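As a rough plausibility check of that comparison (our own toy calculation, not part of the original report), SNP accumulation can be modelled as a Poisson process using the published 1–10 SNPs-per-year range:

```python
# Probability of observing at most 1 SNP between isolates sampled ~2 years
# apart, for several assumed per-year substitution rates.
from math import exp, factorial

def poisson_pmf(k, lam):
    """P(X = k) for a Poisson random variable with mean lam."""
    return lam ** k * exp(-lam) / factorial(k)

years = 2.0
for rate in (1.0, 5.0, 10.0):            # SNPs per genome per year
    lam = rate * years                    # expected SNPs over the interval
    p_le_1 = poisson_pmf(0, lam) + poisson_pmf(1, lam)
    print(f"rate {rate}/yr: P(<=1 SNP over {years:.0f} y) = {p_le_1:.4f}")
# At the lower end of the published range (1 SNP/yr), a single observed SNP
# is unremarkable; within-host rates during chronic carriage may be lower
# still, which would make the observation even less surprising.
```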
OUTCOME AND FOLLOW-UP
The patient was prescribed a five-day course of intravenous piperacillin-tazobactam, to which the Salmonella Typhimurium isolate tested susceptible. Two days post-operatively, the patient noticed improvement of her pain. WCC reduced to 9.39 × 10⁹ l⁻¹ and CRP reduced to 5 mg l⁻¹. Her opiate requirements fell from 40 mg daily, on admission, to 10 mg daily.
Seven days after removal of the infected prosthetic vascular graft, the patient was discharged home. She had completed five days of intravenous piperacillin-tazobactam, and a decision was taken to cease antimicrobial therapy. The surgical sites were satisfactory in appearance.
Three weeks post-operatively, the patient was seen in the vascular surgical outpatient clinic. She was able to walk into the clinic, and the surgical sites had healed completely. It was noted that Salmonella Typhimurium was not isolated on culture of a faecal specimen collected ten days post-operatively, following cessation of antimicrobial therapy.
Six months post-operatively, the patient remains under follow-up for her peripheral vascular disease, which currently requires no further surgical intervention.
DISCUSSION
This is, to our knowledge, the first reported case of Salmonella gastroenteritis causing chronic PVGI. The results of WGS support our theory that haematogenous spread of Salmonella Typhimurium, either during the episode of acute gastroenteritis or because of subsequent translocation from chronic carriage, led to bacterial 'seeding' of the prosthetic graft. Once infected, the resulting chronic inflammatory process led to the infected prosthetic axillo-bifemoral graft eroding through the skin in both the axillary and inguinal regions.
On reviewing the wider literature, there have been seven reported cases of Salmonella species causing PVGI (see Table 1). In one case, the prosthetic graft was placed in a patient with an external iliac aneurysm that was later felt to have been caused by Salmonella arteritis [4]. In another case, the graft had eroded into the bowel, providing a route of direct spread [5]. In all other cases, the primary source of infection remained unclear [2, 6–9].
This case has clear implications for the management of chronic PVGI. Firstly, it is vital to collect high-quality surgical specimens for microbiological analysis in all patients who present with PVGI. It is not possible to devise an empirical regimen that would cover all the more unusual pathogens. Since June 2015, prospective surveillance at the Freeman Hospital has revealed that Staphylococcus aureus, S. epidermidis and Escherichia coli are the most common causes of PVGI. However, we have also isolated various organisms that would be difficult to account for with an empirical regimen, including coagulase-negative staphylococci, Propionibacterium acnes, Pseudomonas aeruginosa, Corynebacterium striatum and Enterococci.
Secondly, culture of high-quality surgical specimens can inform the choice of antibiotic prophylaxis. Salmonella Typhimurium is intrinsically resistant to several commonly used antimicrobials.
Finally, it may be prudent to enquire about a history of acute gastroenteritis when assessing patients presenting with chronic PVGI, and review previous microbiology results. In this case, no faecal specimens were sent for culture between the episode of gastroenteritis, in 2014, and removal of the infected axilla-bifemoral bypass graft, in 2016. Although 'clearance' stool cultures are not recommended, this case demonstrates that chronic carriage may present a risk to individuals where haematogenous spread can lead to severe infections, such as PVGI. Salmonella Typhimurium was not isolated on culture of a faecal specimen collected post-operatively.
LEARNING POINTS
The underlying anatomy of any prosthetic vascular grafts should be considered in patients presenting with unusual cutaneous abscesses.
A CT scan should be performed before attempting to lance an abscess that might involve an infected prosthetic vascular graft.
It is vital to collect high-quality surgical specimens for microbiological analysis in all patients who present with PVGI.
The choice of empirical antibiotic therapy and surgical prophylaxis should take into account the results of local, prospective surveillance of PVGIs.
Although rare, it is advisable to consider the risk of deep-seated Salmonella PVGI in any patient who presents with a history of Salmonella gastroenteritis, even if this occurred several years ago.
Interactive comment on "Evidence for microbial mediated nitrate cycling within floodplain sediments during groundwater fluctuations"
We have added additional statistical analysis to the current work through the use of a Taylor diagram (Fig. S6). With respect to validation, the model has been tested against several different scenarios related to dilution + denitrification (added into the supplemental, Fig. S7), denitrification alone (Fig. 4 of the main text), increasing OM concentrations (Fig. S4), and varying ratios of electron donors (Fig. S5). The model captures
See for example the extensive work by Paul Brooks, Michelle Baker, Mark Williams
The manuscript under review examines how nitrate is produced and transformed at the capillary fringe during annual fluctuations in the water table. We are familiar with the large body of Paul Brooks' work; however, this work primarily concerns itself with the measurement and modeling of the subnivial nitrogen cycle, as well as examining surface hydrological processes. Much of this work is not necessarily pertinent to the current manuscript. Mark Williams similarly takes a broad approach to examining hydrologically-induced changes in the nitrogen cycle, but, as far as we are aware, does not work around the capillary fringe either. Michelle Baker's work primarily focuses on carbon and nitrogen cycling within riverine biogeochemical hotspots (i.e., riparian areas of river corridors, and in-stream hyporheic zones) rather than around the capillary fringe. While these authors' outstanding work has informed our broader thinking on the terrestrial nitrogen cycle, their body of work is not immediately applicable to the current manuscript, which is why these papers have not been cited. Indeed, there are few studies in the literature that we can find taking a mechanistic approach to understanding the nitrogen cycle at capillary fringes that serve as relevant citations for the current work. This also goes against the reviewer's supposition that our work is not novel. While there are a number of manuscripts examining nitrogen dynamics around the hyporheic zone of streams (e.g., Bohlke et al., Biogeochem., 2009; Zarnetske et al., JGR, 2011) and rivers (e.g., Clilverd et al., Biogeochem., 2008; Hinkle et al., J. Hydro., 2001), and oxygen transformations around the capillary fringe itself (Haberer et al., J. Contamin. Hydrol.), we can, in fact, find few manuscripts that examine the importance of hydrological fluctuations around the capillary fringe with respect to nitrogen cycling. The manuscripts that examine nitrogen cycling around the capillary fringe (e.g., Abit et al., Geoderma, 2008), all of which are referenced in the current manuscript (Pg. 2, Ln. 5–6), do not take a similar mechanistic approach as described here.
-------------------------------------
Where are the microbial data needed to test the main predictions of the model (e.g. Fig. 4)? There are skilled microbial ecologists on this team and working on this site. I didn't find the model results compelling in the absence of microbial data, especially given the poor performance of the model in predicting nitrate at the three (!) depths where it was apparently compared (Fig. 4).
There has been a significant amount of microbial work performed at this site (e.g., Anantharaman et al., Nat. Comm., 2016; Hug et al., ISME, 2015; Jewell et al., ISME, 2016; Wrighton et al., ISME, 2014), all of which is extremely useful for initializing the reaction network for this, and other, models (as pointed out on Pg. 3, Ln 20, and discussed further on Pg. 14). For example, one of the questions we set out to address in the current manuscript concerns the interactions between different heterotrophic and autotrophic metabolisms promoting N-transformations and loss (denitrification vs. anaerobic ammonium oxidation, see Pg. 3, Ln 5). The notion that anammox is important in this environment comes directly from molecular evidence profiling the community within the naturally reduced zones of the floodplain (e.g., Jewell et al., ISME, 2016). These areas, located just below the capillary fringe, have a high abundance of chemolithoautotrophic metabolisms; however, little information exists comparing the importance of different metabolisms to nitrogen loss. Several manuscripts have tackled these questions within marine environments (e.g., Babbin et al., Science, 2014; Koeve & Kahler, Biogeosciences, 2010) using measurements and models, but as far as we are aware, this has not been extended to terrestrial systems. Furthermore, because feedbacks between biotic and abiotic systems are inherently non-linear, and therefore cannot be addressed directly by molecular studies, we believe a mathematical model of interacting microbial guilds informed by these prior studies is a plausible approach to address these interactions.
However, we have yet to find microbial data that are applicable for benchmarking microbial models. Microbial models represent the active portion of the microbial community, and are simplified representations of microbial guilds using several traits and imposed trade-offs. Therefore, commonly collected microbial metrics are not comparable to modeled metrics. For example, measurements of biomass (via chloroform fumigation) account for microbial and fungal biomass and additional labile compounds from nonliving sources (e.g., plant residue), and frequently overestimate biomass. Modeled biomass, on the other hand, represents the products of growth of the metabolisms considered (never the full community).
Molecular markers of microbial activity (e.g., mRNA measurements) show some promise as benchmarks of specific modeled microbial processes, but at this stage require more work to determine the factors that control the regulation of mRNA. Previous work has shown a lack of correlation between the production of mRNA and the activity of the pathway encoded by that mRNA. Post-transcriptional modification pathways play an important role in determining the balance between transcription and translation. More specific incubation experiments around the capillary fringe (for example, the use of random isotope pairing techniques to differentiate anammox from denitrification) would be very useful for parsing out metabolisms of importance, but were beyond the scope of the current study.
Nonetheless, we believe benchmarks for microbial models are an important area to highlight in this manuscript, and have included a section in the discussion that explicitly deals with the benchmarking needs for models of this type.
We disagree, however, with the reviewer's assertion that the model performs poorly in failing to capture the nitrate dynamics. The model does not capture the totality of the nitrate dynamics in the current configuration. This is because the model is being run to examine the extent of biological nitrogen loss from the different depths. We make this point in the materials & methods (Pg. 7, Ln 7–8), the results (e.g., Pg. 10, Ln 8–11) and the discussion. From this perspective, in comparison with the Rayleigh calculations from the isotopic data, the model actually performs reasonably in capturing the nitrate dynamics as catalyzed by different microbiological guilds and as a function of the oxygen dynamics and the organic matter/nitrate concentrations. It is quite possible to configure the model to account for all of the nitrate loss from biological dynamics (as shown in supplemental figure 4), or under variable electron donor ratios (supplemental figure 5); however, the broad conclusions from the isotopic data suggest that this would be incorrect and, again, highlight the utility of using isotope data to benchmark this model. It is possible that this point is not made clear enough in the current text. Therefore we have added additional text to the discussion to emphasize this point. For comparison, we have also run the model to simulate both abiotic (dilution) and biotic pathways. These simulations are given in the supplemental figures and discussed further in the text. Finally, in order to compare how well the model captures the data, we have run statistical tests represented in a Taylor diagram, also included in the supplemental and further discussed in the text.
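For reference, the three statistics a Taylor diagram condenses can be computed as in the following minimal sketch (the arrays are placeholders, not the study's nitrate series):

```python
# Taylor-diagram statistics for a model-vs-observation comparison:
# pattern correlation, normalized standard deviation, centered RMS error.
import numpy as np

def taylor_stats(model, obs):
    model, obs = np.asarray(model, float), np.asarray(obs, float)
    r = np.corrcoef(model, obs)[0, 1]          # pattern correlation
    sigma_ratio = model.std() / obs.std()      # amplitude relative to obs
    crmse = np.sqrt(np.mean(((model - model.mean())
                             - (obs - obs.mean())) ** 2))
    return r, sigma_ratio, crmse

# Synthetic stand-ins for simulated and observed nitrate at one depth:
obs = np.array([0.2, 0.8, 2.5, 4.0, 3.1, 1.0])
model = np.array([0.1, 1.0, 2.0, 3.6, 3.3, 0.7])
r, sigma_ratio, crmse = taylor_stats(model, obs)
print(f"r = {r:.3f}, sigma_m/sigma_o = {sigma_ratio:.3f}, cRMSE = {crmse:.3f}")
```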
-------------------------------------
The microbial simulations come off as entirely speculative given that there are no data presented, as does the speculation as to the importance of anammox vs. canonical denitrification vs. chemolithotrophic processes. Contrary to the conclusion (P17 25), I don't think the authors can make any concrete claims as to the mechanisms driving the patterns observed, especially given that the nitrate isotope fractions are not well constrained for these pathways, and that there is enormous variation in nitrate isotope fractionation during denitrification.
Our conclusions are drawn predominantly from the simulations, and the conditions under which these simulations are performed. With regards to understanding the importance of nitrogen loss via heterotrophic denitrification vs. chemolithoautotrophic anaerobic ammonia oxidation, this question is driven primarily by recent molecular microbiology work at this site showing a relatively high abundance of chemolithoautotrophic metabolisms in the groundwater (Jewell et al., ISME, 2016; Frontiers in Microbiology, 2016), and a high abundance of ammonia-oxidizing archaea (Hug et al., Environ. Micro., 2015) and heterotrophic denitrifiers (Anantharaman et al., Nat. Comm., 2016) at shallower depths. We do point out in the text that the spin-up conditions (i.e., a low water table fostering aerobic conditions) prior to the water table perturbation simulations can select against obligate anaerobes (such as the anammox bacteria), and for facultative aerobes such as the heterotrophic bacteria.
From this perspective, we do not believe that our interpretation of the model simulations is speculative. The development of the model is informed by prior studies at the same site (Pg. 2, Ln 25); the model parameters are taken from literature values of representative organisms (aerobic and anaerobic ammonia oxidizers and facultative heterotrophs, see supplemental tables); the broad conclusions of the model simulations (i.e., the % of biotic N-loss vs. abiotic dilution) are supported by isotopic benchmarks (from Rayleigh fractionation calculations, Pg. 10, Ln 10, and simple mixing calculations, see supplemental material); and the final question, as to the general importance of anammox vs. denitrification to N-loss, is supported by prior ecophysiological data and mechanistic modeling. The discussion also goes into more detail as to the broader conclusions we make from the study (i.e., from Pg. 15, Ln 20 onwards). We have, however, added additional text to make it clear that these conclusions are based primarily on model simulations.
The isotopic data have not been used to attempt to parse between the two different pathways. As with previous studies examining the contributions of anammox vs. denitrification to nitrogen loss (e.g., Babbin et al., Science, 2014; Koeve & Kahler, Biogeosciences, 2010), we've employed a mechanistic model. As with previous models, it is a simplification of real-world conditions, yet captures some of the more important traits related to fitness under fluctuating environments. Hence, the output is theoretical, rather than speculative, yet corresponds to the findings of previous studies attempting to parse out the factors determining nitrogen loss from discrete end members.
We believe that this study therefore supplies suitable impetus for follow-up experimental work based on the model output. Furthermore, modifications to the baseline model presented here (for example, incorporating dynamic energy budgets based on the thermodynamic approach explained in the text) could be used to examine why there is such variability in isotopic fractionation from an ecological and metabolic perspective.
-------------------------------------
The spatial replication of the field data seems inadequate given the heterogeneity of the system under study. Why are no isotope measurements from the vadose zone and shallow soils reported? This seems critical to get at the question of biogeochemical processing of N vs. dilution or mixing that comes up throughout the paper, and the enormous spatial heterogeneity of nitrate isotope compositions that is increasingly documented in the literature. What is the composition of the water that is posited to be diluting the sediment zone of interest? There was almost no discussion of the hydrology of the site and potential source waters, which are critical for getting at this point. To interpret the isotopes, you would need to consider mixing rather than pure dilution unless you could demonstrate that you were mixing nitrate-rich vs. nitrate-free water. This is especially critical in the context of the heterogeneity in buried organic lenses that has been demonstrated at this site.
Nitrate accumulates and dissipates only in the depths currently under investigation (i.e., 2–3 m below surface depth), with little evidence from this study or from previous studies that nitrate accumulates at shallower or deeper depths. Measurements of nitrate in the vadose zone were below detection (Fig. 2); it is also unlikely, given infiltration rates at this site (∼3 cm yr⁻¹, Pg. 14, Ln 2), that nitrate from shallower soils is transported to 2 m and below. This is further supported by recent work at the site adding ∼2500 gallons of deuterium-enriched snow (δD ∼2200 per mil) for the purpose of examining water infiltration into the vadose zone around the well used in the current study. Snowmelt lasted 6 days, and δD rapidly infiltrated to ∼1–1.5 m, with very little deuterium signal seen below 1.5 m. Therefore, the transportation of nitrate from the vadose zone to the capillary fringe was not considered to be of importance in the current study. Similarly, nitrate below the 3 m line has been shown to be very low. Fig. 2 shows nitrate data for 3.14 m below surface depth, the lower bound of the current data set, with nitrate concentrations ranging from 60 to 700 micromolar. Below this, in the background aquifer, nitrate ranges from undetectable up to 80 micromolar, as reported in previous studies (Zachara et al., J. Cont. Hydrol., 2013; Yabusaki et al., ES&T, 2017). This is alluded to in the main text (Pg. 3, Ln 14); however, we have rewritten this statement to make it clearer. Finally, and further emphasizing the nitrate-deficient conditions in the groundwater, a recent NO3 injection experiment injected ∼2 mM of nitrate into the groundwater intending to stimulate chemolithoautotrophic metabolisms (Jewell et al., ISME, 2016; Frontiers in Microbiology, 2016). Prior to the injection, nitrate concentrations ranged from undetectable to ∼70 µM. Post-injection, the nitrate was entirely consumed within the first 1 m downgradient.
In summary, the reason that no isotope measurements were made in the vadose zone or background aquifer was that nitrate was often below the detection limits of the technique. This would also minimize the likelihood of nitrate from outside the depths studied contributing significantly to the observations reported here.
The mM units are correct. Nitrate is measured routinely at this site by ion chromatography according to the approaches reported in the main text (Pg. 4). Data from previous years also show that the large accumulations of nitrate (to mM concentrations) in the unsaturated zone pore water are a recurring phenomenon. An explanation for such high nitrate concentrations is the presence of a naturally reduced zone around this well (as discussed on Pg. 14, Ln 11). Organic matter concentrations are very high in these zones; Janot et al. (ES&T, 2016) recorded organic matter in these regions with OC concentrations as high as 1.7 %. We can therefore use a back-of-the-envelope calculation to estimate nitrogen availability from the OM in these regions. Considering a measured C:N ratio for the relevant depths of 7 (Conrad et al., unpublished) and a bulk density of ∼2 g cm⁻³, OM in these naturally reduced regions could yield 0.004 g-N cm⁻³, or 290 mM of nitrogen. Using a conservative mineralization rate of 2 % per year would therefore yield ∼6 mM of nitrogen.
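That arithmetic can be reproduced directly (our check, not the authors'); note that the quoted figures round the intermediate nitrogen density down to 0.004 g-N cm⁻³, so carrying full precision gives a slightly higher estimate of the same order (∼350 mM, ∼7 mM yr⁻¹). Here mM is taken as mol N per litre of bulk sediment, as in the original estimate:

```python
# Back-of-the-envelope nitrogen budget from the stated inputs.
oc_fraction    = 0.017   # organic carbon, g-C per g of sediment (1.7 %)
bulk_density   = 2.0     # g of sediment per cm^3
cn_ratio       = 7.0     # g-C per g-N in the organic matter
mineralization = 0.02    # fraction of organic N released per year
N_MOLAR_MASS   = 14.0    # g per mol of N

n_density = oc_fraction * bulk_density / cn_ratio   # g-N per cm^3
n_mM = n_density / N_MOLAR_MASS * 1e6               # mmol N per litre of sediment
print(f"{n_density:.4f} g-N cm^-3  ->  {n_mM:.0f} mM total N")
print(f"annual yield at 2 %/yr: {mineralization * n_mM:.1f} mM")
# Prints ~0.0049 g-N cm^-3, ~347 mM, and ~6.9 mM/yr.
```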
This high nitrogen yield therefore makes this site an excellent candidate for studying biological hotspots of activity.
-------------------------------------
The manuscript is riddled with errors. In the title alone there is a grammatical error and a misspelling of one of the author names. I urge the lead author to give the paper a proper proofreading before sending it out for review!
The manuscript has been proofread again. However, I (the lead author) am unable to identify the spelling mistake. Looking at both the file for submission and the file uploaded to the Biogeosciences Discussions website, all authors' names are spelt correctly.
P9 15: what do you mean by "highest (most reduced) value"?
This simply refers to the highest enrichment recorded; however, the phrasing might be confusing and has therefore been reworded.
The message in Fig. 3 is not at all clear as presented. Try putting the same values (15N, e.g.) on a common plot so we can compare the trends among depths over time.
We are unsure as to what is unclear here. The figure shows the enrichment in the 15N/18O of nitrate accompanied by the trajectory in nitrate concentration over time. The left y-axis represents the isotopic composition of nitrate (15N/18O) and is the same axis (from −10 to +20 per mil) across all three plots, while the right y-axis is the concentration of nitrate, from 0 to 8 mM, and, again, is the same axis across all three plots. We are therefore not sure of the value of re-plotting these figures by 15N.

P13 20: Need citation
A citation has been added.

P15 25: "the measured in N2O peak"
This has been reworded.
Middle East respiratory syndrome coronavirus (MERS-CoV) in dromedary camels: are dromedary camels a reservoir for MERS-CoV?
To the Editor:
In a recent issue of Eurosurveillance, we read the article by Nowotny et al. [1]. In this article, the authors concluded, based on phylogenetic analysis and a high Middle East respiratory syndrome coronavirus (MERS-CoV) load in nasal swabs of dromedary camels, that local zoonotic transmission of MERS-CoV from camels may be possible through the respiratory route. We would like to thank the authors for their contribution to the knowledge of MERS-CoV. However, we would like to report a few concerns regarding this study from a methodological point of view [1].
First, the authors suggested that the high viral load of MERS-CoV detected in nasal swabs may facilitate zoonotic transmission through the respiratory route. However, in the study, the data on viral loads in dromedary camels were not described in greater detail and were derived only from testing nasal and conjunctival swabs for the virus. Additionally, the use of upper respiratory tract swabs instead of lower respiratory specimens may not give a complete picture of the infection, because the MERS-CoV load in upper respiratory tract specimens is reported to be lower than in lower respiratory tract specimens [2].
Second, while it is known that MERS-CoV can cause severe disease and even death in humans, and this infection has no prophylaxis or specific treatment [3], the authors did not give any detailed information about the respiratory route, in particular whether droplet or aerosol transmission may occur. This constitutes a limitation of this study not mentioned by the authors.
Last, the contacts and risk of contagion among the 76 dromedary camels from which the samples were taken were not described in detail in the article.
If the mode of transmission is not well known and not understood, clinicians should take care to implement the precautionary principles recommended by the Centers for Disease Control and Prevention (CDC), such as airborne precautions (the use of respirators rather than surgical masks), in addition to standard and contact precautions, to reduce the risks of this infection until scientific certainty is reached [4].
In previous studies, it was reported that MERS-CoV infection may be transmitted via respiratory droplets or direct and indirect contact with an infected person [5,6,7]. In addition, there has been international concern in the medical community about the risk that MERS-CoV has pandemic potential due to aerosol transmission. A recent study demonstrated that there was no evidence of MERS-CoV nasal carriage among Hajj pilgrims [2]. We agree with this study and believe that there is no aerosol transmission of this disease [2]. The results of other work have also shown that MERS-CoV survives in raw camel milk slightly longer than in the milk of other species [8].
Among the Middle East countries with a desert climate, camels are still a major means of transportation and trade. The fact that camels may be a reservoir for MERS-CoV [1], and the possibility that camels could spread MERS-CoV infection, with pandemic risk, to other countries and regions unaffected by this virus, should be taken into consideration. Given the popularity of camel milk consumption and trade in these countries, it would be appropriate, for the reasons mentioned above, to take regulatory measures on the import of camels and camel milk from endemic areas.
One point should not be misunderstood: previously, on the Asian continent, millions of poultry were destroyed because of the pandemic risk of avian influenza A(H5N1) [9]. We do not mean that camels, which are claimed to be a reservoir for the disease and play an important role in supplying the basic needs of people in countries with a desert climate, should be destroyed, but rather that precautionary measures to protect animals and people should be taken.
In conclusion, MERS-CoV is an emerging pathogen with pandemic potential and a high risk of mortality. It is vital to take all possible preventive measures against MERS-CoV infection. Although positive polymerase chain reaction (PCR) results in the Nowotny study [1] showed MERS-CoV in five of 76 camels, an explicit assessment of the epidemiological role of camels has yet to be made to clarify the mechanism of emergence in humans. Further studies are required to better understand the transmission route and risks of this infection.
miR-221 affects multiple cancer pathways by modulating the level of hundreds of messenger RNAs
microRNA miR-221 is frequently over-expressed in a variety of human neoplasms. The aim of this study was to identify new miR-221 gene targets to improve our understanding of the molecular tumor-promoting mechanisms affected by miR-221. Gene expression profiling of miR-221-transfected SNU-398 cells was analyzed by the Sylamer algorithm to verify the enrichment of miR-221 targets among down-modulated genes. This analysis revealed that enforced expression of miR-221 in SNU-398 cells caused the down-regulation of 602 mRNAs carrying sequences homologous to the miR-221 seed sequence within their 3′UTRs. Pathway analysis performed on these genes revealed their prominent involvement in cell proliferation and apoptosis. Activation of the E2F, MYC, NFkB, and β-catenin pathways was experimentally proven. Some of the new miR-221 target genes, including RB1 and WEE1 (cell cycle inhibitors), APAF1 (pro-apoptotic), ANXA1, and CTCF (a transcriptional repressor), were individually validated as miR-221 targets in SNU-398, HepG2, and HEK293 cell lines. By identifying a large set of miR-221 gene targets, this study improves our knowledge of the miR-221 molecular mechanisms involved in tumorigenesis. The modulation of the mRNA level of 602 genes confirms the ability of miR-221 to promote cancer by affecting multiple oncogenic pathways.
INTRODUCTION
miR-221 is among the most frequently up-regulated miRNAs in human cancer. It is over-expressed in a large fraction of glioblastomas and liver, bladder, thyroid, pancreatic, gastric, and prostate carcinomas (Gottardo et al., 2007; Lee et al., 2007; Visone et al., 2007; Fornari et al., 2008; Mercatelli et al., 2008; Pineau et al., 2010; Liu et al., 2012; Quintavalle et al., 2012). In a few cases, non-oncogenic functions of miR-221 were reported. For example, the regulation of c-kit by both miR-221 and miR-222 induced anti-angiogenic effects and reduced the proliferation of erythroleukemic cells (Felli et al., 2005), suggesting that miR-221 effects could also depend on cellular context. Its suggested role in tumorigenesis was confirmed by the finding that miR-221 over-expression correlates with aggressive tumor features, such as the presence of metastasis and multifocal lesions, in hepatocellular carcinoma (HCC) (Gramantieri et al., 2009; Fu et al., 2011). In vitro studies showed that miR-221 caused an increase in cell proliferation rate and invasion capability, while anti-miR-221 induced a decrease in cell growth and promoted apoptosis (Garofalo et al., 2009; Pineau et al., 2010; Zhang et al., 2010b). In vivo studies proved that miR-221 could induce proliferation of tumorigenic murine hepatic progenitor cells (Pineau et al., 2010) and accelerate liver tumor formation in a miR-221 over-expressing transgenic mouse model (Callegari et al., 2012).
These miR-221 targets revealed tumor-promoting mechanisms associated with miR-221 over-expression. However, our knowledge of miR-221 molecular targets is largely incomplete, and the aim of this work was to reveal additional miR-221 targets with a high-throughput approach, in order to identify most of its gene targets and improve our understanding of the multiple mechanisms affected by miR-221.
SYLAMER ANALYSIS AND BIOINFORMATICS
Gene expression analysis of miR-221-transfected SNU-398 cells and Negative Control 2 (NC2, AM17111, Ambion)-treated SNU-398 samples was performed using the Agilent Whole Human Genome Oligo Microarray platform (Agilent Technologies), following the manufacturer's procedures, as previously described. GeneSpring GX 11 software (Agilent Technologies) was used to analyze the results. Data transformation was applied to set all negative raw values to 1.0, followed by quantile normalization. A filter on low gene expression was used to keep only the probes classified as Detected in at least one sample by the software. Array results were submitted to ArrayExpress (http://www.ebi.ac.uk/arrayexpress/, accession number E-MTAB-1531). Genes were ordered according to fold change, from the most down-regulated to the most up-regulated in miR-221-transfected cells, and the ordered gene list was analyzed with the Sylamer algorithm (EMBL-EBI) through the web interface SylArray (http://www.ebi.ac.uk/enright-srv/sylarray) (Bartonicek and Enright, 2010). The SylArray system assigned 3′UTR sequences to each mappable probe and filtered for low-complexity sequences, redundancy, and multiple probe mappings. The Sylamer algorithm was applied to search for enrichment or depletion of miRNA seed sequences in the 3′UTRs ranked according to the gene list provided. The software generated plots representing the hypergeometric statistical significance of each nucleotide word across the whole gene list. According to the Sylamer method, the peak of the 7-mer plot closest to the start of the ranking was chosen as a conservative threshold to select putative miR-221 target genes (van Dongen et al., 2008). Algorithms used to identify predicted miR-221 target genes were TargetScan v. 5.2 (http://www.targetscan.org) (Lewis et al., 2005), MicroCosm v. 5 (http://www.ebi.ac.uk/enright-srv/microcosm/htdocs/targets/v5), and Diana MicroT v. 3 (http://diana.cslab.ece.ntua.gr/microT/) (Maragkakis et al., 2009). For pathway analysis we used GeneSpring GX v. 11 and GeneGo (Nikolsky et al., 2005) software through the functions for finding significant pathways.
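As a concrete illustration of the ranking step that feeds SylArray, the following minimal sketch computes per-gene log2 fold changes from a hypothetical tab-separated expression table and writes the ordered gene list (most down-regulated first). The file name and column names are assumptions for illustration only, not the study's actual files.

```python
import csv

# Hypothetical input: one row per gene, with normalized log2 intensities
# for each miR-221 and negative-control (NC2) replicate.
MIR221_COLS = ["miR221_1", "miR221_2", "miR221_3"]       # assumed names
NC2_COLS = ["NC2_1", "NC2_2", "NC2_3", "NC2_4"]          # assumed names

def mean(values):
    return sum(values) / len(values)

ranked = []
with open("expression.tsv") as handle:                   # assumed file
    for row in csv.DictReader(handle, delimiter="\t"):
        fc = mean([float(row[c]) for c in MIR221_COLS]) \
             - mean([float(row[c]) for c in NC2_COLS])   # log2 fold change
        ranked.append((fc, row["gene"]))

ranked.sort()                                            # most down-regulated first
with open("ranked_genes.txt", "w") as out:
    out.writelines(f"{gene}\n" for _, gene in ranked)
```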
PLASMID VECTORS
pMIF-GFP-miR-221 was prepared by cloning the hsa-miR-221 gene into the pMIF-GFP-Zeo plasmid (System Biosciences), using the NheI restriction site. psiCheck-3′UTR constructs were prepared by cloning portions of the 3′UTRs (containing the miR-221 seed complementary regions) of putative target genes into the psiCheck-2 vector (Promega), downstream of the renilla luciferase gene, using the XhoI and PmeI restriction sites. The primers used to amplify the 3′UTR regions and the lengths of the cloned regions are listed in Table S1. psiCheck-RB1-3′UTR-mutated was obtained by deleting two 6-nucleotide regions, corresponding to the miR-221 complementary regions, in the RB1 3′UTR sequence of psiCheck-RB1-3′UTR. Mutagenesis was performed by the GenScript company. All constructs were verified by sequencing.
Luciferase vectors (psiCheck-based vectors) were transfected at a final concentration of 800 ng/mL. Each transfection was performed in triplicate. For RNA extraction, cells were collected 48 h after transfection and the RNA was extracted following the Trizol protocol (Invitrogen). For luciferase assays, firefly and renilla luciferase activities were measured using the Dual-Luciferase Reporter Assay (Promega) 24 h after transfection. The firefly luciferase activity was used to normalize the renilla luciferase reporter signal.
To experimentally investigate the involvement of miR-221 in cancer processes, we used the Cignal Finder 10-Pathway Reporter Arrays kit (SABiosciences), a commercial reporter array that allows for simultaneous evaluation of 10 cancer-related signaling pathways activation. It is a reverse transfection system that makes use of specific pathway-focused transcription factor-responsive firefly luciferase vectors. Ten thousand cells were seeded in wells containing the responsive and normalization vectors and luciferase activities were measured after 48 h. Data normalization was based on an included vector that expresses the renilla luciferase gene under the control of the strong cytomegalovirus (CMV) promoter.
REAL TIME PCR
RNA purification by Trizol was performed according to the manufacturer's indications (Invitrogen). For mature microRNA quantification we performed TaqMan real-time PCR, using a miR-221 probe (Applied Biosystems). Five nanograms of purified RNA were retro-transcribed using the TaqMan MicroRNA Reverse Transcription kit (Applied Biosystems) and the mature miR-221 MicroRNA Assay (Applied Biosystems, assay ID000524), following the manufacturer's protocol. Real-time quantitative PCR was performed using TaqMan MicroRNA Assays specific for hsa-miR-221 (Applied Biosystems, assay ID000524) and hsa-miR-222 (Applied Biosystems, assay ID002276). The reaction was carried out in a 96-well PCR plate at 95 °C for 10 min followed by 40 cycles of 95 °C for 15 s and 60 °C for 1 min on a Biorad Chromo4 real-time PCR instrument. Each sample was analyzed in triplicate. The level of miRNA was measured in terms of the threshold cycle (Ct), and the relative amount of target was calculated using the 2^−ΔΔCt method. To normalize the expression levels of miR-221, the TaqMan endogenous control RNU6B (Applied Biosystems, assay ID001093) was used. For gene expression analysis we performed real-time EvaGreen PCR. Five hundred nanograms of total RNA were retro-transcribed using random hexamers and oligo dT. Diluted cDNAs (1:5 for ANXA1, 1:10,000 for 18S, and 1:100 for RB1, APAF1, WEE1, CTCF, and GAPDH) were amplified by real-time PCR using Qiagen Taq. The reactions were incubated in a 96-well PCR plate at 95 °C for 15 min followed by 40 cycles of 95 °C for 30 s and 58 °C for 30 s. Each sample was analyzed in triplicate. Fluorescence measurements were completed using a Biorad Chromo4 real-time PCR instrument. The level of each mRNA was measured in terms of the threshold cycle (Ct), and the relative amount of target was calculated using the 2^−ΔΔCt method. Gene expression levels were normalized using either 18S or GAPDH expression.
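The relative-quantification step can be made explicit with a short sketch of the 2^−ΔΔCt calculation; the Ct values below are illustrative placeholders, not data from this study.

```python
# 2^-ΔΔCt relative quantification, as used above for miRNA and mRNA levels.
def relative_expression(ct_target_sample, ct_ref_sample,
                        ct_target_control, ct_ref_control):
    """Fold change of a target vs. a control condition, normalized to a
    reference (e.g., RNU6B for miRNAs, 18S or GAPDH for mRNAs)."""
    delta_delta_ct = ((ct_target_sample - ct_ref_sample)
                      - (ct_target_control - ct_ref_control))
    return 2.0 ** (-delta_delta_ct)

# Hypothetical Cts for miR-221 in transfected vs. NC2-treated cells:
fold = relative_expression(22.1, 18.0, 25.5, 18.1)
print(f"fold change = {fold:.1f}")  # ~10-fold, of the order reported above
```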
STATISTICAL ANALYSIS
The significance of differential expression between groups in luciferase assays and quantitative PCRs was assessed by Student's t-test.
miR-221 CAN DOWN-MODULATE HUNDREDS OF GENE TARGETS BY REDUCING THEIR RNA LEVELS
To identify genes controlled by miR-221, we compared the gene expression profiles of SNU-398 cells transfected with miR-221 precursor vs. SNU-398 cells transfected with negative control (NC2, Ambion). This cell line was chosen because it expresses a low level of miR-221. In our experimental setting, miR-221 transient transfection induced a ≈10-fold increase in microRNA expression. No expression changes were detected for the related miR-222 (Figure S1). Three independent experiments of miR-221 transfection vs. four independent experiments of NC2 transfection were compared. Following quantile normalization and exclusion of undetectable or compromised probes, 26,586 DETECTED probes, according to the Agilent Feature Extraction analysis software, were used. The average expression level of these probes was compared between the miR-221 and NC experimental groups, and each probe was then ordered according to fold change, from the most down-regulated to the most up-regulated in the miR-221-transfected group. The Sylamer algorithm, through the web-based interface SylArray, was used to analyze the ordered list of mRNAs. First, the algorithm filters out genes with low-complexity 3′UTRs and selects one probe for each of the maintained genes. Following this selection, Sylamer analysis was applied to a reduced list comprising 11,971 genes (Table S2). A highly significant (p-value = 1 × 10^−15) enrichment for miR-221 target sequences was detected in the 3′UTRs of many of the down-regulated genes (Figure 1).
According to the Sylamer indications, the cut-off was chosen in correspondence with the peak closest to the start of the ranking in the 7-mer plot, which led to the selection of the 1800 most down-regulated mRNAs in miR-221-transfected cells (FC < −1.15) (Figure 1). Within this cut-off, the algorithm detected the presence of at least one seed complementary region for miR-221 in 602 genes (33.4%) (Table S3). All included the hexamer TGTAGC (corresponding to nts 2-7 of the miR-221 seed); 238 included the heptamer TGTAGCA (nts 2-8 of the seed); 310 included the heptamer ATGTAGC (nts 1-7 of the seed); and 120 included the octamer ATGTAGCA (nts 1-8 of the seed).
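For illustration, a scan for these seed-match words over a set of 3′UTR sequences can be written compactly; the sequences below are hypothetical placeholders, not the probe annotations used in the analysis.

```python
# Count miR-221 seed-complementary words (listed in the text) in 3'UTRs.
SEED_WORDS = {
    "hexamer (nts 2-7)": "TGTAGC",
    "heptamer (nts 2-8)": "TGTAGCA",
    "heptamer (nts 1-7)": "ATGTAGC",
    "octamer (nts 1-8)": "ATGTAGCA",
}

utrs = {  # hypothetical example sequences
    "GENE_A": "AAATGTAGCATTTTGTAGCCC",
    "GENE_B": "CCCCGGGTTTAAACCC",
}

for gene, seq in utrs.items():
    hits = {label: seq.count(word) for label, word in SEED_WORDS.items()}
    if hits["hexamer (nts 2-7)"] > 0:       # at least one seed match
        print(gene, hits)
```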
To assess the value of these results, we verified the presence of validated targets within these lists. In a list of 35 published targets of miR-221 (http://www.genego.com/, August 2012), 10 (ARHI, ESR1, SLM1, KIT, PIK3KR1, SEMA6D, SLC4A4, TNF, FOG2, ICAM) could not be evaluated because their expression was undetectable in SNU-398 cells. Among the remaining 25 genes, 14 (PUMA, BMF, CDKN1B, CDKN1C, GARNL1, MDM2, THRB, APAF1, TIMP3, TRPS1, ZADH2, RP42, DVL2, and Connexin43) were identified by Sylamer within the list of the 1800 most down-regulated genes; five (SEC62, FOXO3, Hox-B6, SIP2, and PTPRmu), although still down-regulated, were outside the selected cut-off, and six were found slightly up-regulated (Dicer, BIM, NLK, PTEN, REDD1, and Ets1) (Table S4). We cannot exclude the possibility that down-regulation of these latter genes could be detectable at the protein but not the mRNA level. In any case, these results indicate that a significant portion (56%) of the published validated gene targets could have been identified through this approach and suggest that many of the additional targets yet to be validated are likely present within this list. The published gene targets detected by Sylamer contained at least 1 or 2 complementary sites for the 7-mer seed within their 3′UTRs. As expected, with the exception of RALGAPA1 (or GARNL1), THRB, and Connexin43, the remaining 10 proven gene targets were predicted by at least one of the online programs MicroCosm, TargetScan, or Diana microT, indicating that presently known gene targets were largely identified through an initial scanning using available online predictions.
We intersected the 602 genes that emerged from the Sylamer analysis with the genes predicted by three online algorithms (MicroCosm, TargetScan, Diana microT). Overall, 125 of the genes identified by Sylamer (19.9%) were also predicted by online algorithms (Table S5). This set of 125 genes is therefore not only predicted by online algorithms but also emerged in an experimental setting based on microarray expression analyses, adding a new level of confidence to these genes as real miR-221 targets.
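Expressed with set operations, this intersection step looks as follows; the gene symbols are placeholders rather than the actual lists of Tables S3 and S5.

```python
# Intersect the Sylamer-derived list with online target predictions.
sylamer_genes = {"RB1", "WEE1", "APAF1", "CTCF", "ANXA1", "GENE_X"}
targetscan = {"RB1", "APAF1", "GENE_Y"}
microcosm = {"WEE1", "RB1"}
diana_microt = {"CTCF", "RB1", "GENE_Z"}

predicted_by_any = targetscan | microcosm | diana_microt
high_confidence = sylamer_genes & predicted_by_any  # observed AND predicted
print(sorted(high_confidence))
```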
miR-221 INDUCED PRO-PROLIFERATIVE PATHWAYS INVOLVED IN CANCER
We investigated the biological processes associated with the genes that emerged from Sylamer analysis: the 602 genes with at least one site matching the seed sequence of miR-221 were analysed for detecting their association with molecular pathways through the use of Genespring GX 11 (Agilent Technologies) and GeneGO (Thomson Reuters) programs.
Both methods provide a rapid categorization of large lists of genes into functionally related groups of genes. At the p-value cutoff of 0.05, Genespring and GeneGO found several pathways that were enriched in genes targeted by miR-221 (Tables 1, S6). Many of these genes were related to cell cycle regulation and apoptosis.
To directly investigate the involvement of miR-221 in cancer processes, we used Cignal Finder 10-Pathway Reporter Arrays kit (SABiosciences), a commercial reporter array that allows for simultaneous evaluation of 10 cancer-related signaling pathways activation levels in cells, using specific pathway-focused transcription factor-responsive firefly luciferase constructs. This kit was used to evaluate the ability of miR-221 to induce cancer-related pathways in the SNU-398/miR221 clone 2 cells, engineered to stably express increased levels of miR-221 ( Figure S1). We found that four pathways, Myc/Max, NFkB, Wnt/β-catenin, and RB-E2F, were significantly induced in miR-221 expressing cells, compared to the original cells (p-value <0.05) (Figure 2), thereby confirming miR-221 involvement in cellular processes that can induce proliferation and suppress apoptosis.
VALIDATION OF NOVEL CANCER-ASSOCIATED TARGETS OF miR-221
From the list of genes identified by Sylamer and the list of pathways significantly affected by miR-221, we focused our attention on genes involved in cell proliferation and apoptosis regulation, in order to individually validate them as miR-221 targets. They included the cell cycle regulators retinoblastoma 1 (RB1) and WEE1, the pro-apoptotic gene Apoptotic Peptidase Activating Factor 1 (APAF1), the transcriptional repressor CCCTC-binding factor (CTCF), and Annexin A1 (ANXA1). In addition, we also investigated Fas Ligand (FASLG), which contains a miR-221 target region in its 3′UTR but was not within the Sylamer-derived list of 602 genes. The genes studied are listed in Table 2 and in Figure S2.
To validate potential miR-221 target genes, we first performed luciferase assays. We cloned a portion of each 3′UTR containing miR-221 target sequences into a psiCheck-2 reporter vector, downstream of the luciferase reporter gene. We assayed the luciferase activity in the presence or absence of added miR-221 mimics in three different cell lines: HEK-293 (embryonic kidney-derived cells) and two hepatocarcinoma-derived cell lines (SNU-398 and HepG2). The reporter vector constructs were transfected into cells together with miR-221 precursor or negative control (NC2). We found that miR-221 induced a significant decrease in the luciferase activity of all the vectors containing 3′UTRs of genes identified through the Sylamer approach. Instead, no change in luciferase activity could be detected for the psiCheck-FASLG 3′UTR vector (Figure 3A). In the case of RB1, to confirm that the decrease in luciferase activity was specifically linked to the miR-221 complementary regions present in the 3′UTR, we prepared a mutant form of psiCheck-RB1 3′UTR (psiCheck-RB1 3′UTR mutated) that lacks the miR-221 target sequences within the RB1 3′UTR. Mutation of these sites made the vector unresponsive to the miR-221 inhibitory effect (Figure S3).
DISCUSSION
We have added a number of novel gene targets to those already identified for miR-221. In this study, we did not use the traditional "in silico" prediction followed by individual validation of the most interesting candidates. In fact, through that approach, various important miR-221 gene targets had already been identified (Galardi et al., 2007; le Sage et al., 2007; Fornari et al., 2008; Medina et al., 2008; Garofalo et al., 2009; Gramantieri et al., 2009; Zhang et al., 2010a,b; Quintavalle et al., 2012) (see also Table S4). Here, we used an approach based on Sylamer analysis of gene expression profiling. Sylamer couples experimental gene expression results to the search for microRNA target sequences in the 3′UTRs of genes. By using this approach, we could identify 602 miR-221 gene targets, among which 125 were also predicted by at least one prediction algorithm. For this latter group of genes, our experimental results add a new level of confidence to their inclusion among the true targets of miR-221. The enforced expression of miR-221 did not alter the level of miR-222. Hence, the targets reported by this study may actually be specifically regulated just by miR-221. However, because they share the same seed sequence, it is possible that many of the discovered genes could potentially be targets of miR-222 too. More specific experiments would be required to formally prove this point.

To achieve these results, in this study we did not employ the traditional t-test analysis, which is used to select the genes with a significantly different expression level between two groups of samples. In fact, this approach would have been too stringent, and many real targets would have been excluded. For example, two known targets, CDKN1B and BMF (Gramantieri et al., 2009), were among the down-modulated genes, but their p-values were not significant (Table S2). Analysis based on Sylamer did not produce this bias. Even slight, apparently non-significant effects on mRNA modulation could be picked up by Sylamer if the gene is present in the context of a group of genes that share homology with the same miRNA seed. It should also be considered that this assay takes into account only miRNA effects on the stability of target mRNAs, without considering effects on mRNA translation, which are often predominant in mammalian cells.
To validate some of the results, we also applied the traditional approach to a few of them, RB1, APAF1, WEE1, CTCF, and ANXA1: the assessment of mRNA and protein levels following miRNA or anti-miRNA enforced expression, together with luciferase assays, was used to prove direct interaction with the predicted target sites. In addition, for RB1, mutation of the putative miR-221 target site was shown to abolish the miR-221 regulation. Notably, we also identified Anxa1 as strongly down-regulated in a proteomic screening performed in SNU-398 cells transfected with miR-221 (Corrales, data not shown), further confirming Anxa1 as a target of miR-221 regulation. APAF1 was recently independently demonstrated as a miR-221 target in lung cancer-derived cell lines (Quintavalle et al., 2012), thereby giving support to our finding. Conversely, FAS ligand, a gene that contains a miR-221 targeting site within its 3′UTR but was not present among the down-regulated mRNAs detected in this study, was not validated as a miR-221 target by luciferase assay. In general, all the individually tested genes were confirmed to be targets of miR-221, suggesting that most, if not all, of the 602 genes identified through expression profiling and Sylamer analysis may represent truly novel gene targets modulated by miR-221. Taken together, these findings show that this approach represents a feasible and effective method for the identification of large sets of valid miRNA targets.
Pathway analysis of the 602 putative miR-221 target genes revealed that these genes are involved in cellular processes related to cell cycle regulation and apoptosis. Experimental tests showed that various pathways, including WNT/β-catenin, E2F/RB cell cycle, MYC, and NFkB, may be promoted by miR-221 over-expression.
Previous experiments demonstrated that miR-221 is able to induce tumor cell proliferation, both in vitro (Galardi et al., 2007; le Sage et al., 2007; Fornari et al., 2008; Gramantieri et al., 2009) and in in vivo models (Pineau et al., 2010; Callegari et al., 2012). Among these pathways, inhibition of the RB1 protein may explain the activation of the E2F pathway and supports the important role of miR-221 in cell cycle progression. Indeed, it was previously shown that miR-221 can repress the CDK inhibitors CDKN1B and CDKN1C (both confirmed in this study), and RB1 represents a third negative regulator of the cell cycle that appears to be controlled by miR-221. This finding supports the observation that miR-221 could induce a significant increase of Hep3B HCC cells in S-phase and a decrease of cells in G1-phase, and suggests that impairing cell cycle control mechanisms is one of the main effects of miR-221 over-expression in human cancer. The present work has led to the discovery of hundreds of new miR-221 target genes and provides an example of the multiple molecular functions that a single deregulated miRNA can affect. This is relevant for the establishment of miRNA-based therapeutics: multiple pathways could be simultaneously affected when miRNA or anti-miRNA molecules are given as therapeutics. The tumor-promoting activity of miR-221 has recently been confirmed and supported by in vivo models showing the anti-tumor effect achieved by anti-miR-221 oligonucleotides: anti-miR-221 could significantly inhibit tumor cell proliferation in human HCC xenografts (Park et al., 2011) as well as reduce the number and size of liver tumors in a transgenic mouse model (Callegari et al., 2012). The present results add new information about the molecular pathways controlled by miR-221, thus contributing to explaining how this microRNA promotes tumorigenesis and how to better assess the effect of its inhibition.
This work, by showing that the Sylamer algorithm could reveal the enrichment for a miRNA seed sequence among a group of down-regulated mRNAs, represents a general approach that could be applied to any miRNA for the experimental identification of a wide range of gene targets. In the case of miR-221, the list of modulated mRNAs was used to improve our understanding of its role in tumor pathways. However, these results may potentially be useful for defining the molecular mechanisms that are influenced in any other physiological or pathological condition in which miR-221 could be eventually implicated.
ACKNOWLEDGMENTS
This work was supported by funds to Massimo Negrini from Associazione Italiana per la Ricerca sul Cancro (AIRC), the Italian Ministry of University and Research (FIRB 2011) and the University of Ferrara.
Improvement in Sensing Performance of H2O2 Biosensor Electrodes through Modification of Anatase TiO2 Nanorods and Pretreatment of Electrochemical Reduction
Electrochemical biosensors are essential health monitors that aid in the detection and diagnosis of diseases. In this research, anatase titanium dioxide (TiO2) nanorods (TNR-A) were synthesized on a titanium (Ti) substrate in three stages, namely, hydrothermal, alkali, and heat treatments, and utilized as a modified electrode (TNR-A/Ti). A pretreatment of electrochemical reduction was adopted to increase the amount of Ti3+ in the TiO2 nanorod surfaces of the electrodes, thus improving electron transfer. The electrodes were characterized by X-ray diffractometry (XRD), field-emission scanning electron microscopy (FESEM), and X-ray photoelectron spectroscopy (XPS). The electrodes after pretreatment (Ti3+-TNR-A/Ti) showed a better electrochemical response to hydrogen peroxide (H2O2), indicating that the pretreatment improves the performance of the electrodes. The assembled biosensor electrode (Nafion/HRP/Ti3+-TNR-A/Ti) exhibited a sensitivity of 6096.4 μA·mM−1·cm−2, a detection limit of 0.008 μM, a linear range of 0.04-700 μM, and an apparent Michaelis-Menten constant K_M^app of 0.027 mM, performance that compares favourably with previous studies of TiO2-based or similarly nanostructured modified biosensor electrodes. This research could provide a potentially competent method for modifying electrodes for high-performance amperometric biosensors.
Introduction
Electrochemical biosensors are one of the essential health monitors used to help detect and diagnose diseases, among which hydrogen peroxide (H2O2) biosensors are deemed important. (1-4) This is because H2O2 is the main component of reactive oxygen species (ROS) in living organisms, and increasing levels of H2O2 in cells could indicate oxidative stress and cellular damage. (5) This damage could result in many diseases such as cancer, neurodegenerative disorders, diabetes, and atherosclerosis. (6) Thus, H2O2 biosensors with high accuracy, high sensitivity, good selectivity, and good anti-interference ability play vital roles in detecting and monitoring the H2O2 concentration under these conditions.
To increase sensing capacity, enzymes are utilized. Enzyme adsorption on electrodes and direct electron transfer between them have been recognized as key factors that affect biosensor performance. (7) Horseradish peroxidase (HRP) is one such enzyme, belonging to the superfamily of peroxidases and heme-containing glycoproteins. (8) Therefore, HRP is often used to determine H2O2. (9) If a mediator with nanostructures is built on the electrodes, it can increase the surface area and facilitate more enzyme immobilization to improve biosensor performance. (9,10) Nanostructured TiO2 has a strong protein adsorption ability and great biocompatibility. (11) Moreover, TiO2, with an isoelectric point of 8.9, is very appropriate for HRP enzyme immobilization. (12) In our previous work, TiO2 nanodot films and rutile TiO2 nanorods were reported as mediators to construct H2O2 biosensor electrodes and showed good performance. (13,14) However, nanostructured anatase TiO2 was reported to retain the biological activity of the enzyme well, and it can be used to load an increased enzyme concentration. (15) The electrical conductance of TiO2 is closely related to the Ti3+ amount. If there are more Ti3+ ions in the nanostructured TiO2 surface, electron transfer within the electrode should be favoured. An electrochemical reduction, or self-doping, approach was reported to be effective for producing Ti3+ ions on a TiO2 surface and was demonstrated to improve the performance of TiO2-nanotube-based biosensor electrodes. (16,17) The main problem with H2O2 biosensors is their dependence on the dissolved oxygen concentration. This problem can be overcome by utilizing 'mediators' that transfer the electrons directly to the electrode, bypassing the reduction of the oxygen cosubstrate. (13) The advantage of these biosensors is that they react rapidly with the reduced form of the enzymes. Moreover, they are sufficiently soluble, in both oxidized and reduced forms, to be able to rapidly diffuse between the electrode surface and the active site of the enzyme. (13,18) The purpose of this work is to develop a highly sensitive electrode by intensifying direct electron transfer between the electrode substrate and the functional enzyme through TiO2 nanorod modification.
In this work, anatase TiO 2 nanorods on Ti substrates were prepared via three stages, hydrothermal, alkali, and heat treatments, following a pretreatment of electrochemical reduction. The modified electrodes were then characterized and their electrochemical behaviour and performance were measured. The effect of pretreatment on the performance of the electrodes was also evaluated.
Materials and reagents
Titanium (Ti) foils with a purity of 99.99% and a thickness of 0.1 mm were used as the conductive material for the construction of the anatase TiO2 nanorod biosensor electrodes. Substrates with dimensions of 1 × 2 cm2 (length × width) were cleaned in a mixture of ethanol, distilled water, and acetone at a ratio of 1:1:1. HRP (250 U·mg−1, Aladdin Chemistry Co., Ltd.) was used. Nafion (5 wt%) was purchased from Sigma-Aldrich. All the chemicals used in the experiments were of analytical reagent grade and were purchased from Sinopharm Chemical Reagent Co., Ltd. The 0.1 M phosphate buffer solution (PBS) used in the experiments was prepared from NaH2PO4 (AR, >99.0%), Na2HPO4 (AR, >99.0%), NaCl (AR, >99.5%), and H2O. HCl (37 wt%) and NaOH (AR, >96.0%) were used to adjust the pH of the PBS, which was prepared at various pH values. The PBS was deoxygenated by bubbling pure N2 gas for 30 min prior to use.
Electrode construction and modification
Anatase TiO2 nanorods were obtained through three steps. The first step was a hydrothermal process, described as follows. The hydrothermal solution was prepared by mixing 0.45 g of picric acid (AR, >99.8%), 15 mL of ethanol, 60 mL of H2O, 40 mL of HCl, and 220 mL of titanium tetrabutoxide (TBOT, CP, >98%). After stirring the chemicals sufficiently, the Ti substrate and the mixture were placed in the Teflon vessel of a hydrothermal autoclave and then into an oven at 160 °C for 4 h. After the hydrothermal treatment, the autoclave vessel was taken out and allowed to cool to room temperature. Rutile TiO2 nanorod films were obtained by rinsing with deionized water and ethanol. In the second step, the rutile TiO2 nanorod films were hydrothermally alkali-treated in a 100 mL Teflon vessel with a solution of hydroxides (KOH/NaOH = 1:1) at 200 °C for 2 h, then washed with deionized water and soaked in 0.1 M HCl for 4 h. In the third step, the acid-treated films were heat-treated at 500 °C for 2 h, after which anatase TiO2 nanorod films on Ti substrates (TNR-A/Ti) were obtained. Following the literature, (16) the electrochemical reduction was performed by applying −1.5 V to the electrodes in 1 M (NH4)2SO4 (AR, >99.0%) to obtain the Ti3+-TNR-A/Ti electrode. Field-emission scanning electron microscopy (FESEM; Hitachi, S-4800) was performed to observe the morphology of the anatase TiO2 nanorods, and X-ray diffractometry (XRD; PANalytical, X'pert PRO) was carried out to analyse the phase and crystallinity of the TiO2 nanostructure films. X-ray photoelectron spectroscopy (XPS; ESCALAB 250Xi) was performed to analyse how the electrochemical reduction pretreatment changed the surface Ti valence. The anatase TiO2 nanorod biosensor electrodes were assembled via the same procedure using the same enzyme and protective membrane. Biosensor electrode construction started by sealing the electrodes with epoxy resin except for an area of 2.5 × 2.5 mm2, which was left as the measuring area. The physical adsorption method was used to immobilize the HRP enzyme on the electrode surface: 10 µL of HRP solution was dropped onto the electrode surface and left to dry at room temperature. The HRP solution was prepared by dissolving HRP in 0.01 M PBS (pH = 7.4 (13,19)) to obtain a 0.01 g·mL−1 solution. Nafion has routinely been used in biosensors as an immobilisation matrix for the enzyme, which helps to maintain the stability of the biosensor. (20) Hence, as the final step, 4 µL of a 0.5 wt% Nafion solution was dropped onto the biosensor surface to protect the enzyme and make the biosensor biocompatible, and then left to dry at room temperature (Fig. 1). The Nafion/HRP/Ti3+-TNR-A/Ti and Nafion/HRP/TNR-A/Ti biosensor electrodes were washed after storage at 4 °C for 1 d. (9) When not in use, the modified anatase TiO2 nanorod biosensor electrodes were stored at 4 °C.
Measurement of performance characteristics of the biosensor electrodes
An electrochemical workstation (CHI 660D, from Shanghai Instrument Co., Ltd.) was used to characterize the modified biosensor electrodes. Pt foil and a saturated calomel electrode (SCE) served as the counter and reference electrodes, respectively. The modified biosensor electrodes were used as the working electrode. The cyclic voltammetry technique was used to test the modified electrodes in 20 mL of PBS (0.1 M, at 25 °C, purged with pure nitrogen for 30 min to remove oxygen) at the optimum pH. Through this technique, we investigated the electrochemical behaviour of the modified electrodes and their electrocatalytic activity towards the redox reaction of H2O2. The effect of scan rate on the electrochemical behaviour of the electrodes was also studied by cyclic voltammetry. The amperometric technique was carried out on the biosensors in order to study the performance characteristics of the modified biosensor electrodes, such as sensitivity, limit of detection (LOD), linearity, selectivity, and stability, by applying an optimum voltage. The optimum applied voltage was selected via the amperometric technique by applying voltages in the range between −0.3 and −0.9 V; the voltage with the highest current response was selected as the optimum applied voltage. The optimum pH of the PBS was selected based on the best performance of the modified electrodes.
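To make the amperometric data reduction concrete, the sketch below fits a calibration line to hypothetical steady-state currents and applies the common 3σ/slope convention for the LOD. The concentrations, currents, electrode area, and baseline noise are all illustrative assumptions, not this paper's measurements.

```python
import numpy as np

conc_mM = np.array([0.0001, 0.001, 0.01, 0.1, 0.5])    # H2O2 additions, mM
current_uA = np.array([0.04, 0.38, 3.9, 39.0, 195.0])  # steady-state, µA
area_cm2 = 0.25 * 0.25                                 # 2.5 x 2.5 mm window

slope, intercept = np.polyfit(conc_mM, current_uA, 1)  # µA per mM
sensitivity = slope / area_cm2                         # µA·mM^-1·cm^-2

baseline_sd_uA = 0.0001                                # assumed noise level
lod_mM = 3 * baseline_sd_uA / slope                    # 3-sigma criterion
print(f"sensitivity = {sensitivity:.0f} µA·mM^-1·cm^-2, "
      f"LOD = {lod_mM * 1e3:.4f} µM")
```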
Characterization of anatase TiO 2 nanorod modified electrodes
After the alkali, acidic, and heat treatments of the hydrothermally grown rutile TiO2 nanorods, the XRD pattern (Fig. 2) of the nanorods on the Ti substrate fits well with the anatase TiO2 phase (JCPDS No. 21-1272). The formation of the anatase TiO2 phase starts with the breaking of Ti-O-Ti building units in rutile TiO2 to form Ti-O-Na (Ti-O-K) during the alkali treatment. After the Na+ and K+ ions are exchanged with H+ ions in the acid wash, Ti-OH bonds are formed. The resulting hydrogen titanate is finally transformed to anatase TiO2 after heat treatment at 500 °C (Fig. 3).
Effect of electrochemical reduction on the electrodes
After the electrochemical reduction treatment was performed, the Ti3+ contribution in the XPS spectra (Fig. 4) increased, and the calculated Ti3+ molar percentage on the surface increased from 19 to 40.8%. This indicates that the electrochemical reduction treatment is indeed an effective surface reduction procedure.
Electrochemical behaviours
When a biosensor electrode was assembled, its cyclic voltammograms (CVs) [Fig. 5(a)] showed that the electrochemical activity of the electrode (Nafion/HRP/Ti3+-TNR-A/Ti) was enhanced after the pretreatment, and the biosensor electrode showed a higher oxidation peak current when 50 µM H2O2 was added, as observed in the CV curve [Fig. 5(b)].
The CVs [Fig. 6(a)] of the Nafion/HRP/Ti3+-TNR-A/Ti electrode at different scan rates showed that the current response increased proportionally with the scan rate, and the reduction peak current (Ipc) was linear as a function of the scan rate [Fig. 6(b)], implying that the behaviour followed a surface-controlled process with direct electron transfer. (21,22) Hence, the improvement in the electrochemical response of the biosensor electrode can be attributed to enhanced direct electron transfer within the electrode brought about by the electrochemical reduction pretreatment.
Sensing performance of the biosensor electrodes
After optimization of the applied voltage [Fig. 7(a)] and the pH of the PBS [Fig. 7(b)] for the electrode with the highest response towards 20 µM H2O2, the amperometric response of the Nafion/HRP/Ti3+-TNR-A/Ti electrode to the successive addition of H2O2 was measured under an applied voltage of −0.55 V in 0.1 M PBS at pH 6.5.
The amperometric technique was used to test the sensing performance of the biosensor electrodes (Fig. 8); the sensitivity of Nafion/HRP/Ti3+-TNR-A/Ti was 6096.4 µA·mM−1·cm−2 (curve i). The limit of detection (LOD) was found to be 0.008 µM, the linear range extended from 0.04 to 700 µM with a correlation coefficient R = 0.999 (n = 28) (inset), and the time required to reach 95% of the steady-state current was less than 3 s. Compared with curve ii of Fig. 8, the electrode with pretreatment showed a 1.15-fold increase in sensitivity. Table 1 shows a comparison of the TiO2 nanorod electrode before and after the electrochemical reduction pretreatment with other previously reported electrodes. This indicates that both the present nanostructure and the pretreatment provide an effective way of enhancing the sensing performance. Moreover, the apparent Michaelis-Menten constant K_M^app reflects the enzyme affinity. (27) The K_M^app of the electrodes before and after the electrochemical reduction pretreatment was calculated to be 0.029 and 0.027 mM, respectively. The decrease after the reduction pretreatment indicates that the enzyme achieves a higher catalytic efficiency at low H2O2 concentrations, owing to the higher affinity of the enzyme with the mediator.
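The apparent Michaelis-Menten constant quoted above is commonly extracted from such amperometric calibrations via the electrochemical Lineweaver-Burk (Kamin-Wilson) form, 1/Iss = 1/Imax + (K_M^app/Imax)(1/C). The sketch below performs this fit on hypothetical saturation data; it is not a reanalysis of the paper's measurements.

```python
import numpy as np

conc_mM = np.array([0.005, 0.01, 0.02, 0.05, 0.1, 0.5])   # substrate, mM
i_ss_uA = np.array([12.0, 21.0, 34.0, 55.0, 70.0, 90.0])  # steady-state, µA

# Linear fit of 1/Iss against 1/C: slope = K_M^app/Imax, intercept = 1/Imax.
slope, intercept = np.polyfit(1.0 / conc_mM, 1.0 / i_ss_uA, 1)
i_max = 1.0 / intercept
k_m_app = slope * i_max
print(f"Imax = {i_max:.1f} µA, K_M^app = {k_m_app:.3f} mM")
```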
Selectivity and stability of Nafion/HRP/Ti 3+ -TNR-A/Ti biosensor electrode
Selectivity, i.e. anti-interference against other species, is very important for a biosensor electrode. The amperometric technique was used to study the selectivity. The electrode was examined in the presence of 10 µM H2O2, where the current response of the electrode was clearly rapid and strong. In contrast, when 100 µM each of uric acid (UA), ascorbic acid (AA), and glucose was successively injected into the PBS, no notable current response was observed, as indicated in Fig. 9. However, when 10 µM H2O2 was again injected into the same supporting electrolyte solution, an immediate current response was observed (Fig. 9). These results suggest that the biosensor electrode had good anti-interference ability and high selectivity. Thus, it could be used for the determination of H2O2 in real samples.
The amperometric technique was also used to assess stability. About 96% of the initial current response to H2O2 remained after the electrode was stored for 12 d at 4 °C, and 93% remained after the electrode was stored for 28 d (Fig. 10). This indicates that the Nafion/HRP/Ti3+-TNR-A/Ti electrode behaved as a biosensor in a manner acceptable for applications.
Conclusions
Anatase TiO2 nanorods for modifying the electrode surface and an electrochemical reduction pretreatment for increasing the amount of Ti3+ on the surfaces of the nanorods were adopted to improve the sensing performance of the biosensor electrodes. The nanostructure gave a high sensitivity, and the pretreatment resulted in a further improvement in sensing performance. The Nafion/HRP/Ti3+-TNR-A/Ti electrode showed a sensitivity as high as 6096.4 µA·mM−1·cm−2. The nanostructure could increase the affinity of the enzyme, and the pretreatment could increase the direct electron transfer within the electrodes. The present approach is a facile and effective way to improve amperometric biosensor performance.
Coulomb corrected eikonal description of the breakup of halo nuclei
The eikonal description of breakup reactions diverges because of the Coulomb interaction between the projectile and the target. This divergence is due to the adiabatic, or sudden, approximation usually made, which is incompatible with the infinite range of the Coulomb interaction. A correction for this divergence is analysed by comparison with the Dynamical Eikonal Approximation, which is derived without the adiabatic approximation. The correction consists in replacing the first-order term of the eikonal Coulomb phase by the first-order of the perturbation theory. This allows taking into account both nuclear and Coulomb interactions on the same footing within the computationally efficient eikonal model. Excellent results are found for the dissociation of 11Be on lead at 69 MeV/nucleon. This Coulomb Corrected Eikonal approximation provides a competitive alternative to more elaborate reaction models for investigating breakup of three-body projectiles at intermediate and high energies.
I. INTRODUCTION
Halo nuclei are among the most peculiar quantum structures [1,2,3]. These light neutron-rich nuclei exhibit a very large matter radius when compared to their isobars. This extended matter distribution is due to the weak binding of one or two valence neutrons. Thanks to their low separation energy, these neutrons tunnel far inside the classically forbidden region and have a high probability of presence at a large distance from the other nucleons. From a simple point of view, they can be seen as very clusterized systems: a core that contains most of the nucleons and resembles a usual nucleus, to which one or two neutrons are loosely bound, forming a sort of halo around the core [4]. The 11Be, 15C, and 19C isotopes are examples of one-neutron halo nuclei. Examples of two-neutron halo nuclei are 6He, 11Li, and 14Be. In addition to their two-neutron halo, these nuclei also exhibit the Borromean property: the three-body system is bound although none of its two-body subsystems is [5].
Since their discovery in the mid 80s [6], these nuclei have been the focus of many experimental [1,2,3] and theoretical [7,8,9] studies. Due to their short lifetime, halo nuclei cannot be studied with usual spectroscopic techniques, and one must resort to indirect methods to infer information about their structure. Breakup reactions are among the most used methods to study halo nuclei [10,11,12]. In such reactions, the halo dissociates from the core through the interaction with a target. In order to extract valuable information from experimental data, one needs an accurate reaction model coupled to a realistic description of the projectile. Various techniques have been developed with this aim: perturbation expansion [13,14], adiabatic approximation [15], eikonal model [16,17,18], coupled channels with a discretized continuum (CDCC) [19,20,21], numerical resolution of a three-dimensional time-dependent Schrödinger equation (TDSE) [22,23,24,25,26,27], and, more recently, the dynamical eikonal approximation (DEA) [28,29,30].
Some of these techniques (perturbation expansion, adiabatic approximation, and eikonal model) are based on approximations that lead to easy-to-handle models. Their main advantage is their relative simplicity in use and interpretation. However, the approximations on which they are built usually restrict their domain of validity. For example, perturbative and adiabatic models are restricted to the sole Coulomb interaction between the projectile and the target. The eikonal method, on the contrary, diverges for that interaction and can be used only for reactions on light targets. The adiabatic, or sudden, approximation made in the usual eikonal model is responsible for that divergence. It indeed assumes a very brief collision time, which is incompatible with the infinite range of the Coulomb interaction.
The more elaborate models (CDCC, TDSE, and DEA) are not restricted in the choice of the projectile-target interaction. However, they lead to complex and time-consuming implementations. First calculations were therefore limited to simple descriptions of the projectile (i.e., two-body projectiles with local core-halo interactions). Recently, several attempts have been made to improve the description of the projectile. For example, Summers, Nunes, and Thompson have developed an extended version of the CDCC technique, baptized XCDCC, in which the description of the halo nucleus includes excitation of the core [31]. Other groups are developing four-body CDCC codes, i.e., a description of the breakup of three-body projectiles, with the aim of modeling the dissociation of Borromean nuclei [32,33]. These techniques, albeit promising, require large computational facilities and are very time-consuming.
Alternatively one could try to extend the range of simpler descriptions of breakup reactions. Among these descriptions, the eikonal model is of particular interest. It indeed allows taking into account, at all orders and on the same footing, both nuclear and Coulomb interactions between the projectile and the target. Moreover it gives excellent results for nuclear-dominated dissociations [17,29]. Its only flaw is the erroneous treatment of the Coulomb interaction. A correction to that treatment has been proposed by Margueron, Bonaccorso, and Brink [34] and developed by Abu-Ibrahim and Suzuki [35]. The basic idea of this Coulomb corrected eikonal model (CCE) is to replace the diverging Coulomb eikonal phase at first-order by the corresponding first-order of the perturbation theory [36]. The latter, being obtained without adiabatic approximation, does not diverge. The CCE is much more economical than more elaborate techniques (a gain of a factor 100 in computational time can be achieved between this CCE and the DEA). It could therefore constitute a competitive alternative for simulating the breakup of Borromean nuclei at intermediate and high energies. However efficient it seems, this correction has never been compared to any other reaction model.
In this work, we aim at evaluating the validity and analyzing the strengths and weaknesses of this correction by comparing it with the DEA. The chosen test cases are the breakup of 11Be on Pb and on C, so as to see the significance of the correction for both heavy and light targets. The considered energy is around 70 MeV/nucleon. This corresponds to RIKEN experiments [11,12], with which the DEA is in excellent agreement [28,29].
Our paper is organized as follows. In Sec. II, we recall the basics of the eikonal description of reactions, and detail the Coulomb correction proposed in Refs. [34,35]. The numerical aspects of our calculations are summarized in Sec. III. The results for 11 Be on Pb are detailed in Sec. IV, while those corresponding to a carbon target are given in Sec. V. The final section contains our conclusions about this model.
A. Eikonal description of breakup reactions
To describe the breakup of a halo nucleus, we consider the following three-body model. The projectile P is made up of a fragment f of mass m_f and charge Z_f e, initially bound to a core c of mass m_c and charge Z_c e. This two-body projectile impinges on a target T of mass m_T and charge Z_T e. The fragment has spin I, while both core and target are assumed to be of spin zero. These three bodies are treated as structureless particles.
The structure of the projectile is described by the internal Hamiltonian

H_0 = p²/(2µ_cf) + V_cf(r),  (1)

where r is the relative coordinate of the fragment to the core, p is the corresponding momentum, µ_cf = m_c m_f/m_P is the reduced mass of the core-fragment pair (with m_P = m_c + m_f), and V_cf is the potential describing the core-fragment interaction. This potential includes a central part and a spin-orbit coupling term (see Sec. III).
In partial wave lj, the eigenstates of H_0 are defined by

H_0 φ_ljm(E, r) = E φ_ljm(E, r),  (2)

where E is the energy of the c-f relative motion, and j is the total angular momentum resulting from the coupling of the orbital momentum l with the fragment spin I. The negative-energy solutions of Eq. (2) correspond to the bound states of the projectile. They are normalized to unity. The positive-energy states describe the broken-up projectile. Their radial parts u_klj are normalized according to

u_klj(r) →(r→∞) cos δ_lj F_l(η_cf, kr) + sin δ_lj G_l(η_cf, kr),  (3)

where k = √(2µ_cf E/ħ²) is the wave number, δ_lj is the phase shift at energy E, η_cf is the core-fragment Sommerfeld parameter, and F_l and G_l are respectively the regular and irregular Coulomb functions [37]. The interactions between the projectile constituents and the target are simulated by optical potentials chosen in the literature (see Sec. III). Within this framework the description of the reaction reduces to the resolution of a three-body Schrödinger equation that reads, in the Jacobi set of coordinates illustrated in Fig. 1,

[ P²/(2µ) + H_0 + V_cT + V_fT ] Ψ(R, r) = E_T Ψ(R, r),  (4)

where R is the coordinate of the projectile center of mass relative to the target, P is the corresponding momentum, µ = m_P m_T/(m_P + m_T) is the projectile-target reduced mass, and E_T is the total energy. The projectile-target interaction is the sum of the optical potentials V_cT and V_fT (including Coulomb) that simulate the core-target and fragment-target interactions, respectively. The projectile impinging on the target is initially bound in the state φ_l0j0m0 of energy E_0. We are therefore interested in solutions of Eq. (4) that behave asymptotically as

Ψ(R, r) →(Z→−∞) exp[ iKZ + iη ln(K(R − Z)) ] φ_l0j0m0(r),  (6)

where Z is the component of R in the incident-beam direction and η = Z_T Z_P e²/(4πε_0 ħv) is the P-T Sommerfeld parameter (with Z_P = Z_c + Z_f).
In the eikonal description of reactions, the three-body wave function Ψ is factorized as the product of a plane wave by a new function Ψ̂ [16,17,18],

Ψ(R, r) = e^{iKZ} Ψ̂(R, r),  (7)

where K is the wavenumber of the projectile-target relative motion, related to the total energy E_T by

E_T = ħ²K²/(2µ) + E_0.  (8)

With factorization (7), the Schrödinger equation (4) reads

[ P²/(2µ) + vP_Z + H_0 − E_0 + V_cT + V_fT ] Ψ̂(R, r) = 0,  (9)

where v = ħK/µ is the initial projectile-target relative velocity. The first step in the eikonal approximation is to assume the second-order derivative P²/(2µ) negligible with respect to the first-order derivative vP_Z. The function Ψ̂ is indeed expected to vary weakly in R when the collision occurs at sufficiently high energy [16,17,18]. This leads to the DEA Schrödinger equation [28,29]

iħv (∂/∂Z) Ψ̂(b, Z, r) = [ H_0 − E_0 + V_cT + V_fT ] Ψ̂(b, Z, r),  (10)

where the dependence of the wave function on the longitudinal Z and transverse b parts of the projectile-target coordinate R has been made explicit (see Fig. 1). This equation is mathematically equivalent to a time-dependent Schrödinger equation with straight-line trajectories, and can be solved using any algorithm valid for the time-dependent Schrödinger equation (see e.g. Refs. [22,23,24,25,26,27]). However, contrary to time-dependent models, it is obtained without semiclassical approximation: the projectile-target coordinate components b and Z are quantal variables in the DEA. This advantage over time-dependent techniques allows taking into account interferences between solutions obtained at different values of b.
The DEA reproduces various breakup observables quite accurately for collisions of loosely-bound projectiles on both light and heavy targets [29,30]. The second step in the usual eikonal model is to assume the collision to occur during a very brief time and to consider the internal coordinates of the projectile to be frozen while the reaction takes place [17]. This second assumption, known as the adiabatic, or sudden, approximation, leads to neglecting the term H_0 − E_0 in Eq. (10), which then reads

iħv (∂/∂Z) Ψ̂(b, Z, r) = [ V_cT + V_fT ] Ψ̂(b, Z, r).  (11)

In these notations, the asymptotic condition (6) becomes

Ψ̂(b, Z, r) →(Z→−∞) φ_l0j0m0(r).  (12)

The solution of Eq. (11) exhibits the well-known eikonal expression [16]

Ψ̂(b, Z, r) = exp[ −(i/ħv) ∫_{−∞}^{Z} (V_cT + V_fT) dZ' ] φ_l0j0m0(r).  (13)

This expression is only valid for short-range potentials. The Coulomb interaction requires a special treatment, which is detailed in the next section. Let us point out that this treatment allows taking proper account of the projectile-target Rutherford scattering. The Coulomb distortion absent from Eq. (12) is therefore simulated in the phase of Eq. (13). After the collision, the whole information about the change in the projectile wave function is thus contained in the phase shift χ, which reads

χ(b, s) = −(1/ħv) ∫_{−∞}^{+∞} (V_cT + V_fT) dZ.  (14)

Due to translation invariance, this eikonal phase depends only on the transverse components b of the projectile-target coordinate R and s of the core-fragment coordinate r.
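To make Eq. (14) concrete, the following numerical sketch evaluates the eikonal phase for a single short-range potential of Woods-Saxon shape. The potential parameters and velocity are illustrative placeholders, not the optical potentials of Sec. III.

```python
import numpy as np
from scipy.integrate import quad

HBAR_C = 197.327  # MeV·fm

def woods_saxon(r, v0=-40.0, r0=3.0, a=0.6):
    """Illustrative central potential in MeV (r in fm)."""
    return v0 / (1.0 + np.exp((r - r0) / a))

def eikonal_phase(b, v_over_c=0.37):
    """chi(b) = -(1/(hbar v)) * Int V(sqrt(b^2 + Z^2)) dZ  [Eq. (14)]."""
    integral, _ = quad(lambda z: woods_saxon(np.hypot(b, z)), -50.0, 50.0)
    return -integral / (HBAR_C * v_over_c)  # dimensionless phase

for b in (2.0, 5.0, 10.0):  # impact parameters in fm
    print(f"b = {b:4.1f} fm   chi = {eikonal_phase(b):+.3f}")
```

At about 70 MeV/nucleon (v/c ≈ 0.37), the phase is sizeable at small b and falls off quickly once b exceeds the potential radius, as expected for a short-range interaction.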
B. Coulomb correction to the eikonal model
The eikonal model gives excellent results for nuclear-dominated reactions [17,29]. However, it suffers from two divergence problems when the Coulomb interaction becomes significant. The first is the well-known logarithmic divergence of the eikonal phase describing Coulomb elastic scattering [16,17,18]. The second is caused by the adiabatic approximation used in the eikonal treatment of Coulomb breakup [17]. To explain this, let us divide the eikonal phase (14) into its nuclear and Coulomb contributions,
$$\chi=\chi^N+\chi^C. \qquad (15)$$
The Coulomb term $\chi^C$ for a one-neutron halo nucleus reads (the extension to the case of a charged fragment is immediate) [29,35]
$$\chi^C(\boldsymbol b,\boldsymbol s)=-\eta\int_{-\infty}^{+\infty}\left(\frac{1}{R_{cT}}-\frac{1}{R}\right)dZ, \qquad (16)$$
which admits the analytic form
$$\chi^C(\boldsymbol b,\boldsymbol s)=\eta\,\ln\frac{b^2-2\frac{m_f}{m_P}\,b\,\hat{\boldsymbol b}\cdot\boldsymbol s+\left(\frac{m_f}{m_P}\right)^2s^2}{b^2}, \qquad (17)$$
where $\hat{\boldsymbol b}$ denotes a unit vector along the transverse coordinate $\boldsymbol b$. In Eq. (16), we subtract the term $1/R$ corresponding to a Coulomb interaction between the projectile center of mass and the target. The phase $\chi^C$ therefore corresponds to the Coulomb tidal force that contributes to the breakup. Moreover, this subtraction leads to a faster decrease of the potential at large distances, which enables us to obtain the analytic expression (17). This is compensated by the addition of the elastic Coulomb phase $\chi^C_{PT}$,
$$\chi^C_{PT}(b)=-\eta\int_{-Z_{\max}}^{+Z_{\max}}\frac{dZ}{R}. \qquad (18)$$
This phase describes the Rutherford scattering between the projectile and the target. The integral is truncated, for it otherwise diverges (note that the subtracted integral in Eq. (16) does not diverge and therefore does not require the same treatment). This truncation basically corresponds to Glauber's screened Coulomb potential [16]. Other truncation techniques [16] and other ways to deal with this divergence [18] exist. All lead to the same expression for the elastic Coulomb phase, up to an additional constant phase that does not affect the cross sections [16]. The truncation considered in Eq. (18) leads, up to such a constant, to
$$\chi^C_{PT}(b)=2\eta\,\ln(Kb). \qquad (19)$$
This elastic Coulomb phase correctly reproduces Rutherford scattering, indicating that the first of the two aforementioned divergences can be easily corrected [16,17,18]. The nuclear term $\chi^N$ is then by definition the difference between the eikonal phase (14) and the Coulomb contributions (17) and (19).
In addition to the divergence in elastic scattering, the Coulomb interaction is responsible for a divergence in breakup. The aim of the present paper is to analyse a way to correct this divergence. It is due to the slow decrease of $\chi^C$ with $b$. Indeed, when expanded in powers of $\chi^C$, the exponential of the Coulomb eikonal phase reads
$$e^{i\chi^C}=1+i\chi^C+\mathcal O\!\left[(\chi^C)^2\right], \qquad (20)$$
where the explicit dependence on the coordinates has been omitted for clarity. When integrated over $\boldsymbol b$ in the calculation of the cross sections (see Sec. II C), the $1/b$ asymptotic behavior of the first-order term $i\chi^C$ leads to divergence. This divergence problem arises from the incompatibility between the infinite range of the Coulomb interaction and the adiabatic, or sudden, approximation: no short collision time can be assumed if the Coulomb interaction dominates. Renouncing the adiabatic approximation solves this divergence: the DEA, which corresponds to the eikonal model without this approximation [see Eq. (10) and Refs. [28,29]], does not diverge. The excellent results obtained within the DEA for collisions of loosely-bound projectiles on both light and heavy targets [29,30] confirm that, when dynamical effects are considered, both nuclear and Coulomb interactions can be properly taken into account on the same footing.
To avoid this divergence, a cutoff at large $b$ could be made. In Ref. [38], Abu-Ibrahim and Suzuki proposed to limit the values of $b$ considered in the cross-section calculations to
$$b\le b_{\max}=\frac{\hbar v}{2|E_0|}. \qquad (21)$$
This cutoff is obtained by requiring the collision time $b/v$ to be shorter than the characteristic time of internal excitation $\hbar/|E_0|$. The factor of two is proposed as a qualitative guide. However, this treatment is rather artificial and not very satisfactory [35]. Alternatively, it has been proposed by Margueron, Bonaccorso, and Brink [34], and developed by Abu-Ibrahim and Suzuki [35], to replace the first-order term $i\chi^C$ in Eq. (20), which leads to the divergence, by the first-order term of perturbation theory $i\chi^{FO}$ [36],
$$\chi^{FO}(\boldsymbol b,\boldsymbol s)=-\eta\int_{-\infty}^{+\infty}e^{i\omega Z/v}\left(\frac{1}{R_{cT}}-\frac{1}{R}\right)dZ, \qquad (22)$$
where $\omega=(E-E_0)/\hbar$, with $E$ the $c$-$f$ relative energy after dissociation. Since no adiabatic approximation is made in perturbation theory, this term does not diverge. When the adiabatic approximation is applied to Eq. (22), i.e., when $\omega$ is set to 0, one recovers exactly the Coulomb eikonal phase (16). This suggests that without the adiabatic approximation the first-order term in Eq. (20) would be $i\chi^{FO}$ (22). Furthermore, a simple analytic expression is available for each of the Coulomb multipoles in the far-field approximation, i.e., for $m_f r/m_P<R$ [39]. The idea of the correction is therefore to replace the exponential of the eikonal phase according to
$$e^{i\chi}\longrightarrow e^{i\chi^N}\left(e^{i\chi^C}-i\chi^C+i\chi^{FO}\right). \qquad (23)$$
With this Coulomb correction, the breakup of halo nuclei can be described within the eikonal model, taking into account on (nearly) the same footing both Coulomb and nuclear interactions at all orders. This correction can also be seen as an inexpensive way to introduce higher-order effects and nuclear interactions into first-order perturbation theory.
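Assuming the replacement rule (23) as reconstructed above, the corrected eikonal factor is a one-line combination of the three phases. The sketch below applies it to precomputed arrays of $\chi^N$, $\chi^C$, and $\chi^{FO}$ on a common grid; it illustrates the prescription only, not the full amplitude calculation of Sec. II C.

```python
import numpy as np

def cce_factor(chi_N, chi_C, chi_FO):
    """Coulomb-corrected eikonal factor, assuming rule (23):
       exp(i chi) -> exp(i chi_N) * (exp(i chi_C) - i chi_C + i chi_FO).
    chi_N may be complex (absorptive optical potentials); chi_FO depends on the
    excitation energy through omega = (E - E0)/hbar."""
    return np.exp(1j * chi_N) * (np.exp(1j * chi_C) - 1j * chi_C + 1j * chi_FO)
```

A convenient consistency check: setting chi_FO equal to chi_C recovers the usual eikonal factor $e^{i\chi^N+i\chi^C}$ exactly.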
In this work, we analyse the validity of this CCE model by comparing results obtained with the correction (23) to results of the DEA. The latter is chosen as the reference calculation, since it does not make use of the adiabatic approximation that leads to the divergence in the eikonal description of breakup. It is also in good agreement with experiments [29,30]. Calculations performed within the usual eikonal model and at first order of perturbation theory will also be presented to emphasize the effects of the correction. We focus on the case of $^{11}$Be breakup. In that case, only the dipole term of the Coulomb interaction is significant [40]. We thus restrict the correction to that multipole. The perturbative correction then reduces to the analytic dipole expression (24) of Ref. [35], in which the $K_n$ are modified Bessel functions [37]. Of course, in other cases, like $^8$B Coulomb breakup, the quadrupole term may no longer be negligible [14,30]; it should then be included in the correction.
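The modified Bessel functions quoted above are what cut off the dipole Coulomb coupling at large $b$. The sketch below evaluates only the $K_0$ and $K_1$ factors; the complete dipole phase multiplies them by the prefactors of Eq. (24) in Ref. [35], which are not reproduced here.

```python
import numpy as np
from scipy.special import kn

def dipole_damping_factors(b, omega, v):
    """K_0 and K_1 factors appearing in the first-order dipole phase of Ref. [35].
    For b << v/omega they behave like -ln(x) and 1/x (x = omega*b/v), so the
    adiabatic limit omega -> 0 restores the 1/b tail of chi^C; for b >> v/omega
    they fall off exponentially, which removes that tail at finite omega."""
    x = omega * b / v
    return kn(0, x), kn(1, x)
```

This exponential falloff at $b\gg v/\omega$ is the natural, smooth counterpart of the sharp cutoff (21).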
C. Breakup cross sections
To evaluate breakup cross sections within the CCE, we proceed as explained in Ref. [29], replacing the eikonal factor in the DEA breakup amplitude (25) by the corrected form (23); the amplitude involves the Coulomb phase shift $\sigma_l$ [37]. The breakup amplitudes for the usual eikonal model are obtained in the same way but without the correction.
In the following, we consider two breakup observables. The first is the breakup cross section as a function of the $c$-$f$ relative energy $E$ after dissociation [see Eq. (52) of Ref. [29]],
$$\frac{d\sigma_{\rm bu}}{dE}=2\pi\int_0^{\infty}b\,P_{\rm bu}(E,b)\,db. \qquad (26)$$
This energy distribution is the observable usually measured in breakup experiments [11,12]. It corresponds to an incoherent sum of breakup probabilities $P_{\rm bu}(E,b)$ computed at each $b$ [Eq. (27)], obtained by summing the squared breakup amplitudes over the final projectile quantum numbers. The second breakup observable is the parallel-momentum distribution $d\sigma_{\rm bu}/dk_\parallel$ [see Eq. (53) of Ref. [29]], in which $\theta_k=\arccos(k_\parallel/k)$ is the colatitude of the $c$-$f$ relative wavevector $\boldsymbol k$ after breakup. Contrary to the energy distribution, the parallel-momentum distribution corresponds to a coherent sum of breakup amplitudes. This observable is therefore sensitive to interferences between different partial waves. Consequently, it constitutes a particularly severe test for reaction models.
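Once $P_{\rm bu}(E,b)$ has been computed on a $b$ grid, the energy distribution (26) follows from a single radial quadrature. A minimal sketch:

```python
import numpy as np
from scipy.integrate import simpson

def dsigma_dE(b, P_bu):
    """dsigma/dE = 2*pi * integral of b * P_bu(E, b) db   [Eq. (26)].
    b: impact-parameter grid in fm; P_bu: breakup probabilities at fixed E.
    Result in fm^2/MeV; multiply by 10 to convert to mb/MeV."""
    return 2.0 * np.pi * simpson(b * P_bu, x=b)
```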
III. NUMERICAL ASPECTS
For these calculations, we use the same description of $^{11}$Be as in Ref. [41]. The halo nucleus is seen as a neutron loosely bound to a $^{10}$Be core in its $0^+$ ground state. The $^{10}$Be-n interaction is simulated by a Woods-Saxon potential plus a spin-orbit coupling term (see Sec. IV A of Ref. [41]). The potential is adjusted to reproduce the first three levels of the $^{11}$Be spectrum. The $1/2^+$ ground state is seen as a $1s_{1/2}$ state, while the $1/2^-$ excited state is described by a $0p_{1/2}$ state. This well-known shell inversion is obtained by considering a parity-dependent depth of the central term of the potential. The $5/2^+$ resonance at 1.274 MeV above the one-neutron separation threshold is simulated in the $d_{5/2}$ partial wave.
The interactions between the projectile components and the target are simulated by optical potentials chosen from the literature. In our calculations, we use the same potentials as in Refs. [27,41]. As suggested in Ref. [42], the $^{10}$Be-Pb potential is scaled from a parametrisation of Bonin et al. [43] that describes the elastic scattering of 699 MeV $\alpha$ particles on lead [potential (1) in Table III of Ref. [27]]. For the $^{10}$Be-C interaction, we use the potential developed by Al-Khalili, Tostevin, and Brooke, which reproduces the elastic scattering of $^{10}$Be on C at 59.4 MeV/nucleon [44] (potential ATB in Table III of Ref. [41]). In both cases, we neglect the possible energy dependence of the potentials. We model the neutron-target interaction with the Becchetti and Greenlees parametrisation [45].
To evaluate the breakup amplitude (25) within the CCE or the usual eikonal model, we need to compute the eikonal phase (15). The nuclear part is evaluated numerically, while the Coulomb part is obtained from its analytic expression (17). The numerical integral over $Z$ is performed on a uniform mesh from $Z_{\min}=-20$ fm up to $Z_{\max}=20$ fm with step $\Delta Z=1$ fm. The corrected phase (23) is then numerically expanded into multipoles of rank $\lambda$. We use a Gauss quadrature on the unit sphere similar to the one considered to solve the time-dependent Schrödinger equation in Ref. [27]. The number of points along the colatitude is set to $N_\theta=12$, and the number of points along the azimuthal angle to $N_\varphi=30$. Unless otherwise stated, we perform all calculations with multipoles up to $\lambda_{\max}=12$.
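One possible implementation of this multipole projection is sketched below: Gauss-Legendre nodes in $\cos\theta$ and a uniform azimuthal grid, with the grid sizes quoted above. The function f stands for the quantity being expanded at fixed $b$ and $r$; its name and interface are ours, not those of Ref. [27].

```python
import numpy as np
from numpy.polynomial.legendre import leggauss
from scipy.special import sph_harm

def multipole_coefficients(f, lam_max=12, n_theta=12, n_phi=30):
    """Project f(theta, phi) on spherical harmonics up to rank lam_max, using
    Gauss-Legendre nodes in cos(theta) and a uniform azimuthal grid."""
    x, w = leggauss(n_theta)                  # nodes and weights on [-1, 1]
    theta = np.arccos(x)
    phi = 2.0 * np.pi * np.arange(n_phi) / n_phi
    dphi = 2.0 * np.pi / n_phi
    coeffs = {}
    for lam in range(lam_max + 1):
        for mu in range(-lam, lam + 1):
            c = 0j
            for th, wt in zip(theta, w):
                for ph in phi:
                    # scipy convention: sph_harm(m, l, azimuth, polar)
                    c += wt * dphi * np.conj(sph_harm(mu, lam, ph, th)) * f(th, ph)
            coeffs[(lam, mu)] = c
    return coeffs

# sanity check: the (0,0) coefficient of a constant is sqrt(4*pi) ~ 3.5449
print(multipole_coefficients(lambda t, p: 1.0, lam_max=0)[(0, 0)])
```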
The eigenfunctions of the projectile Hamiltonian $H_0$ (1) are computed numerically with the Numerov method, using 1000 radial points equally spaced from $r=0$ up to $r=100$ fm. The same grid is used to compute the radial integral in Eq. (25). For Coulomb (nuclear) breakup, the integrals over $b$ appearing in Eqs. (26) and (28) are performed numerically from $b=0$ up to $b=300$ (100) fm with a step $\Delta b=0.5$ (0.25) fm.
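For reference, a compact Numerov bound-state search is sketched below: the radial equation $u''=f(r)u$ is integrated outward on an equally spaced grid, and the energies at which $u(r_{\max})$ changes sign are refined by bisection. The Woods-Saxon parameters in the usage example are placeholders, not the fitted potential of Ref. [41].

```python
import numpy as np

HBARC = 197.327  # MeV fm

def numerov_tail(E, V, l, mu, r_max=100.0, n=1000):
    """Outward Numerov integration of u'' = f(r) u on n equally spaced points;
    returns u(r_max), whose zeros in E are the bound-state energies."""
    r = np.linspace(r_max / n, r_max, n)       # avoid the r = 0 singularity
    h = r[1] - r[0]
    f = 2.0 * mu / HBARC**2 * (V(r) - E) + l * (l + 1) / r**2
    w = 1.0 - h**2 * f / 12.0
    u_prev, u = 0.0, 1e-6                      # regular solution near the origin
    for i in range(1, n - 1):
        u_prev, u = u, ((12.0 - 10.0 * w[i]) * u - w[i - 1] * u_prev) / w[i + 1]
    return u

def bound_states(V, l, mu, E_min=-40.0, E_max=-0.01, n_scan=200):
    """Scan E for sign changes of u(r_max), then refine each root by bisection."""
    Es = np.linspace(E_min, E_max, n_scan)
    tails = [numerov_tail(E, V, l, mu) for E in Es]
    roots = []
    for a, b, ta, tb in zip(Es[:-1], Es[1:], tails[:-1], tails[1:]):
        if ta * tb < 0:
            for _ in range(60):
                m = 0.5 * (a + b)
                tm = numerov_tail(m, V, l, mu)
                if ta * tm <= 0:
                    b = m
                else:
                    a, ta = m, tm
            roots.append(0.5 * (a + b))
    return roots

# usage with a placeholder Woods-Saxon well (not the potential of Ref. [41])
mu = 931.5 * 10.0 / 11.0                        # schematic 10Be-n reduced mass
V = lambda r: -62.0 / (1.0 + np.exp((r - 2.6) / 0.6))
print(bound_states(V, l=0, mu=mu))
```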
The DEA Schrödinger equation (10) is solved using the numerical technique detailed in Ref. [27]. In this technique, the projectile internal wave function is expanded upon a three-dimensional spherical mesh. The size of the mesh required for the calculation varies with the projectile-target interaction. For Coulomb (nuclear)-dominated reactions, the angular grid contains up to $N_\theta=8$ (12) points along the colatitude $\theta$, and $N_\varphi=15$ (23) points along the azimuthal angle $\varphi$. This corresponds to an angular basis that includes all possible spherical harmonics up to $l=7$ (11). The radial variable $r$ is discretized on a quasiuniform mesh that contains $N_r=800$ (600) points and extends up to $r_{N_r}=800$ (600) fm. The time propagation is performed with a second-order approximation of the evolution operator. It is started at $t_{\rm in}=-20$ ($-10$) $\hbar$/MeV with the projectile in its initial bound state, and is stopped at $t_{\rm out}=20$ (10) $\hbar$/MeV.
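The propagation step can be illustrated with a generic second-order, Crank-Nicolson-like evolution, which is unitary for Hermitian $H$; the actual implementation of Ref. [27] works on the three-dimensional spherical mesh described above rather than on a generic matrix, so the sketch below is only structural.

```python
import numpy as np

def dea_propagate(psi, H_of_Z, Z_grid, hbar_v):
    """Propagate Eq. (10), i*hbar*v dpsi/dZ = H(Z) psi, with a second-order
    (Crank-Nicolson-like) approximation of the evolution operator.
    H_of_Z(Z) must return the matrix of H0 - E0 + V_cT + V_fT at fixed b."""
    I = np.eye(len(psi))
    for Z0, Z1 in zip(Z_grid, Z_grid[1:]):
        dZ = Z1 - Z0
        H = H_of_Z(0.5 * (Z0 + Z1))            # midpoint evaluation
        A = I + 0.5j * dZ * H / hbar_v         # implicit half step
        B = I - 0.5j * dZ * H / hbar_v         # explicit half step
        psi = np.linalg.solve(A, B @ psi)      # one Z step
    return psi
```

With complex optical potentials, $H$ is non-Hermitian and the norm of psi decreases, which accounts for absorption out of the modeled channels.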
IV. BREAKUP OF $^{11}$Be ON Pb AT 69 MeV/NUCLEON
We first consider the breakup of $^{11}$Be on lead at 69 MeV/nucleon, which corresponds to the experiment of Fukuda et al. at RIKEN [12]. These data are fairly well reproduced by the DEA [29], which we use as the reference calculation. Since we focus on the comparison of models, we do not display Fukuda's measurements. A comparison with experiment would indeed require a convolution of our results, which would hinder the comparison between theories.
In Fig. 2, we compare the breakup probability (27) computed within the different models as a function of the impact parameter $b$. Over the whole range in $b$, the CCE results are close to the DEA ones, and this at all energies. This good agreement suggests the Coulomb correction to be valid for simulating the breakup of loosely-bound nuclei on heavy targets. In particular, the CCE is superimposed on the DEA results in the asymptotic region. Obviously, the first-order perturbation theory efficiently corrects the erroneous $1/b$ asymptotic behavior of the usual eikonal model.
At small $b$, the agreement between the CCE and the DEA is slightly less good. In particular, at small energy, the corrected eikonal model overestimates the reference calculation. This is due to the far-field approximation used in the first-order perturbation correction. This approximation provides a convenient analytical expression (24) of the phase $\chi^{FO}$. However, it is incorrect at small $b$: it diverges at $b=0$. Nevertheless, in spite of that divergence, the CCE remains close to the DEA. This illustrates that the CCE can also be seen as a way to include nuclear interactions within first-order perturbation theory and to correct its ill behavior at small $b$.
The breakup cross section (26) computed with the four approximations is displayed in Fig. 3(a) as a function of the $^{10}$Be-n relative energy $E$ after dissociation. Contributions of the $s$, $p$, and $d$ partial waves are shown separately in Fig. 3(b). The small bump at about 1.25 MeV is due to the resonance in the $d_{5/2}$ partial wave. The CCE cross section (dotted line) is nearly superimposed on the DEA one (full line). Only at low energy is the CCE slightly larger than the reference calculation. As mentioned earlier, this effect is due to the use of the far-field approximation to derive the perturbative correction $\chi^{FO}$.
Interestingly, the agreement between the CCE and the DEA is better for the total cross section than for each partial-wave contribution: the CCE $p$ contribution is larger than the DEA one, while the CCE $s$ and $d$ contributions are smaller than the DEA ones. We interpret this as a lack of couplings in the continuum in the CCE. In the DEA, these couplings depopulate the $p$ waves towards the $s$ and $d$ ones without modifying the total cross section [40]. The differences between the CCE and DEA partial-wave contributions suggest that this mechanism is hindered in the former.
The wrong asymptotic behavior of the Coulomb eikonal phase (17) leads to a divergence in the calculation of the breakup cross sections. To evaluate the energy distribution within the usual eikonal model, one needs to resort to a cutoff at large $b$. The cutoff proposed in Ref. [38] [see also Eq. (21)] gives here $b_{\max}=71$ fm. The corresponding cross section is displayed in Fig. 3(a) with a dashed line. Its energy dependence differs strongly from that of the reference calculation: it is too small at low energy and too large at high energy. The $p$ contribution, which includes the diverging term of the Coulomb eikonal phase (17), is responsible for that ill behavior. By contrast, the $s$ and $d$ contributions are superimposed on those of the CCE. The use of the Coulomb correction therefore significantly improves the eikonal model when considering collisions with heavy targets.
The cross section obtained within first-order perturbation theory is shown as a dot-dashed line. The nuclear interactions between the projectile and the target are described by a mere cutoff at $b_{\min}=15$ fm. This value has been chosen to fit the DEA energy distribution in the region of the maximum. Here again, the shape of the cross section is very different from that of the reference calculation. However, contrary to the usual eikonal model, it decreases too quickly with the energy. Moreover, since only the dipole term of the Coulomb interaction is considered, only the $p$ wave is reached from the $s$ ground state, whereas $s$ and $d$ waves are significantly populated through nuclear interactions and higher-order effects. Note that a smaller cutoff $b_{\min}$, in better agreement with the usual choice that corresponds to the sum of the projectile and target radii, does not improve the agreement.

We now consider the parallel-momentum distribution [see Eq. (28)]. This breakup observable is more sensitive to interferences and therefore constitutes a more severe test than the energy distribution. The parallel-momentum distribution computed within the four models is displayed in Fig. 4.
As in the previous cases, the CCE is in excellent agreement with the DEA in both magnitude and shape. We simply note that the former is slightly less asymmetric than the latter, which is probably a signature of the lack of couplings in the continuum mentioned earlier. On the contrary, both the usual eikonal model and first-order perturbation theory lead to rather poor estimates of the momentum distribution. First, they lead to an erroneous magnitude of the cross section. The usual eikonal model gives too large a parallel-momentum distribution. This is related to the too slow decrease obtained for the energy distribution. On the contrary, the first-order perturbation gives too low a cross section, a defect due to the quick decrease of its energy distribution. Lowering the cutoff $b_{\min}$ to cure this problem would then lead to too large an energy distribution in the peak region. Second, none of these models exhibits the asymmetry observed in the DEA. This absence of asymmetry in parallel-momentum distributions of the breakup of loosely-bound projectiles is a well-known problem of the eikonal model [46]. It is fortunate that the Coulomb correction, combining two approximations that lead to perfectly symmetric momentum distributions, restores the asymmetry observed experimentally and in dynamical calculations.

The convergence of the CCE with the number of multipoles is illustrated in Fig. 5 for the parallel-momentum distribution. The distribution computed with $\lambda_{\max}=4$ has not yet converged: there remains some 4% difference with the other two at the maximum. On the contrary, the difference between $\lambda_{\max}=8$ and 12 is insignificant (about 0.5%). This shows the necessity to include a large number of partial waves in dynamical calculations. Note that other breakup observables converge with a lower number of multipoles. In particular, the energy distribution requires only $\lambda_{\max}=4$ to reach satisfactory convergence.
These results confirm the ability of the Coulomb correction to reliably reproduce breakup observables for collisions of loosely-bound projectiles on heavy targets. It reproduces dynamical calculations with an accuracy that is unreachable within the usual eikonal model or the first-order perturbation theory, on which it is based.
V. BREAKUP OF $^{11}$Be ON C AT 67 MeV/NUCLEON
To complete this analysis of the Coulomb correction, we investigate its effect on nuclear-induced breakup. The usual eikonal description of such reactions is known to give excellent results [17,29]. The Coulomb interaction between the projectile and the target then plays a minor role, and we expect the correction (23) to have much less influence than in the Coulomb-breakup case.
For this analysis, we consider the breakup of $^{11}$Be on a carbon target at 67 MeV/nucleon, which corresponds to the experiment of Fukuda et al. [12]. The DEA is in excellent agreement with Fukuda's data [29] and therefore constitutes our reference calculation. For the same reasons as in the previous section, we do not compare our calculations directly with experiment.

Figure 6 displays the breakup probability (27) obtained at three energies $E=0.5$, 1.274, and 3.0 MeV within the DEA (full lines), the CCE (dotted lines), and the usual eikonal model (dashed lines). Since this reaction is nuclear dominated, we no longer display the result of first-order perturbation theory. The upper part of Fig. 6 displays the breakup probability at small $b$, while the lower part emphasizes the asymptotic behavior of $P_{\rm bu}$ in a semilogarithmic plot.
In this case, all three reaction models lead to similar results. This confirms the validity of the adiabatic approximation in the eikonal description of nuclear-dominated reactions. The difference between the DEA and the other two models is indeed rather small. Only at $E=1.274$ MeV, the energy of the $5/2^+$ resonance, does it become significant (up to 10% difference in the vicinity of the peak at $b\sim6$ fm). This larger difference suggests stronger dynamical effects at the resonance. This is not very surprising, since the presence of that resonance strongly enhances the breakup process [41].
Up to b = 20 fm, the usual eikonal model and the CCE remain very close to one another, confirming the small role played by the Coulomb interaction in the dissociation. At larger b, where only Coulomb is significant, we observe the 1/b behavior of the usual eikonal model. This ill-behavior is corrected using the CCE, whose breakup probabilities are nearly superimposed on the DEA ones in the asymptotic region. However, since this correction affects breakup probabilities at two or three orders of magnitude below the maximum, we do not expect it to significantly influence breakup observables.
The breakup cross sections computed within the three models are plotted as functions of the energy $E$ in Fig. 7. The contributions to the total cross section of the $s$, $p$, and $d$ partial waves are shown as well. The large peak at about 1.25 MeV is the signature of the significant enhancement of the breakup process by the $d_{5/2}$ resonance. As suggested by the previous result, all three models lead to very similar cross sections. This similarity is also observed in the partial-wave contributions. The couplings in the continuum that depopulate one partial wave toward others, as observed in Coulomb breakup (see Fig. 3 and Ref. [40]), are thus much smaller in nuclear-induced breakup.
As in Fig. 6, the difference between the DEA and the other two models is rather small. The DEA is on average about 6% larger than the eikonal model. Note that this difference reaches 8% at the resonance energy, which is consistent with the difference observed in Fig. 6(a). The usual eikonal model and the CCE lie even closer to one another. The relative difference between them in the total cross section does not exceed 3%. Even in the $p$ partial wave, where the Coulomb correction is performed, no significant difference is observed. This confirms that the correction of the eikonal model is not necessary for nuclear-dominated reactions, due to the small role played by the Coulomb interaction. The cutoff in $b$ proposed in Ref. [38] is therefore sufficient.
The parallel-momentum distributions obtained with the three models are displayed in Fig. 8. As already mentioned, this observable is a more severe test for reaction models than the energy distribution. We observe significant differences between the DEA and the other two models. As in the case of Coulomb breakup, the DEA leads to an asymmetric parallel-momentum distribution: the DEA distribution is shifted toward negative $k_\parallel$ and presents a more developed tail on the negative-$k_\parallel$ side, as observed in Ref. [46].
As for the previous observable, the CCE and usual eikonal models lead to very similar parallel-momentum distributions. These distributions are symmetric. As mentioned earlier, this symmetry is due to the lack of dynamical effects in the eikonal description of reactions. Contrary to the Coulomb case, the correction (23) is not able to restore this asymmetry. It indicates that these dynamical effects result from the nuclear interactions between the projectile and the target.
The convergence of the CCE model with the number of multipoles is illustrated in Fig. 9 for the parallel-momentum distribution. The CCE distributions computed with $\lambda_{\max}=4$-12 are displayed. The convergence is much slower than for Coulomb-dominated breakup (see Fig. 5). The relative difference between $\lambda_{\max}=10$ and $\lambda_{\max}=12$ is indeed about 3% at the maximum. This is due to the rapid variation of the nuclear potential with the projectile-target coordinates. It confirms the need for a larger number of partial waves in the dynamical calculation of nuclear-dominated dissociation. Note that the convergence is faster for the energy distribution. For that observable, an acceptable convergence is reached at $\lambda_{\max}=6$.
VI. CONCLUSION
The eikonal description of reactions is a useful tool to simulate breakup and stripping reactions on light targets at intermediate and high energies [16,17,29]. This model is interesting because of its relative simplicity in implementation and interpretation compared with more elaborate models, like the CDCC or the DEA. Unfortunately, it suffers from a divergence problem associated with the treatment of the Coulomb interaction between the projectile and the target. This divergence is due to the incompatibility of the adiabatic, or sudden, approximation made in the usual eikonal model with the infinite range of the Coulomb interaction. One way to cure this problem is not to make the adiabatic approximation. This leads to the DEA [28,29]. However, like other elaborate models, the DEA is computationally expensive. Another way to solve this problem is to substitute the diverging Coulomb phase at first order of the eikonal model by the corresponding first order of perturbation theory [34,35].
In this work, we study the validity of this Coulomb correction by comparing it to the DEA, which does not present the divergence problem of the usual eikonal model. The chosen test cases are the dissociation of $^{11}$Be on Pb and C at about 70 MeV/nucleon. These correspond to RIKEN experiments [11,12] that are well reproduced by the DEA [29].
In the case of Coulomb breakup, the CCE gives results in excellent agreement with the DEA. The combination of the eikonal model with first-order perturbation theory indeed solves the divergence problem due to the Coulomb interaction. Moreover, it correctly takes into account the nuclear interaction between the projectile and the target. The breakup observables (energy and parallel-momentum distributions) obtained within the DEA are accurately reproduced using the CCE. This agreement is obtained even though both CCE ingredients, the usual eikonal model and first-order perturbation theory, fail to describe the reaction. First, they both require a rather arbitrary upper or lower cutoff in $b$ in order not to diverge. Second, they do not reproduce the shape of the breakup cross sections. In particular, the CCE gives an asymmetric parallel-momentum distribution, in agreement with the dynamical calculation, whereas both the usual eikonal and the perturbative models lead to perfectly symmetric distributions. This suggests that the CCE restores dynamical effects that are missing in its ingredients.
The Coulomb correction has much less effect on nuclear-dominated breakup. This was expected because of the much smaller influence of the Coulomb interaction in reactions involving light targets. This result indicates that in this case the correction is not essential. It also implies that the CCE suffers from the same lack of dynamical effects as the usual eikonal model in nuclear-dominated reactions. The CCE successfully combines the positive aspects of both the eikonal model and first-order perturbation theory. It describes the nuclear interaction accurately while correctly reproducing Coulomb-induced effects. Moreover, the CCE restores some of the dynamical effects that are totally absent in other simple models. It therefore provides a reliable description of the breakup of loosely-bound projectiles at intermediate and high energies. Its simplicity of use and interpretation makes it a competitive alternative to more elaborate models for describing the breakup of Borromean nuclei.
First detection and genetic characterization of canine Kobuvirus in domestic dogs in Thailand
Background: Canine Kobuvirus (CaKoV) has been detected both in healthy and diarrheic dogs and in asymptomatic wild carnivores. In this study, we conducted a survey of CaKoV at small animal hospitals in Bangkok and its vicinity, Thailand, from September 2016 to September 2018. Results: Three hundred and seven rectal swab samples were collected from healthy dogs (n = 55) and dogs with gastroenteritis symptoms (n = 252). Of the 307 swab samples tested by one-step RT-PCR specific to the 3D gene, we found a CaKoV positivity of 17.59% (54/307). CaKoVs could be detected in both sick (19.44%) and healthy (9.09%) animals. In relation to age group, CaKoV was detected most frequently in younger dogs (25.45%). Our results showed no seasonal pattern of CaKoV infection in domestic dogs. In this study, we characterized CaKoVs by whole genome sequencing (n = 4) or 3D and VP1 gene sequencing (n = 8). Genetic and phylogenetic analyses showed that the whole genomes of Thai CaKoVs were closely related to Chinese CaKoVs, with a highest amino acid identity of 99.5%, suggesting the possible origin of the CaKoVs in Thailand. Conclusions: This study was the first to report the detection and genetic characteristics of CaKoVs in domestic dogs in Thailand. CaKoVs could be detected in both sick and healthy dogs. The virus is frequently detected in younger dogs. Thai CaKoVs were genetically closely related to and grouped with Chinese CaKoVs. Our results alert veterinary practitioners that diarrhea in dogs due to canine Kobuvirus infection should not be ignored.
In this study, we conducted a survey of canine Kobuvirus in domestic dogs at small animal hospitals in 5 provinces of Thailand. The survey was conducted under the Chulalongkorn University animal use and care protocol #1731074. This study provides the first detection and genetic characterization of CaKoV isolated from domestic dogs in Thailand.
Canine Kobuviruses in domestic dogs in Thailand
During September 2016 to September 2018, we conducted a survey of viral enteric diseases in domestic dogs at small animal hospitals in 5 provinces of Thailand (Bangkok, Nakhon Ratchasima, Ratchaburi, Suphanburi, and Tak). We tested 307 rectal swab samples for CaKoV by one-step RT-PCR specific to the 3D gene. Based on this two-year survey, we found a CaKoV positivity of 17.59% (54/307). CaKoVs could be detected in both sick (19.44%; 49/252) and healthy (9.09%; 5/55) animals. Our results showed no seasonal pattern of CaKoV infection in dogs (Figs. 1 and 2). In relation to age group, CaKoV was most frequently detected in younger dogs, at 25.45% (42/165) (Additional file 2: Table S2). Co-infections of CaKoV with other enteric viral pathogens were observed, including CaKoV/Canine parvovirus/Canine coronavirus (n = 6), CaKoV/Canine parvovirus (n = 20), and CaKoV/Canine coronavirus (n = 2). In this study, 12 CaKoVs were selected and characterized by whole genome sequencing (n = 4) or 3D and VP1 gene sequencing (n = 8). The viruses were selected to represent the epidemiological and demographic data, such as age, date of isolation, and breed. The nucleotide sequences of the CaKoVs were submitted to the GenBank database under accession numbers MK201776-MK201795 (Table 1).
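For readers who want uncertainty estimates around these proportions, the counts above suffice to compute binomial confidence intervals; the paper itself reports point estimates only. A minimal sketch:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = k / n
    denom = 1.0 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# counts taken directly from the survey results above
for label, k, n in [("overall", 54, 307), ("sick", 49, 252),
                    ("healthy", 5, 55), ("<1 year", 42, 165)]:
    lo, hi = wilson_ci(k, n)
    print(f"{label}: {k}/{n} = {k/n:.2%} (95% CI {lo:.2%}-{hi:.2%})")
```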
Phylogeny of the Thai canine Kobuviruses
Phylogenetic analysis of the whole genomes of CaKoVs showed that the Thai CaKoVs were closely related to each other and clustered with Aichivirus A. The Aichivirus A cluster contains kobuviruses from dogs, cats, rodents, bats, and humans, while Aichivirus B and C contain kobuviruses from cattle and pigs, respectively. Based on the whole genome sequence, Thai CaKoVs were closely related to the Chinese CaKoV sub-cluster but separated from the sub-cluster of viruses from the US, UK, Brazil, and Tanzania (Fig. 3). Phylogenetic analyses of the 3D and VP1 genes of Thai CaKoVs and reference CaKoVs from various animal species were also performed. Similarly, the 3D genes of Thai CaKoVs grouped together with Chinese CaKoVs (G1 sub-cluster) but separately from the viruses in sub-clusters G2 and G3 (Fig. 4). In the phylogenetic analysis of the VP1 gene, the viruses clustered into 2 major subgroups: a US/EU/Africa subgroup and a China/Thailand subgroup (Fig. 5).
Genetic analysis of the Thai canine Kobuviruses
We compared the nucleotide and deduced amino acid sequences of Thai CaKoVs against those of reference viruses from the US, UK, Italy, China, and Korea (Tables 2 and 3). The most variable region of VP1 is positions 201-243, especially the proline-rich region. A putative proline-rich region at VP1 positions 228-240 (P228XPPPPXPPXPXP240) was observed in Thai CaKoVs as well as in the reference viruses (Table 4). In this study, unique amino acids were found in Thai and Chinese CaKoVs at positions 65V, 67D, 119L, 138T, 150P, 151M, 153D, 201S, 204Q, 205Q, 201Q, 213T, and 241E (Table 4). The predicted amino acid cleavage sites of the whole genome were conserved among Thai CaKoVs (Table 5).
Discussion
Canine Kobuvirus (CaKoV) is an emerging pathogen in Thailand. To the best of our knowledge, CaKoV was first described in Asia in a retrospective study in Korea in 2011 and has since been reported in Japan, China, and Australia [2,15,17,21]. However, CaKoV had never been reported in Thailand or elsewhere in the Southeast Asia region. In this study, during the 2-year survey program, we found a CaKoV positivity of 17.59%, in both sick (19.44%) and healthy (9.09%) animals. Compared with other studies, the CaKoV positivity in this study was lower than those in China (54%) and Korea (32.2%) [14,22]. Our results showed that CaKoV was frequently detected in younger dogs (25.45%), consistent with previous reports [15]. Similar to other previous studies, co-infections with other enteric viral pathogens were observed, such as CaKoV/Canine parvovirus and CaKoV/Canine coronavirus [12,14,15]. Moreover, CaKoVs were detected in both diarrheic and non-diarrheic dogs, consistent with other studies [2,15]. Our results support the view that this virus may not be the only cause of enteric disease in dogs. Nevertheless, CaKoV infection has still been identified in symptomatic dogs without other enteric pathogen infections [12]. Our observations support that the role of CaKoV as a primary pathogen of acute gastroenteritis remains unclear.
In this study, the genome size of the 4 Thai CaKoVs is 7,530 bp, with one ORF encoding a putative polyprotein of 2,444 amino acids, which is comparable to previous reports. The genome organization of CaKoV includes a leader protein (L), structural proteins (VP0, VP3, VP1), and nonstructural proteins (2A, 2B, 2C, 3A, 3B, 3C, 3D). Phylogenetic analyses showed that the Thai CaKoVs were closely related to each other and clustered with Aichivirus A. It is noted that Thai CaKoVs were closely related to the Chinese CaKoV sub-cluster but separated from the sub-cluster of viruses from the US, UK, Brazil, and Tanzania (Fig. 3). Phylogenetic analysis of the 3D gene showed a similar result, in which Thai CaKoVs grouped together with Chinese CaKoVs (G1 sub-cluster). This observation regarding the sub-clusters of CaKoVs is in agreement with a previous study [23]. On the other hand, based on the VP1 gene, the viruses clustered into 2 major subgroups, a US/EU/Africa subgroup and a China/Thailand subgroup, similar to previous reports [16,22] (Figs. 4 and 5).
Genetic analyses of Thai CaKoVs showed that the whole genomes of the 4 Thai CaKoVs had the highest nucleotide similarity to Chinese CaKoVs, including SMCD-59 and CH-1. This observation supports the phylogenetic analysis showing that Thai CaKoVs were closely related to the Chinese CaKoV sub-cluster but separated from the sub-cluster of viruses from the US, UK, Brazil, and Tanzania. Of all viral genes, the VP1 gene was the most diverse among Thai CaKoVs and the other reference CaKoVs. A similar observation was reported in a previous study, in which the VP1 protein was the most variable capsid protein [24]. It is noted that the putative proline-rich region at VP1 positions 228-240 (P228XPPPPXPPXPXP240) was observed both in Thai CaKoVs and in the reference viruses. Previous studies indicated that the proline-rich region may be associated with enteric receptor binding of the viruses [14,24]. Thai CaKoVs possessed this proline-rich motif (VP1; 228-240), which was also observed in most reference viruses from China, Korea, Japan, the US, and the UK. These amino acids were not observed in the CaKoVs from Australia (CE9), Brazil (BRA/26), and Tanzania (TZ/75, TZ82) [16,20]. However, the association of these unique amino acids with viral pathogenesis still needs to be further investigated. Based on genetic analysis, unique amino acids at positions 65V, 67D, 119L, 138T, 150P, 151M, 153D, 201S, 204Q, 205Q, 201Q, 213T, and 241E were observed. These unique amino acids of the China/Thailand sub-cluster could be beneficial for identifying virus origin or for diagnostic purposes in the future. Similar to a previous study, the predicted amino acid cleavage sites of the whole genome were conserved among CaKoVs, except for one variation at 776/777 (VP3/VP1), which is unique to wild carnivores [16].
Conclusions
In conclusion, this study is the first to report canine Kobuvirus in dogs in Thailand. CaKoVs were mostly detected in clinical dogs of young age. However, the viruses could be detected in both sick and healthy dogs.
Sample collection
Sample collection was conducted in domestic dogs at small animal hospitals in Bangkok and its vicinity, Thailand, from September 2016 to September 2018. A total of 307 rectal swab samples were collected from healthy dogs (n = 55) and dogs with gastroenteritis symptoms (n = 252), including vomiting, watery diarrhea, hemorrhagic diarrhea, and dehydration. The swab samples were collected from dogs of young age (<1 year) (n = 165), adults (1-5 years) (n = 98), and older dogs (>5 years) (n = 44). The animal demographic data, including age, sex, breed, and vaccination history, were also recorded. The study was conducted under the Chulalongkorn University animal use and care protocol #1731074. Written consent to participate was obtained from the owners of the animals used in this study.
Canine Kobuvirus (CaKoV) detection
All 307 samples were subjected to canine Kobuvirus identification by one-step RT-PCR using primers specific to the 3D gene of CaKoV [21]. First, RNA extraction was performed using the QIAsymphony DSP Viral/Pathogen mini kit (Qiagen, Hilden, Germany) following the manufacturer's instructions. The RNA samples were then screened for the 3D gene of CaKoV by the one-step RT-PCR assay.
Phylogenetic and genetic analyses of canine Kobuviruses
The phylogenetic and genetic analyses were performed by comparing nucleotide sequences of Thai CaKoVs with those of kobuviruses available from the GenBank database. The reference nucleotide sequences of CaKoVs were retrieved based on their different geographic locations, host species, and dates of isolation. Phylogenetic analysis of CaKoV was performed using MEGA v.6.0 (Tempe, AZ, USA) [29] with the neighbor-joining method, the Kimura 2-parameter model, and 1,000 bootstrap replicates, and with the BEAST program using Bayesian Markov chain Monte Carlo (BMCMC) with 10,000,000 generations and an average standard deviation of split frequencies <0.05 [30]. For genetic analysis, the nucleotide sequences and deduced amino acids of CaKoVs were aligned and compared using MegAlign software v.5.03 (DNASTAR Inc.; Wisconsin, USA). Pairwise comparisons of nucleotides and amino acids of Thai CaKoVs and reference CaKoVs were conducted. Variable and unique amino acids related to receptor binding and host preference of CaKoVs were monitored.
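A simple way to reproduce pairwise percent-identity values of the kind reported in Tables 2 and 3 is sketched below; it is a generic stand-in for the MegAlign pairwise-comparison output, computed from an already-aligned set of sequences.

```python
from itertools import combinations

def percent_identity(a, b):
    """Percent identity over aligned positions where neither sequence has a gap."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return 100.0 * sum(x == y for x, y in pairs) / len(pairs) if pairs else 0.0

def identity_matrix(aligned):
    """aligned: dict mapping sequence names to rows of a multiple alignment
    (e.g., parsed from an aligned FASTA file)."""
    return {(p, q): percent_identity(aligned[p], aligned[q])
            for p, q in combinations(aligned, 2)}
```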
Costing Analysis of a Pilot Community Health Worker Program in Rural Nepal
Data from a retrospective costing analysis offer insights and practical considerations for policy makers and locally elected officials designing and implementing a new community health worker cadre as a mechanism to achieve SDG targets in Nepal.
INTRODUCTION
As the global community works to collectively realize our commitment to universal health coverage (UHC) and the Sustainable Development Goals (SDGs), robust community health care systems will be a critical foundation. 1 Community health worker (CHW) programs have increasingly received attention and focus as a key strategy to achieve the SDGs and UHC, with dramatic increases in global investments and scale-up of national and subnational programs. 2 Strong evidence suggests the cost-effectiveness of CHW programs, with economic returns of up to 10:1. 3 The World Health Organization (WHO) has recently endorsed them as a key mechanism to achieve UHC and SDG 3 (ensure healthy lives and promote well-being for all at all ages) in the first global guidelines for CHW program design and implementation, 4 offering important guidance to policy makers and locally elected officials looking to improve progress toward SDG targets.
Nepal has made important gains in health outcomes, including a two-thirds decline in maternal mortality and halving rates of stunting between 1990 and 2015. 5 Despite this, similar to many countries, Nepal is not presently on track to meet its SDG targets by 2030, 6,7 with maternal mortality at 239 per 100,000 live births, under-5 mortality at 39 per 1000 live births, and 38% of disability associated with noncommunicable diseases (NCDs) occurring before age 40 years.
Community Health Care in Nepal
For decades, Nepal has been a leader in community health care systems. The country has a long history of various CHW models, including both full-time and part-time and paid and voluntary cadres, covering a range of programmatic outreach and service delivery foci.
In the 1980s, the Ministry of Health introduced full-time, paid village health workers (VHWs), who received 6 weeks of primary health care training and focused mainly on increasing immunization. After the VHW program was well established, community health leaders were added to support the VHWs, promote the health messages they were spreading, and increase community participation in improving the nation's health. 8 Unlike VHWs, community health leaders were unpaid volunteers who received only 1 month of training and did not make home visits, but rather coordinated convenient places and times for people to meet them elsewhere in the community. 8 In 1988, the government initiated the voluntary, part-time cadre of female community health volunteers (FCHVs) to focus on increasing uptake of family planning methods along with immunizations. Both of these groups of health care workers were part of Nepal's response to the global movement toward primary health care that emerged out of the 1978 Alma Ata Declaration. 9 The FCHV program was developed as a country-level response to the global focus on primary health care and has grown to more than 50,000 volunteers nationally. The FCHV program has been a pillar of improved health care outcomes in Nepal, including the national priorities of vitamin A distribution, immunization, and antenatal and postnatal care. 10 In 2010, Nepal was recognized as a leading country in progress toward achieving the Millennium Development Goals, and the FCHV program was acknowledged as a key component in that effort. 11 In the early 1990s, in line with the new National Health Policy of 1991 aimed at bringing health care services closer to rural communities, the government introduced the maternal and child health worker (MCHW). 12 Also a full-time, paid position, MCHWs were charged alongside FCHVs with covering entire village development committees, the smallest geo-administrative unit at the time, and were affiliated with rural health care facilities such as primary health care centers and subhealth posts. As educational levels and training opportunities increased over time, 13 MCHWs and VHWs were transitioned to auxiliary nurse-midwives (ANMs) and auxiliary health workers (AHWs), respectively, both of which undergo 18 months of pre-service training. In parallel, the FCHV program continued to grow and now includes more than 50,000 volunteers covering communities throughout all of Nepal's 7 provinces (roughly 1 FCHV for every 500-1,000 people in the country), and their areas of responsibility have broadened to include a range of other maternal, neonatal, and child health outreach and services. 12 Despite Nepal's significant successes in community health, in particular the FCHV program, there are areas for improvement for community-based cadres. These include having more robust managerial and training structures, establishing minimum educational requirements, expanding to full-time paid employment (FCHVs are part-time volunteers working on average 7.2 hours per week), and addressing supply chain management challenges similar to much of the rest of the health care system. 14-16
Although improvements are being made to the FCHV program, including the new FCHV Strategy 2076 (which will outline a minimum educational requirement for new FCHVs) and other efforts at local levels where FCHVs are advocating for improved structure, supervision, and payment, these challenges make it unlikely that the FCHV program in its current state presents a viable pathway toward the SDGs. Indeed, over the last decade, policy discussions and pilot programs have increasingly examined leveraging additional community-based cadres, including ANMs, to expand community health. 17 Nepal recently transitioned to a federal republic, including further decentralization of the health care system. Newly elected municipal governments are seeking locally appropriate strategies to better respond to their constituents' health care priorities. Although this transition and decentralization process has brought significant challenges, paired with Nepal's commitment to UHC and the SDGs, the new political context is also an excellent foundation for improved municipality-based CHW service delivery.
Improving Community Health Care Models in Nepal
To meet the gap in SDG targets in Nepal, it has become clear that new ideas and investments need to be made in CHW systems as part of overall health care systems strengthening efforts. 14,15,18,19 The WHO guidelines, 4 and other recent recommendations 2,20 for design of effective CHW programs, offer helpful framing in these regards, highlighting that CHWs should: (1) receive regular financial compensation; (2) meet a minimum education level; (3) be well supervised; (4) be continuously trained; (5) be closely integrated into the local primary health care system; (6) use a mobile health tool; (7) have consistent supply chain; and (8) live in the communities they serve.
In this context, the nongovernmental organization Nyaya Health Nepal developed a collaborative effort with the Ministry of Health and Population, Department of Health Services, Family Welfare Division to implement a CHW pilot program 21 that was closely aligned with WHO guidelines (Table 1), which may offer insight for Nepal's future CHW programs. This pilot program has had promising early results, described in detail elsewhere, 22,23 including improvements between 2014 and 2016 in antenatal care (ANC), the institutional birth rate, and postpartum contraception (Figure 1). Since 2016, monitoring data collected in the course of program operations have shown a further increase in the institutional birth rate in the catchment area, to 96% (unpublished data). More recently, in multiple municipalities across the country, locally elected officials are planning or have already begun implementing new community-based services, including adjustments to the FCHV program as well as introduction of new cadres with a range of training and skillsets (e.g., ANMs). Despite growing interest, there is limited consensus or coordination on how such programs or new cadres should be implemented.
Although CHW programs may be cost-effective strategies for health care systems strengthening, 3,4 there are limited data in Nepal regarding the costs or operational details of what a CHW program aligned with WHO guidelines would entail. Understanding the available options as policy makers and locally elected officials consider more robust community-based service delivery will be critical to achieving SDG targets. Here, we describe costs for a catchment area population of 60,000 community members in the pilot program, including analysis of per-capita costs, service delivery costs, and administrative costs. Secondly, to situate these costs in the context of current discussions at the municipal level, we provide 3 additional implementation scenarios for the pilot CHW cadre to reflect local policy makers' considerations. This analysis may be instructive for locally elected officials and future community health care systems policy in Nepal and, more broadly, in similar settings globally that leverage community health care strategies to achieve UHC and SDG targets.
Population and Setting
In 2014, Nyaya Health Nepal began CHW program development and implementation in Achham district in Province 7, working with a catchment area population of approximately 36,000. In 2017, the pilot study was implemented in both Achham and Dolakha districts (Province 3), targeting expansion to a catchment area population ultimately of more than 250,000 people. 21 The subpopulation of the pilot described in this analysis included the original program area, Sanfebagar municipality, and the first expansion area, Kamalbazaar municipality in Achham, which together have a population of 60,504 persons. 24 A total of 30 CHWs were employed in the 2 pilot municipalities during the period of July 16, 2017, to July 15, 2018. Achham is historically one of the most impoverished districts, with some of the lowest-performing health indices in the country. 5,10 The government-owned Bayalpata Hospital in Achham is operated by a public-private partnership between the Ministry of Health and Population and Nyaya Health Nepal. Through this public-private partnership, the hospital is owned by the Ministry of Health and Population and is financed through federal, provincial, and municipal budgetary allocations, supplemented by Nyaya Health Nepal's own financing. Nyaya Health Nepal oversees day-to-day management and operation of the facility and all health care services and is accountable to the Ministry for regular reporting via the local District Health Office. Bayalpata Hospital serves as a referral facility for comprehensive emergency obstetric and newborn services and noncommunicable disease (NCD) management, in addition to providing adult and pediatric medicine and basic surgical services for Sanfebagar. Similarly, the Achham district hospital in Mangalsen serves as the referral facility for Kamalbazaar and provides a similar range of services. Kamalbazaar also has a government primary health center that provides basic outpatient care; maternal and child health services such as immunization, nutrition, pneumonia, and diarrhea care; pre- and postnatal care; and maternal and newborn care, including emergency obstetric services; as well as several village health posts, which provide a smaller set of basic primary and ANC services.
CHW Program Design, Structure, and Service Delivery
For the current pilot, the initial program protocols were adapted in collaboration with local government partners and the Ministry of Health and Population, Department of Health Services, Family Welfare Division.
All CHWs in the pilot: (1) receive regular financial compensation; (2) meet a minimum education level requirement of secondary schooling; (3) are well supervised; (4) receive continuous training; (5) are closely integrated into the local primary health care system; (6) use a mobile health tool; (7) have a consistent supply chain; (8) live in the communities they serve; and (9) provide service without point-of-care user fees. CHWs are full-time, paid employees of Nyaya Health Nepal and are assigned to individual villages, or "wards," according to the political designation of the new federal system. CHWs are paid a starting base salary of Nepalese rupees (NPR) 19,930 per month. Nyaya Health Nepal's procurement and operations teams provide CHWs with supplies to use in their daily work, such as mobile phones, urine pregnancy tests, a blood pressure cuff, upper arm circumference measuring tape, and other basic supplies. Each CHW is responsible for a population of, on average, 2,000 residents. In a few areas, where the population is very spread out and the terrain is more challenging, an additional CHW is employed to cover the ward. All CHWs have dedicated supervision, with 1 community health nurse (CHN) supervising, training, and conducting monitoring and evaluation for 5 CHWs. Community health program associates supervise 2-3 CHNs and are responsible for program planning and administration (Figure 2). CHWs receive 1 month of preservice training, followed by 2-week training modules each month over the following 3 months after baseline family registration is complete. Continuous ongoing trainings are conducted at monthly CHW meetings. CHWs are engaged with several subpopulations within their catchment area: married women of reproductive age, children aged 2 years and younger, pregnant and postpartum women, and adults with chronic diseases. For each of these subpopulations, the CHWs maintain a regular home visit schedule and conduct counseling, case detection, and referrals. During home visits, CHWs conduct urine pregnancy testing; trimester-specific antenatal counseling and referral; postnatal counseling and referral (with a strong emphasis on contraception counseling); age-specific child health and nutrition counseling; childhood illness screening and referral (based on community-based Integrated Management of Neonatal and Childhood Illnesses); and NCD counseling and referral (Figure 2). 21 Services delivered are determined based upon local and national priorities and morbidity and mortality burden. In addition to home visits, CHWs facilitate group ANC sessions at local health posts on a bimonthly schedule, with 1 of the monthly visits for early gestation (4- and 6-month pregnant women) and the other for late gestation (8- and 9-month pregnant women). CHWs coordinate with the existing primary health care system (including local village clinics, primary health care centers, and hospitals) through monthly data sharing and following up on referrals made. They also join monthly FCHV meetings at local village clinics and coordinate with FCHVs to ensure that all pregnant women and children are identified and receiving care. The program does not alter public health services coordinated and delivered in the study areas through the Government of Nepal, with the exception of group ANC. The CHWs work very closely with government ANMs at village health posts to facilitate group ANC sessions.
These sessions are intended to replace individual ANC visits, as physical assessments and medication distribution are conducted during the group sessions along with discussion and counseling. Sometimes women present outside of group visits for individual ANC, but the group ANC sessions fulfill all basic ANC requirements. Additionally, CHNs provide antenatal ultrasound and prenatal lab services (including hemoglobin, HIV, blood typing, hepatitis B and C, and urine glucose and protein) during these sessions. 25,26 CHNs receive preservice training as well as ongoing training and assessment to assure the quality of these services. The role of FCHVs is not altered in the study areas.
CHWs are equipped with Android-based smartphones utilizing the CommCare platform 27 for clinical service documentation and for content to support counseling and referrals. The data from the CHW smartphones are shared with the Nyaya Health Nepal facility-based electronic health records to aid in providing continuity of care between community- and facility-based services. 23 The supply chain is managed through a digital inventory management system linked to Nyaya Health Nepal's facility-based electronic health record system. No user fees are charged for services delivered by CHWs. The structure is closely aligned with WHO guidelines for the design and implementation of CHW programs (Table 1). 4,23 The pilot study has been described in detail previously. 21 In brief, the pilot is a Type II hybrid effectiveness-implementation research study, evaluating effectiveness using a pre-post, quasi-experimental design with stepped implementation and evaluating implementation using the RE-AIM framework. Enrollment began in 2017 and concluded in 2019, with the goal of enrolling over 250,000 community members across both Dolakha and Achham districts. The analysis included in this discussion focuses on a catchment area population in Achham district only, comprising 60,504 persons across 2 municipalities (Table 2): Sanfebagar municipality (population 36,766) and Kamalbazaar municipality (population 23,738). The pilot is ongoing, and future analyses will include data for the full catchment area.
Costing Analysis
We performed a retrospective costing analysis using demographic, programmatic, and financial data for the period between July 16, 2017, and July 15, 2018. All programmatic information, including the number of households enrolled in the CHW program, the number of beneficiaries receiving care, CHW encounters and service delivery, and non-care-delivery events such as trainings and supervision field visits, was collected from the CommCare platform and staff program calendars. CHWs use CommCare to collect household visit data, and these data formed the basis for analyzing CHW resource utilization during care delivery.23 All direct costs (personnel, medicines and consumables, and depreciation of equipment) and indirect costs (transportation, regular supplies, staff benefits, and depreciation of digital tools) were obtained from organizational financial records. The Nepal Health Research Council (#461/2016) and Brigham and Women's Hospital Institutional Review Board (2017P000709/PHS) approved the study. Verbal informed consent was obtained by CHWs for delivery of patient care and for use of a limited identifiable dataset for research analysis.
We employed a 'top-down' costing methodology based upon methods previously described by the Joint Learning Network.28 This methodology disaggregates all direct and indirect costs first into 'intermediate cost centers' and then into 'final cost centers.' Intermediate cost centers consisted of 6 care delivery programs: pregnancy case detection; individual ANC; group ANC; postnatal care; childhood illness management for children aged 2 years and younger; and NCD management. Intermediate cost centers also included 5 administrative functions: planning and administration; training; supervision, monitoring and evaluation; data reporting and learning; and continuous surveillance. Final cost centers consisted of only the 6 care delivery programs.
To allocate costs to intermediate cost centers, we defined a 'capacity cost rate' for CHWs, equal to the time spent in each service encounter (available from the CommCare database) divided by the number of available minutes during the year. Personnel costs of supervisory and administrative staff (community health program associates and CHNs, as illustrated in Figure 2) were allocated to administrative functions based on resource attribution gathered from staff program calendars. Staff benefits and expenses, such as uniforms, food, and telecom network expenses, were allocated using the same methodology. Notably, costs for regional and national administrative staff were not included, given that these roles are primarily responsible for program design and organizational strategy and are not anticipated to be necessary if the program scales beyond the pilot study. Similarly, initial programmatic development costs were not included. All remaining direct and indirect costs were attributed to either programs or administrative functions based on their relative use (e.g., transportation costs were partly allocated to supervisory systems and to the group ANC program, and mobile phones and associated technology support were allocated to data reporting). In summary, the allocated costs represent annual recurring costs of operating the CHW program and do not include costs relating to initial training and program development.
This approach ensured that all direct and indirect costs were allocated to the intermediate cost centers of the 6 service delivery programs or the 5 administrative functions. To arrive at final costs, all administrative function costs were further allocated downward to service delivery programs based on the CHW encounters using a step-down costing methodology. 28 The final cost centers are the 6 service delivery programs.
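A minimal sketch of the second, step-down stage may make the allocation concrete. The cost-center names follow the text, but every number below is hypothetical, not the study's actual data:

```python
# Hypothetical annual costs (US$) already pooled into intermediate cost
# centers; all figures are illustrative, not the study's actual data.
program_costs = {  # 6 care delivery programs
    "pregnancy_case_detection": 40_000,
    "individual_anc": 12_000,
    "group_anc": 20_000,
    "postnatal_care": 10_000,
    "under_2_care": 15_000,
    "ncd_management": 10_000,
}
admin_costs = {  # 5 administrative functions
    "planning_admin": 20_000,
    "training": 15_000,
    "supervision_m_and_e": 25_000,
    "data_reporting": 10_000,
    "continuous_surveillance": 7_500,
}
encounters = {  # CHW encounters per program: the step-down allocation basis
    "pregnancy_case_detection": 7_000,
    "individual_anc": 2_000,
    "group_anc": 700,
    "postnatal_care": 1_500,
    "under_2_care": 3_000,
    "ncd_management": 1_800,
}

total_admin = sum(admin_costs.values())
total_encounters = sum(encounters.values())

# Step-down: push every administrative cost down to the service programs
# in proportion to each program's share of CHW encounters.
final_costs = {
    program: cost + total_admin * encounters[program] / total_encounters
    for program, cost in program_costs.items()
}

catchment_population = 60_504  # Achham catchment reported in the text
for program, cost in final_costs.items():
    print(f"{program}: US${cost:,.0f} total, "
          f"US${cost / catchment_population:.2f} per capita")
```

Dividing each program's final cost by the catchment population yields per capita figures analogous to those reported in the results below.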
We performed an analysis of programmatic cost per capita by village-the smallest geopolitical division-for which there are 14 villages in Sanfebagar municipality and 10 villages in Kamalbazaar municipality. Analysis of cost by service delivery-pregnancy case detection, individual ANC, group ANC, postnatal care, under-2 care, and NCD care-was performed for both cost per capita (of the total catchment area) and for cost per beneficiary (for persons who received the service). Notably, group ANC and NCD services were available to only 56% and 61% of the population, respectively, during the measurement period. Group ANC limitations were due to certain villages in both municipalities not having yet implemented services, and NCD services were not implemented in Kamalbazaar due to constraints at the local primary health care center for NCD management.
All costs were measured in NPRs and converted to US dollars using a conversion rate of NPR104.4 to US$1, the average exchange rate for the 1-year measurement period. 29 Costs in NPRs are available upon request. Further costing methodology details are included in Supplement 1.
Alternative CHW Implementation Scenarios
We used the costing analysis to generate insights regarding the cost of the pilot program as policy makers consider new community-based strategies, and we projected costs under 3 alternative implementation scenarios. Scenario 1 retains the pilot structure but adjusts CHW salaries to the minimum wage stipulated in the Nepal Labor Act. Scenario 2 projects implementation wherein administrative functions of the program are absorbed into municipal health care unit governance structures. Specifically, this scenario assumes the functions performed by community health program associates (Figure 2), such as program planning, budgeting, evaluation, human resources, and financial management, are taken on by local governance structures, thereby eliminating the community health program associate cadre. Scenario 3 incorporates the CHW program directly into existing primary health care infrastructure (e.g., primary health centers or health posts). In this scenario, the functions of CHNs (Figure 2), namely supervision, training, and monitoring and evaluation, would be performed by government health care facility staff (e.g., ANMs), thereby eliminating the CHN cadre. Additionally, as group ANC services in this pilot are delivered in large part by CHNs, this service would be subsumed into local health care facility service delivery or discontinued.
We developed these scenarios in response to the authors' conversations with multiple stakeholders at the federal and municipal levels highlighting 3 broad challenges: amount of payment for CHWs; CHW supervision structure; and integration of new CHW cadres into local primary health care systems. These scenarios are intended to complement the costing analysis results and offer insight into human resources and implementation possibilities as locally elected officials and federal policy makers consider the development of similar programs throughout the country.
Scenarios have not been tested but are presented for the purposes of considering alternative policy options. As each scenario also impacts cost, changes are presented in a cumulative manner in decreasing order of overall cost; the conditions and cost reductions of scenario 1 are included in scenario 2, and conditions from both scenarios 1 and 2 are included in scenario 3.
Program Cost
We analyzed the cost of pilot implementation for villages in 2 municipalities (Table 2) across multiple dimensions: cost per capita by village, cost by administrative function, cost by service delivery area, cost per beneficiary, and the split between direct and indirect costs.
The analysis by village demonstrated an average cost per capita in the Sanfebagar municipality of US$3.22 and in the Kamalbazaar municipality of US$2.79 ( Figure 3). The overall annual cost of the pilot CHW program during the period was US$184,504. The population-weighted average annual cost per capita across both municipalities was US$3.05. The overall cost function and variation between villages are largely driven by CHW personnel costs that tend to be a step function and increase nonlinearly at population intervals.
The analysis of costs by administrative function (planning and administration; training; supervision, monitoring and evaluation; data reporting and learning; and continuous surveillance) demonstrated that administrative functions comprised 42% of overall costs (with service delivery comprising 58%). These costs were intermediate costs and were ultimately further allocated to the 6 service delivery areas as final cost centers. The largest cost drivers among administrative functions were supervision and monitoring and evaluation, which combined comprised 18% of overall costs (Figure 4). There was no significant variation in the composition of intermediate costs between the Sanfebagar and Kamalbazaar municipalities.
Analysis of final cost centers (pregnancy case detection, ANC, group ANC, postnatal care, under-2 care, and NCD) is shown in Figure 5. Pregnancy case detection was a leading overall cost driver, with a per capita cost of US$0.75, yet it had the lowest per-beneficiary cost, at US$5.74. The highest cost of service per beneficiary was group ANC, at US$27.06. The higher costs of group ANC were due to the larger time allocation of CHNs in supervision, the lower number of beneficiaries relative to home visits, and the lab and diagnostic services provided during group sessions. Analysis of cost by functional type of expenditure (direct costs covering personnel, equipment, and consumables, and indirect costs disaggregated into transportation and other) demonstrated that direct costs constitute 81% of overall costs, including staff costs of 74%, with transportation the second leading cost driver at 11% (Figure 6). In comparing municipalities, personnel comprise 78% of costs in Sanfebagar but only 66% of costs in Kamalbazaar. This is likely because the CHW-to-population and CHN-to-CHW ratios were lower in Kamalbazaar than in Sanfebagar (Table 2).
Costs of Alternative Implementation Scenarios
The projected additional per capita costs, over and above the current public-sector budgetary allocation, for the 3 alternative implementation scenarios of a CHW cadre are shown in Figure 7. In the first scenario, CHWs would receive an 'adjusted salary' of US$1,829.36 annually, as per minimum wage stipulations in the Nepal Labor Act.30 Figure 7 also describes potential advantages and disadvantages of each scenario. Cost per capita and per beneficiary for each scenario and further details of costing are included in Supplement 2.
DISCUSSION
We describe the costs of a pilot CHW cadre, aligned with WHO's guidelines for CHW program design, to provide operational and financial insight to policy makers considering new community-based services. The costs for the program were, on average, US$3.05 per capita, with variation per service delivered (Figure 5). Similar to other community health care programs, costs were largely due to human resources and transportation (Figure 6).31,32 Pregnancy case detection and NCD services included some of the higher costs per capita, though pregnancy case detection had the lowest cost per beneficiary. Pregnancy case detection was the largest program component by beneficiary count in both municipalities, whereas NCD services were the second largest in the Sanfebagar area. Group ANC services comprised the highest per-beneficiary cost due to the inclusion of laboratory tests and ultrasound services provided during sessions and the presence of CHNs at all sessions (with corresponding increased transportation costs). Although women in areas where group ANC was not offered were also encouraged to receive these diagnostic tests, services occurred at higher-level health care facilities where available and thus were not included in the cost of CHW program delivery. Costs in the Sanfebagar municipality were higher, on average, than in Kamalbazaar due in large part to a higher staffing-to-population ratio (Table 2). We did not include a cost-benefit analysis as a part of this study due to limitations in data availability.
Nepal, like many countries, has a strong high-level commitment to UHC and the SDGs. A CHW cadre, such as the pilot described, offers one potential path forward for Nepal and other countries. Although it is difficult to compare costs across programs or geographies and our current analysis is not a cost-effectiveness analysis, the costs of the program described are broadly aligned with, if not perhaps cheaper than, programmatic costs in other community-based programs.3,31,33-35 Current health care spending in Nepal is US$51 per capita, or 6.7% of the 2016 gross domestic product (GDP), and Nepal's SDG targets have 2 important implications for health care delivery. First, to reach the SDG goals, Nepal intends to increase per capita health care spending from US$51 to US$175 by 2030. The majority of this growth will come from Nepal's intended increase in GDP per capita from US$759 to US$2,500 by 2030, while health care contributions will improve only marginally, from 6.7% to 7.0% of GDP. Second, to expand financial risk protection for citizens, Nepal intends to reduce the share of out-of-pocket spending from 52% of total health care expenditure (2016) to 35% by 2030. Accordingly, public-sector contributions to health care will need to increase by 12% annually in real terms, from US$13.60 of government-funded per capita health care spending (2016) to US$77 per capita by 2030. Further details are included in Supplement 3. As the Government of Nepal increases the fiscal space dedicated to health care, it will need to further incorporate CHWs as part of overall health care systems strengthening efforts.36 The recently established federal system of governance, decentralized health care administration, and the newly elected municipal governments throughout Nepal provide an important opportunity for enhanced community health delivery. The costs presented here provide insight into what is required to deploy a CHW cadre closely aligned with WHO guidelines. Conversations regarding implementation of new community-based cadres are already occurring at the federal and municipality levels, where newly elected officials are eager to improve health indices for their constituencies. Although the costs of CHW service delivery are a concern to policy makers, with increased health care spending, including if Nepal spends the recommended 7.0% of 2030 GDP on health care required to meet its SDG targets, the allocation required for a cadre as described in the pilot may be quite feasible. Having said that, there are multiple other concurrent demands on the MOHP budget that would compete for additional funding were it to materialize.
The 3 alternative implementation scenarios presented provide additional insight for policy makers and locally elected officials. Further research is required to draw conclusions regarding impact and cost, but the scenarios reflect ongoing conversations at both federal and local municipal levels at present. As highlighted in Figure 7, there are advantages and disadvantages that must be accounted for in considering policy approaches to deploy CHW cadres. In these regards, we caution policy makers against accounting only for budgetary implications, as overall programmatic effectiveness may suffer with more limited supervision and administrative oversight (e.g., scenarios 2 and 3), thereby potentially negating the investment and limiting progress toward UHC and health-related SDG targets. Growing evidence globally demonstrates the importance of strong supervision, which should be taken into account as implementation strategies are considered.4,[37][38][39] Additionally, although scenarios 2 and 3 require less per capita budgetary allocation over and above the current health care budget, they also require public-sector staffing adjustments to ensure adequate CHW programmatic oversight, and these costs have not been accounted for. Thus, we encourage policy makers to consider most importantly how implementation of a CHW cadre can be locally owned with significant community and local governance engagement, be well integrated into existing primary health care systems, and have the necessary supportive supervision to optimize impact. Scenario 1 decreases CHW salaries relative to the pilot program. From a human capital development perspective, this reduction is undesirable as higher salaries enable further opportunity and empower women CHWs, many of whom may be otherwise unemployed and/or less socioeconomically empowered. However, in the context of the health care system and current minimum wage standards, a lower salary may also increase feasibility and avoid potential perceptions of salary inequity. This could improve collaboration and integration of the cadre into the health care system. We believe this scenario is more feasible in the current political climate.
Scenarios 2 and 3 present important opportunities for further integration of CHWs into local primary health care systems, as well as improved ownership by local governance bodies and stakeholders. Strong linkages to primary health care systems and community engagement are key elements to CHW programmatic success and thus are included in the WHO guidelines for CHW program design, 4 making this an important potential benefit of both scenarios. Conversely, the lack of dedicated supervisors in these scenarios poses risks to implementation of effective supportive supervision practices, training, monitoring and evaluation, and supply chain management, with scenario 3 posing greater risk in these regards.
In other examples of CHW program implementation, CHW supervisors who have additional responsibilities (e.g., providing clinical services at the local health post) have experienced challenges providing the supportive supervision that WHO guidelines recommend, including regular coaching and mentoring of CHWs, direct observation of CHW service delivery, and review of performance data and community feedback.4,20 These challenges may arise for multiple contextually specific reasons. Health care facility staff who are asked to supervise CHWs, in addition to conducting their routine clinical or administrative responsibilities, frequently do not have the availability, training, or appropriately aligned incentives to optimize supportive supervision. This situation can manifest in supervisors infrequently visiting communities to observe CHW service delivery or having limited time and capacity to routinely review data, coach, mentor, and provide performance feedback. Scenarios 2 or 3 may pose similar risks. As discussed previously, over the last 40 years, Nepal's community health care system has included multiple community-based cadres, both full-time and part-time, paid and volunteer, including VHWs, community health leaders, MCHWs, ANMs, AHWs, and FCHVs.8,12,18,40,41 Now, with renewed and increasing enthusiasm to bolster progress toward the SDGs through community-based cadres, including growing recognition of the need to enhance capacity in the FCHV cadre, paired with the opportunity of increasing fiscal space at the federal and provincial levels, there is an important opportunity to offer guidance on how improvements in the community health care system can be optimally implemented.17 Because the pilot described here was designed before the increasingly popular concept of ANM- or AHW-based community services, it did not incorporate these cadres specifically into the pilot methodology. Notably, several CHWs employed in this pilot program were in fact ANMs. However, this was not an intentional aspect of the pilot program protocol, and the performance of CHWs who had ANM qualifications was not compared to that of CHWs without ANM certification. As such, while it may ultimately be the case that AHWs and/or ANMs are well-positioned to carry forward the work of such a community-based cadre as described in this pilot, the data included in this manuscript cannot specifically comment on this question. Further research detailing the feasibility of the ANM or AHW cadre leveraged in this particular capacity should be considered to address these questions.
Finally, it is important to recognize the opportunity a 'dual-cadre' system provides in which paid full-time CHWs work closely with a volunteer cadre to optimize community-based service delivery. Dual-cadre systems are exemplified in other countries, including Ethiopia's health extension workers and health development army volunteers. These systems have been employed historically in Nepal as well, with FCHVs working in collaboration with VHWs and MCHWs, and now in various capacities with ANMs and AHWs at health posts and primary health care centers. 12 Similarly, the potential to further leverage the extensive reach, infrastructure, and effectiveness of the FCHV network with a new CHW cadre is significant. The CHWs in the pilot described here interact regularly with FCHVs, including attending FCHV meetings, health campaigns, and collaborating on health promotion activities. As CHWs are charged with enumerating and enrolling all members of every household in their respective catchment areas, they have also historically been accompanied by FCHVs during household visits. This has been helpful to ensure CHWs do not miss households or family members while conducting triage and referral care and community-based diagnosis, treatment, and counseling. Expansion of a full-time, paid community-based cadre could more deeply and effectively engage FCHVs (e.g., around outreach activities, civil registration and vital statistics, and routine monitoring and evaluation activities related to government reporting). Additionally, a CHW cadre like the one described in the pilot may further bolster supervision of FCHVs to include an enhanced focus on regular skill development, problem solving, performance review, professional development, and data feedback loops as part of routine work.
Limitations
Our study includes several limitations. First, our analysis regarding CHW time allocation was conducted using top-down allocations of costs from CommCare. Practically, this equates the time required for CHWs to complete a form on mobile phones using the CommCare application with the time spent providing care; however, this proxy has not been validated. During site visits, the use of CommCare tools was observed, and CHWs also provided self-reported average times required to conduct a specific type of home visit. No notable differences were found between time stamps from the CommCare tool and reported numbers. Nonetheless, differences here could impact our analysis. Additionally, some CHW services and functions (e.g., postpartum contraception counseling or time required for travel) are not accurately captured using this methodology; therefore, it is difficult to determine precise costs for these aspects of service delivery. However, given that the top-down costing approach includes all costs, these limitations should not impact overall per capita cost. More detailed analysis via time-driven activity-based costing would be more rigorous, but with human resource and financial constraints, this was not feasible. Second, given that this analysis includes only 2 municipalities, the external validity of our conclusions for other areas throughout Nepal is potentially limited. The full pilot will cover over 250,000 persons across 2 districts. The analysis presented includes a subset catchment area of 60,504 persons in 1 district. Additionally, as noted, there was incomplete implementation of group ANC and NCD services during the measurement period, which may have underestimated per capita costs accordingly. Future analysis will include the full catchment area when all service delivery is implemented, but such analysis is not expected for at least 2 more years. Given the delay anticipated, these data provide some early insights that may help inform decision making in the current policy context.
Third, our analysis excluded costs for some administrative personnel involved in programmatic design. We do not believe such personnel would be scalable for the program as they are not involved in service delivery, and budgetary allocations are likely to be constrained if the government chooses to scale the program in other geographies. However, the exclusion of such personnel presumes limited further ongoing design and iteration which may cause challenges for the program's operations. Additionally, these personnel do enhance oversight of the program currently and their absence could affect programmatic quality. Notably, such functions could also be fulfilled by local governance bodies or primary health care facility staff, as described in scenarios 2 and 3.
Finally, the pilot was implemented in the context of a public-private partnership, through which Nyaya Health Nepal oversees day-to-day operations of the public-sector Bayalpata Hospital; therefore, the generalizability of these results to other public-sector institutions should be interpreted with caution. Future scale-up of a CHW cadre is more likely in public-sector or strictly private-sector settings, as public-private partnerships remain limited in Nepal. The pilot context may also have overestimated costs of early-stage program development, as it includes a higher administrative staffing ratio than would be necessary in future at-scale efforts. Scenarios 1, 2, and 3 attempt to account for this discrepancy by adjusting to the expected minimum wage standard (scenario 1) and assuming public-sector staffing for administrative functions (scenarios 2 and 3). Conversely, our analysis does not include potential costs for administrative personnel overseeing scaling efforts if this program were to be adopted more broadly. Similarly, the unique management structure of the pilot is not generalizable, and this may impact the feasibility of a similar program in different contexts. Additionally, the catchment area also includes a hospital run by the same public-private partnership that offers access and quality of services not necessarily generalizable to other regions. Accessible, high-quality, facility-based services are a strong determinant of CHW program impact;4 thus, similar results in areas with less access to quality health care facilities may be less feasible.
CONCLUSION
As Nepal looks ahead toward achieving UHC and SDG targets, more robust primary health systems are required. A new CHW cadre, such as assessed in this national pilot, represents an important opportunity. The costs described may be instructive for policy makers and locally elected officials in Nepal and may also be relevant to countries with similar health care settings aiming to improve community health care systems on their path toward the SDGs.
Extending a Euclidean Model of Ratio and Proportion
This paper is written for mathematics educators and researchers engaged at the elementary and middle school levels and interested in exploring ideas and representations for introducing students to ratio and proportion and for making a smooth transition from multiplication and division by whole numbers to their counterparts with fractions. Book V of Euclid's Elements offers a scenario for deciding whether two ratios of magnitudes, each embodied as a pair of line segments, are equal, based on whether the ratios of magnitudes, when multiplied by the same whole numbers, n and m, each yield common products. This test of proportion can be performed using an educational software application where students are presented with a target ratio of commensurable magnitudes, A:B, and challenged to produce a selected ratio, C:D, that behaves like the target ratio under the critical conditions. The selected ratio is automatically constructed such that C:D = m:n, on the basis of a lattice point (n, m) chosen by the student. By adding partitive and Euclidean division to Euclid's model, five new scenarios with similar goals are proposed. Representations in the Euclidean plane, on a number line, and in the Cartesian plane provide feedback that students may use to help identify a ratio of whole numbers corresponding to the target ratio of magnitudes. The representations serve to highlight fractions as members of equivalence classes. The model remains to be investigated with teachers and students.
Comparing Ratios: Book V of the Elements and Beyond
This paper is written for mathematics educators and researchers engaged at the elementary and middle school levels and interested in exploring ideas and representations for introducing students to ratio and proportion and for making a smooth transition from multiplication and division by whole numbers to their counterparts with fractions. The ideas and representations have roots in Book V of Euclid's Elements, where a ratio corresponds to the relative magnitude of a pair of line segments and to the relative magnitude of a pair of whole number operators. Because ratios were limited to comparisons of entities of the same kind 1 , they offer a straightforward means of expressing the relations of less than, equal to, greater than, and q times as great as through juxtaposed line segments. Although mathematical notation in Euclid's time was limited, modern notation and representational systems may prove useful in extending the model and making it suitable for mathematics education in the present-day.
In Book V of the Elements, Euclid suggests a scenario in which one multiplies line segments in pursuit of common products, that is, as two equal outcomes of multiplication. When partitive division and Euclidean division are brought into the model, additional scenarios may be developed that serve, along with the common product scenario, to compare and order ratios of magnitudes. They may also serve to identify a ratio of whole numbers corresponding to a ratio of magnitudes, provided that the magnitudes are multiples of a common unit.
We begin by describing the Euclidean scenario involving multiplication of magnitudes represented as the successive concatenation of copies of given line segments. We then propose five additional scenarios aimed at the pursuit of common quotients, the transformation of one magnitude into another (via multiplication and division by whole numbers, by fractional operators, or by fractional increments), and the deployment of Euclid's algorithm. Next, we examine how representations in the Cartesian plane and on the real line may be associated with operations on pairs of line segments and how equivalent ratios are represented in each of these systems.
The intent is to describe scenarios that may serve as resources for nurturing schemes teachers and students at different levels might deploy for reasoning about: (a) ratios, quotients, and fractions as reflections of relative magnitude; (b) multiplication and division by fractions as compositions of multiplication and division by whole numbers; (c) fractions as rational numbers, as slopes, and as constants of proportionality in functions of direct proportion; and (d) representations of numbers as points on a number line.
We believe that the schemes students develop will entail perceptual structures for representing mathematical objects such as ratios, proportions and fractions as "`things' they can describe, measure, analyze, model, and symbolize with culturally accepted words, diagrams, and signs (Abrahamson, Dutton & Bakker, 2021, in press)." At the present stage of development, we will approach the schemes as "scenarios" to be introduced to students, leaving the sense teachers and students make of such scenarios for a future moment, when appropriate empirical data are available.
Introductory remarks
Ratio and proportion permeate the multiplicative conceptual field and entail invariants arising in diverse structures (isomorphism of measures, product of measures, and multiple proportion), symbolic representations (e.g., the number line, graphs of functions in the Cartesian plane, and algebraic notation), and situations (Vergnaud, 1983). By this standard, the model of ratio and proportion set forth in Book V of Euclid's Elements, restricted to quantities of the same kind, is limited. For instance, it will not be directly applicable to problems entailing relations between elements of distinct "measure spaces", that is, between elements from different quantity dimensions (such as time and distance). But there is considerable evidence that (1) students are having difficulty with fractions and rational numbers in middle school and well into high school (Carpenter, Corbitt, Kepner, Lindquist & Reys, 1980; National Mathematics Advisory Panel, 2008) and (2) their difficulties are rooted in misunderstandings about the magnitude of fractions as well as the effect of a fractional operator on the magnitude of the outcome (Siegler, Thompson, & Schneider, 2011; Siegler, Fazio, & Bailey, 2013; Torbeyns, Schneider, Xin & Siegler, 2015).
Restricting our focus to comparisons of quantities of the same kind does not mean that we will be setting functions to the side, for relations between quantities of the same kind may entail functions. In addition, such comparisons direct our attention towards mathematical topics related to the expansion of the concept of number, from natural numbers, through integers, to rational numbers. In this context, certain terms merit special attention.
Ratio, proportion, magnitude, and quantity are slippery terms because each has more than one distinct yet recognized meaning. Ambiguity can be minimized by restricting usage to a single meaning or by using various meanings while pointing out the intended interpretation in specific contexts. We follow both courses here.
Magnitude refers to the greatness, size, extent, or intensity of an entity, whether it be a phenomenon, a thing, a substance or a mathematical object such as a number. Entities can be ordered by their magnitudes. Collections of discrete entities can be ordered by their counts.
In ancient Greek mathematics, magnitude (μέγεθος) typically refers to some unmeasured size such as length, area, or volume, sweep of an angle, or size of a number.
There is a general agreement nowadays among scientific communities that a quantity is a "property of a phenomenon, body, or substance, where the property has a magnitude that can be expressed as a number and a reference (BIPM, IFCC, IUPAC, & ISO, 2012, p. 2)." This definition contains two distinguishable meanings: quantity may refer to the property itself or to measurements of the property (or counts of discrete quantities). Here we use the term quantity in the first sense. We refer to measurements or counts of a quantity as the quantity value. Thompson (1993) captures this distinction between quantities and quantity values:
Quantities, when measured, have numerical value, but we need not measure them or know their measures to reason about them. You can think of your height, another person's height, and the amount by which one of you is taller than the other without having to know the actual values. (pp. 165-166)

There are two main kinds of relative magnitude. The first refers to an additive difference. In the above quotation, the "amount by which one of you is taller than the other" refers to an unmeasured additive difference. If, however, the quantity value of each person's height is known, one value may be subtracted from the other, yielding a difference consisting of a number together with a unit of measure. The first kind of difference is a quantity;2 the second is a quantity value.
Numbers by themselves, sometimes referred to as "pure numbers", also have magnitude. And like quantity values, a pure number may be subtracted from another pure number to yield an additive difference.
A ratio refers to an expression of relative magnitude in a multiplicative sense. When the two items of interest are unmeasured quantities of the same kind, the ratio may be expressed numerically. For instance, if the height of an infant named George is one half the height of his father, Daniel, and G and D stand for their respective, unmeasured heights, then G:D = 1:2, G/D = 1/2, 2G = D, G/D = 0.5, or even G ÷ D = 1/2. Although neither G nor D has been measured using an external unit, such as inches or centimeters, the ratio of their heights has been assigned a number.
The value 0.5 in this example might have formerly been described as "dimensionless", but present-day convention in scientific communities would treat the present case of 0.5 as a value on the "quantity dimension 1" (BIPM, op. cit., p. 8). In other words, because 0.5 represents a ratio of two heights, it is regarded as a quantity value, despite having no units and not being directly related to a quantity dimension such as length, mass, or area. Values of Mach numbers, friction values, or solid angles are, likewise, of dimension 1. Number of entities (e.g., number of children, number of correct answers) is also understood to have a quantity dimension of 1.
A proportion may refer to an equality of ratios of whole numbers. For instance, the equality 2:4 = 3:6 (or either of its variants, such as 2/4 = 3/6) expresses a proportion.

An intensive quantity entails a ratio of different kinds of quantities (Schwartz, 1996). Speed is a ratio of distance over time. Density of a substance is its mass divided by its volume. Population density refers to the number of inhabitants per unit of area. Although issues regarding intensive quantities are important in mathematics and science education, there is no clear way to directly compare magnitudes of different dimensions. Consider, for example: "Which is greater: three miles, three minutes, or three kilograms?" Even so, intensive quantities often involve proportions. For instance, 3 miles : 6 minutes = 10 miles : 20 minutes is a proportion.
For Euclid, a ratio invariably concerns the relative magnitude of quantities of the same kind.
In functions of direct proportion of the form f(x) = (m/n)x, the ratio of the function's value to its input also represents a proportion. Thus, for all x in a given domain, f(x):x = m:n (and f(x)/x = m/n and f(x) ÷ x = m ÷ n). In these cases, the proportion is expressed by an equation, not simply an equality, because a variable is involved. Here m/n is referred to as a constant of proportionality.
Although we begin our analysis with the comparison of ratios of fixed values (constants or constant magnitudes), when variation, equivalence, and dilations enter the scene, one shifts to equations. Such shifts signal that one has entered the domain of algebra and moved from fractions to rational numbers.
We are interested in exploring how operations involving ratios of unmeasured quantities and ratios of whole numbers can help imbue the latter with meaning. This involves understanding how quotients of whole numbers may be ordered as rational numbers on the real line and also understanding how the magnitudes of rational operators and operands determine the magnitude of the outcome of operations.
A Euclidean model of ratio and proportion
In Book V of the Elements, Euclid represents ratios according to the relative size of two magnitudes of the same kind represented by line segments. Two ratios of magnitudes are to be compared by operating on the magnitudes and taking note of the outcomes. In Definition 5, Euclid draws attention to the case in which a proportion holds, that is, two ratios of nonnumerical magnitudes are equal: Magnitudes are said to be in the same ratio, the first to the second and the third to the fourth, when, if any equimultiples whatever be taken of the first and third, and any equimultiples whatever of the second and fourth, the former equimultiples alike exceed, are alike equal to, or alike fall short of, the latter equimultiples respectively taken in corresponding order (Heath, 1956).
This statement can be clarified using A, B, C and D to refer to non-numerical magnitudes represented as line segments and m and n to refer to whole number multipliers. The operation of multiplying line segment A by n is to be understood as the concatenation of n instances of A, yielding the line segment, nA. The statement can be reformulated as: If line segments A and C are multiplied by n and B and D are multiplied by m, where n and m are whole numbers, then one can compare nA with mB, and nC with mD. If the ratios A:B and C:D are equal, then the outcome for the first ratio will invariably match the outcome for the second ratio. In other words, if A:B = C:D, then one of three outcomes will always occur: (a) nA < mB and nC < mD; (b) nA > mB and nC > mD; or (c) nA = mB and nC = mD.
For ratios of commensurable magnitudes, outcomes (a) and (b) neither clearly confirm nor disconfirm that A:B = C:D. However, outcome (c) provides conclusive evidence that the ratios of magnitudes are equal.
A common products outcome of the form nA = mB exists for every ratio of commensurable magnitudes. When a common product based on the same multipliers emerges for both ratios of magnitudes, it can be inferred that the ratios are equal. This can be summarized as follows: Given four line segments, A, B, C and D (and assuming that line segment A is commensurable with B, and C is commensurable with D), then A:B and C:D are equal if and only if there exist whole numbers n and m such that nA = mB and nC = mD.
As mentioned above, we will restrict Euclid's Definition 5 in Book V of the Elements to ratios of commensurable magnitudes, given that, if two ratios of incommensurable magnitudes (think, √2: 1) are equal, common products will not be attained. So, there can be no definitive evidence, based on Definition 5, that the two ratios are equal 3 .
A scenario based on the pursuit of common products

A ratio of magnitudes can be compared to a ratio of whole numbers. If the relative size of the magnitudes matches the relative size of the numbers, the ratios are equal. If the magnitudes are represented by line segments A and B, and the whole numbers by m and n, then, according to Euclid's definition of proportion, nA = mB conveys the idea that A:B = m:n. Table 1 shows the outcomes of three attempts to find a common product for the two pairs of line segments (A and B; C and D). The first attempt shows that 1A < 2B and 1C < 2D. Thus, A:B may equal C:D, but the outcome does not settle the matter. The second attempt shows that 5A > 6B and 5C > 6D. Once again, the two ratios may be equal, but the outcome does not imply it. In the third attempt, 3A = 5B and 3C = 5D. A common product emerged for A and B and also for C and D for the multipliers m = 5 and n = 3. This allows us to conclude that A:B = C:D. The outcome further establishes that each ratio of magnitudes is equal to a ratio of whole numbers; that is, A:B = 5:3 and C:D = 5:3. It is important to appreciate what this means.
The statement, A:B = 5:3, should not be taken to imply that A equals 5 and B equals 3. A and B are not numbers. They are the names of particular line segments. More importantly, they are to be understood as the non-numeric magnitudes of the segments that, when subjected to arithmetic operations, can be construed as two units of measure. This implies that, if we designate the lengths of line segments A and B as 1A and 1B, respectively, a line segment resulting from the concatenation of n instances of A has the value nA and a line segment resulting from the concatenation of m instances of B has the value mB.
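The common products test is easy to mechanize. The sketch below is a minimal illustration, not part of the paper's software: it assumes the commensurable magnitudes are modeled as exact rational lengths (via Python's fractions module) and simply scans lattice points (n, m) for a common product nA = mB, mirroring the attempts in Table 1:

```python
from fractions import Fraction
from itertools import product

# Commensurable magnitudes modeled as exact rational lengths.
# These particular values are illustrative, with A:B = 5:3 as in Table 1.
A = Fraction(5, 4)
B = Fraction(3, 4)

def find_common_product(A, B, limit=15):
    """Scan lattice points (n, m) for whole numbers with n*A == m*B.

    A hit means the ratio of magnitudes A:B equals the ratio m:n."""
    for n, m in product(range(1, limit + 1), repeat=2):
        if n * A == m * B:
            return n, m
    return None  # no common product within the search window

n, m = find_common_product(A, B)
print(f"{n}A = {m}B, so A:B = {m}:{n}")  # -> 3A = 5B, so A:B = 5:3
```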
The distinction between non-numerical magnitudes (Freudenthal, 1986), on one hand, and numbers and quantity values, on the other, receives fairly little attention in present-day K-12 mathematics education. Davydov's (1975) approach to the addition and subtraction of quantities represents a significant exception (Schmittau, 2005; Bass, 2018; Coles, 2021). In the approach, even before students are introduced to numbers, they learn to appreciate that, if A and B are strips of different lengths, such that A is longer than B (i.e., A > B), then there must exist a strip, C, shorter than A such that A = B + C. They learn to represent strip B as the part of A that remains when C is removed from A and to express this as B = A − C. They also learn that C can be represented as the difference between A and B and to express this as C = A − B.

An important premise of Davydov's approach is the idea that addition and subtraction are inherently intertwined. A statement that the result of adding two amounts5 is equal to a third amount (A + B = C) implies two additional ideas related to subtraction (C − A = B and C − B = A). In this perspective, additive relations are invariably about both addition and subtraction. This can be appreciated by thinking about additive structures as entailing a ternary relation.
A similar point can be made about multiplicative relations. To say that the result of multiplying an amount by a whole number is equal to a third amount (A × n = C) implies two complementary ideas regarding division (C ÷ n = A and C ÷ A = n). So multiplicative relations of this sort are invariably about both multiplication and division.6 Measurement approaches to arithmetic, which maintain a sharp distinction between non-numeric magnitudes and numbers and make use of line segments or fraction strips to represent magnitudes, follow a tradition dating back to Euclid's Elements (Madden, 2018). They offer important ways of coordinating geometric and arithmetic thinking and representations. Euclid's model presumes that one can work with unmeasured ratios of magnitudes and draw inferences about the relative size of the magnitudes by operating on them.
This premise is likely to evoke objections. One might object to the premise on the grounds that diagrams are inherently imprecise and subject to the limits of human perception. Pairs of line segments may have different yet perceptually indistinguishable lengths. Who is to say, for instance, whether line segments 3A and 5B in Table 1 are exactly equal in length? Could 3A not be greater than 5B by some small amount?
There is a sensible response to this objection. In a given setting, the problem poser (whether a teacher or a software embodiment of the model) can be informed of the ratio of whole numbers that a pair of drawn line segments is intended to represent. The problem poser can then provide precise feedback as to whether the first product segment is less than, greater than, or equal to the second product segment. The problem poser can furthermore provide an equality or inequality that removes any doubt about the outcome. Feedback of this sort appears in each of the diagrams in Table 1.
Diagrams associated with the common products scenario are to be understood as mere illustrations of assertions about relationships among quantities. The assertions are to be expressed in ways that do not require reliance on the diagrams.
Line segment scenarios for determining that two ratios are equal

Restricting Euclid's model to commensurable quantities enabled us to conceptualize the definition of proportion in Book V as referring to operations carried out in pursuit of a common product which, when found, reveals a ratio of whole numbers equal to the ratio of commensurable quantities.
We saw, in the Euclidean common products scenario, that if A:B = m:n, then, by Definition 5 of Book V, the product of A times n equals the product of B times m; that is, n × A = m × B. So, a conjecture that a ratio of magnitudes equals a specified ratio of whole numbers can be tested through specified operations. If a critical outcome is obtained, the conjecture is confirmed. Otherwise, one of two possible alternative outcomes will result, implying that the ratio of magnitudes is either less than or greater than the numerical ratio. A discrepancy between the conjecture and an outcome may be exploited in preparing subsequent conjectures in pursuit of a solution.
Arithmetic operations on magnitudes are straightforward. Addition of two magnitudes is represented by joining or concatenating them. Subtraction of a smaller magnitude from a larger one can be represented as the removal of the former from the latter or a diminishing of the greater by the lesser magnitude.7 We saw that multiplication of a magnitude A by a whole number n is represented as repeated addition, namely, the concatenation of n instances of line segment A. Partitive division of A by n is represented by the output of one part of A after A has been partitioned into n equal parts, each of which is designated as (1/n)A or A/n. Euclidean division of magnitude A by B returns as quotient a whole number n while leaving a remainder magnitude less than B (possibly the null segment). Although partitive division and Euclidean division of magnitudes are discussed in the Elements, neither appears in the treatment of ratio and proportion in Book V. We shall, however, bring them into the extended model.
Once partitive division of magnitudes is brought into the model, additional scenarios may be devised for evaluating whether a particular ratio of whole numbers is equal to a ratio of commensurable magnitudes. Euclidean division applied to unmeasured quantities can also be integrated via the Euclidean algorithm, thereby allowing one to explore how simple continued fractions bear on issues of ratio and proportion. Such topics may be suitable for students at the secondary school level.
Let us summarize the scenarios one might consider in determining the equality of two ratios. These scenarios might be introduced at different grade levels and with varying degrees of support by teachers, adjusted according to the mathematical backgrounds and needs of students. In any case, the following presentation is intended for mathematics educators, not students.
Two ratios, one of commensurable magnitudes, A and B, the other of whole numbers, m and n, are equal, that is, the proportion A:B = m:n is true, if any of the following critical outcomes is reached. If any condition holds, all of the other conditions necessarily hold:

• Common products (Table 1): Already described.

• Common quotients8 (Table 2): division of A by m and of B by n yields equal magnitudes, that is, A ÷ m = B ÷ n. This may be expressed as A/m = B/n.

• Transformation of one magnitude into the other by multiplication and division by whole numbers (Table 3): division of A by m followed by multiplication by n yields B, that is, (A ÷ m) × n = B.

• Multiplication (or division) by a fraction (Table 4): multiplying A by the fractional operator n/m yields B, that is, (n/m) × A = B.

• Addition or subtraction of fractional parts (Table 5): adding to A (or subtracting from A) the fractional increment ((n − m)/m)A yields B, that is, A + ((n − m)/m)A = B. The increment corresponds to an increase when A < B and m < n; it corresponds to a decrease when A > B and m > n. This condition offers an opening for the introduction of negative integers and directed line segments.9

• Same sequence of Euclidean quotients (Table 6): ⟨A ÷ B⟩ = ⟨m ÷ n⟩, where ⟨A ÷ B⟩ yields the sequence of partial quotients resulting from the application of Euclid's algorithm to (A, B), and ⟨m ÷ n⟩ returns the sequence of partial quotients resulting from the application of Euclid's algorithm to the whole numbers m and n.
Common products
This scenario is outlined in Table 1.

Common quotients

Table 2 shows two attempts to find a ratio of whole numbers equal to A:B through a scenario involving division and multiplication by whole numbers. First, the ratio 2:3 is subjected to a test. When A is divided by 2 and the result, A/2, is multiplied by 3, the end result reveals that 3(A/2) < B, that is, A/2 < B/3. Then, on a subsequent attempt, the student learns that 7(A/4) = B, that is, A/4 = B/7, confirming that A:B = 4:7.
Multiplication and division by whole numbers
A proportion involving a ratio of quantities and a ratio of whole numbers may also be assessed through multiplication and division by whole numbers. Table 3 shows the results of two attempts to find a ratio of whole numbers equivalent to A:B. In this case, division of the antecedent quantity by m is followed by multiplication by n. While 3(A/2) is less than B, 7(A/4) equals B. This demonstrates that A:B = 4:7. The conjecture is confirmed.

Table 4 shows the results of operations in the multiplication (or division) by a fraction scenario used to assess the same conjectures as before. The final results show that (3/2)A < B and (7/4)A = B. This shows, as Tables 1 to 3 did, that A:B = 4:7.

Adding or subtracting parts

Table 5 applies the fractional increment scenario to the same conjectures. Finding that A + (1/2)A < B disconfirms the conjecture that A:B = 2:3. The finding that A + (3/4)A = B confirms that A:B = 4:7.
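Because the scenario conditions are algebraically equivalent, a single conjecture m:n can be checked under all of them at once. The following sketch is illustrative only (it is not the paper's software); the magnitudes are modeled as exact rationals with A:B = 4:7, as in Tables 2 to 5:

```python
from fractions import Fraction

# Illustrative magnitudes with A:B = 4:7, matching Tables 2 to 5.
A = Fraction(4)
B = Fraction(7)

def test_conjecture(A, B, m, n):
    """Check the conjecture A:B = m:n under several scenario conditions.

    The conditions are algebraically equivalent, so they always agree."""
    return {
        "common products":       n * A == m * B,
        "common quotients":      A / m == B / n,
        "divide then multiply":  (A / m) * n == B,
        "fraction operator":     Fraction(n, m) * A == B,
        "fractional increment":  A + Fraction(n - m, m) * A == B,
    }

print(test_conjecture(A, B, 2, 3))  # every check False: A:B is not 2:3
print(test_conjecture(A, B, 4, 7))  # every check True:  A:B = 4:7
```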
Partial quotients from Euclid's algorithm
Euclid's algorithm is normally employed as a means of producing the greatest common divisor (in the case of numbers) or the greatest common measure (in the case of quantities). We will use the algorithm with a different objective in mind, namely, to determine whether the ordered list of partial quotients emerging from the algorithm is the same for two ratios.
Whereas each of the scenarios considered thus far aims at producing two equal magnitudes, the Euclidean algorithm scenario rests on a comparison of an ordered list of quotients that emerges when the algorithm is applied to both a ratio of quantities (i.e., non-numerical magnitudes) and another ratio, either of numbers or of quantities.

Given magnitudes A, B, and m = 4, n = 7, Table 6 shows the results obtained when Euclid's algorithm is applied to the ratio 2:3, to a particular ratio of magnitudes A:B, and to the ratio of whole numbers 4:7. Although it is not necessary to use line segments to perform Euclid's algorithm on whole numbers,10 the reader may find it useful for noting the parallels to cases of non-numerical magnitudes. It is found that 2:3 is associated with the quotients (0; 1, 2).11 That is because 3 goes 0 times into 2, leaving a remainder of 2. That remainder, 2, goes one time into 3, leaving a remainder of 1 that goes twice into 2, leaving no remainder.
10 There are options for representing the selected ratio using letters to represent line segments (as is the standard for target ratios).

11 The leftmost digit, followed conventionally by a semicolon, indicates how many times the consequent or second magnitude fits wholly into the antecedent or first magnitude.

The Euclidean algorithm may also be represented through operations involving tilings of squares (Bass, 2011). The structure and the list of quotients for A:B and 4:7 (Table 7) are the same as before (Table 6). The diagrams and quotients for 2:3 do not match those for A:B. However, those for 4:7 match those for A:B, confirming the conjecture, A:B = 4:7. (Euclidean algorithm applied to (4, 7): 7 fits zero times into 4, leaving 4 as a remainder. 4 fits one time into 7, leaving 3 as remainder. 3 fits one time into 4, leaving 1 as remainder. 1 fits three times into 3, leaving no remainder.)
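The comparison of partial quotients also lends itself to a short computational check. The sketch below is an illustration, not the paper's software; because only floor division and remainders are used, the same routine handles whole numbers and exact rational stand-ins for magnitudes:

```python
from fractions import Fraction

def partial_quotients(a, b):
    """Sequence of partial quotients from Euclid's algorithm on (a, b).

    Only floor division and remainders are used, so a and b may be
    whole numbers or exact rational stand-ins for magnitudes."""
    quotients = []
    while b != 0:
        q = a // b            # how many times b fits wholly into a
        quotients.append(int(q))
        a, b = b, a - q * b   # the remainder becomes the new divisor
    return quotients

# A ratio of magnitudes with A:B = 4:7, modeled as rational lengths.
A, B = Fraction(4, 5), Fraction(7, 5)
print(partial_quotients(A, B))  # [0, 1, 1, 3]
print(partial_quotients(2, 3))  # [0, 1, 2]    -- does not match A:B
print(partial_quotients(4, 7))  # [0, 1, 1, 3] -- matches, so A:B = 4:7
```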
Integrating, through Software, the Euclidean Model with Representations in the Cartesian Plane and on the Real Line

Although my earlier effort to extend Euclid's model (Carraher, 1993) offered varied ways of representing fractions, it did not draw attention to fractions as members of equivalence classes of ordered pairs of whole numbers. This shortcoming is addressed in the current proposal by making explicit reference to the slopes of graphs of linear functions and to quotients of integers expressed as fractions or decimals, via points on the real line.
Ratio and proportion are represented differently in the Euclidean plane, in the Cartesian plane (a coordinate plane), and on the real line (Table 8). In this section we suggest how the three systems of representation might be used together to explore issues of ratio and proportion. We illustrate the ideas by means of the software environment Exploring Relative Magnitude (Carraher, 2021), which allows representations in the three systems to be dynamically linked.
Table 8 contrasts the Euclidean plane, the real line, and the Cartesian plane along four dimensions:

• Visual embodiment of a ratio. Euclidean plane: a pair of parallel line segments. The real line: a point on a number line (as well as the displayed distance from the origin to the point). Cartesian plane: a point in the coordinate plane along with the abscissa and ordinate, or the slope of a graph of a linear function.

• Resources for ordering ratios of magnitudes based on perception. Euclidean plane: crude judgments of relative magnitude. The real line: simple observation of the order of points on the number line. Cartesian plane: judgments of the slope of lines emanating from the origin through points.

• Two ratios of quantities are precisely equal if... Euclidean plane: critical outcomes, associated with equations, emerge for each ratio when the same operators are used. The real line: the two points occupy the same position (and have the same value) on the number line. Cartesian plane: the two points (or two graphs) have the same slope, that is, the same quotient or "rise over run."

• Is there a visual depiction of arithmetic operations? Euclidean plane: yes; addition, subtraction, multiplication, and division on antecedent and consequent magnitudes are expressed through line segment diagrams. The real line: no; no standard conventions exist, aside from displacements, for clearly expressing arithmetic operations. Cartesian plane: no, but multiplicative operations on ratios may be expressed as changes in the slope of a graph of a linear function.
As noted, the principal geometric object in the extended Euclidean model is a pair of line segments represented in a plane devoid of coordinates. On the real line, a ratio is expressed as a single point or as the distance from the origin to that point. In the Cartesian plane, the ratio may be represented as the relative size of two perpendicular line segments associated with the coordinates of a single point, corresponding, respectively, to the "run" and the "rise." The Cartesian plane is also used to represent the graph of the function corresponding to the line y = (m/n)x, the slope of which corresponds to the constant of proportionality, m/n.
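To make the correspondence concrete, here is one ratio, 4:7, written out in each system (a sketch using the paper's notation, not output from the software):

```latex
\begin{align*}
7A &= 4B && \text{Euclidean plane: segments } A, B \text{ with } A:B = 4:7\\
A \div B &= \tfrac{4}{7} \approx 0.5714 && \text{real line: a single point (quotient)}\\
y &= \tfrac{4}{7}\,x && \text{Cartesian plane: the line through } (7, 4),\ \text{slope} = \tfrac{4}{7}
\end{align*}
```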
One compares and orders ratios differently in the three systems.
While it has been found that children as young as 5 or 6 years of age readily distinguish between ratios of magnitudes less than ½ and ratios greater than ½ (Spinillo and Bryant, 1991), the ordering of ratios of magnitudes based solely on perceptual judgment is limited at any age. It is unclear to what extent children appreciate that a ratio of magnitudes is invariant under uniform dilations, although young children do seem to realize that, as a pair of objects approaches the observer, their relative size does not change. Even so, there is generally no reliable way to instantly ascertain, by perceptual judgment alone, whether two ratios of magnitudes are equal.
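In an idealized setting the perceptual limitation disappears: ratios of whole numbers can be compared exactly through integer cross products. A minimal sketch (the function name is illustrative):

```python
def compare_ratios(m, n, p, q):
    """Compare m:n with p:q exactly via cross products.

    For positive n and q:  m/n < p/q  <=>  m*q < p*n,
    so no measurement or floating-point error can intrude.
    """
    left, right = m * q, p * n
    return "<" if left < right else (">" if left > right else "=")

print(compare_ratios(4, 7, 2, 3))   # '<'  : 4/7 is less than 2/3
print(compare_ratios(4, 7, 8, 14))  # '='  : the two ratios are equivalent
```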
On the real line, where a ratio is expressed as a single point, positive ratios expressed as quotients diminish as they approach the origin. Proper fractions occupy the span between zero and one; improper fractions reside on positions to the right of one. Equal ratios occupy the same position on the real line.
In the Cartesian plane one may order the ratios associated with points in the first quadrant by imagining or drawing lines from the origin through each point of interest and comparing their inclinations. The slope of a line is the quotient of the ordinate to the abscissa ("rise over run") of any point on the line (excluding the origin). Provided the axes use the same scale, lines with greater slopes have greater inclinations.

The pedagogical context for the software is fairly straightforward. A problem poser (the teacher or the software itself) covertly selects a ratio of whole numbers to underlie the problem being posed. The problem solver, typically a student (or students), is to discover that numerical ratio. The solver formulates conjectures, receives feedback from the poser, and presumably uses the feedback to home in on a solution, which is eventually found by the solver and confirmed by the problem poser.
This context bears a certain similarity to activities in the guess-my-rule game (Davis, 1967; Carraher & Earnest, 2003; Schliemann, Carraher, & Brizuela, 2007). In guess-my-rule, the problem poser secretly chooses a function, typically a function over the integers that can be represented by a simple algebraic rule such as n → 2n + 3. The problem solver must identify the function over a series of trials. In each trial, the problem solver provides an integer, n, and the poser returns the value of f(n). The goal for the problem solver is to come up with a description of the function that matches the one predetermined by the problem poser.
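The shared trial-and-feedback structure can be sketched in a few lines. The toy solver below illustrates the logic only; it is not the software's implementation. It prunes the candidate lattice after each "less", "greater" or "equal" verdict:

```python
from fractions import Fraction
from itertools import product

def solve(feedback, limit=15):
    """Recover a hidden ratio from 'less'/'greater'/'equal' feedback alone,
    pruning the candidate lattice after every trial."""
    candidates = set(product(range(1, limit + 1), repeat=2))  # (m, n) pairs
    trials = []
    while candidates:
        m, n = min(candidates)            # any selection policy would do
        trials.append((m, n))
        verdict = feedback(m, n)
        if verdict == "equal":
            return (m, n), trials
        guess = Fraction(m, n)
        if verdict == "less":             # the guess fell short of the target
            candidates = {(a, b) for a, b in candidates if Fraction(a, b) > guess}
        else:                             # the guess overshot the target
            candidates = {(a, b) for a, b in candidates if Fraction(a, b) < guess}
    return None, trials

target = Fraction(4, 7)                   # the poser's secret ratio
def feedback(m, n):
    g = Fraction(m, n)
    return "equal" if g == target else ("less" if g < target else "greater")

print(solve(feedback))  # finds a pair equivalent to 4:7 in a handful of trials
```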
In Exploring Relative Magnitude, the problem poser preselects a ratio of two positive integers whose terms are limited to a certain range (say, 1 to 15). The poser uses the integers to construct a target ratio of magnitudes in the form of two line segments (T and F in Figure 2). The problem solver makes a conjecture about the value of the ratio of those target line segments and expresses it by selecting a pair of integers, m and n, that together comprise the selected numerical ratio m:n. When this is done, two new line segments appear, representing the solver's selected ratio of magnitudes (D:C in Fig. 2).
Figure 2: A target ratio of magnitudes and a selected ratio of magnitudes
The label above the selected segments, D:C = 8:12 = 2:3, conveys that 8:12 is equivalent to 2:3. The label at the top of the target ratio pane poses the question of whether the target ratio equals the selected ratio. This is the issue the problem solver faces. Like the scenario envisioned in Euclid's Definition 5 of Book V of the Elements, the solver is comparing two ratios of magnitudes (T:F and D:C). In addition, the solver is indirectly comparing a ratio of magnitudes to a ratio of whole numbers (T:F and 2:3).
Embedding the extended Euclidean model in a software environment enables students and teachers to raise and assess conjectures in the context of diagrams, equations, and inequalities. A software setting may also help establish meaningful connections between the Euclidean model and representations in the Cartesian plane and on number lines. Figure 3 displays the full screen image from which Fig. 2 was taken. At the top of the screen is a dashboard with user-interface controls. In the middle part of the screen one finds the "Target Ratio" and "Selected Ratio" areas, as well as the integer lattice (a square lattice, grid lattice, or simply "grid," in the present case of a plane), titled the "Ratio Selector," on the right. A number line pane is at the bottom of the screen. Several of the features in Fig. 3 are optionally shown. For instance, the line label y = (8/12)x = (2/3)x is displayed because the "show line label" and "simplify expressions" options were set. It would have appeared simply as y = (8/12)x had the simplification option not been checked. Similarly, the label above the selected ratio, D:C = 8:12 = 2:3, would have appeared simply as D:C = 8:12 were the simplification not set. The labels "run = 12" and "rise = 8" appear because the "rise and run" option was set.
Panes may be individually shown or hidden. In this way it is possible to pose problems and discuss issues from a subset of panes. For instance, a teacher or student might reduce the number of open panes to examine the relationship between the selected point in the grid and the relative size of the line segments drawn in the selected ratio pane (Fig. 4).

The number line in Figure 3 automatically displays ticks consistent with the consequent (the second term) of the selected numerical ratio. Given that n = 12, the ticks of the number line are located at multiples of 1/12. When the chosen ratio is reducible, as 8:12 is, all equivalent ratios in the grid are used to create "multiple rulers" under the number line. In the present case, rulers for thirds, sixths, ninths, twelfths and fifteenths are displayed (Figure 5). This issue merits thoughtful discussion by teachers and students, for it bears directly on the idea that there is an infinite set of fractions equivalent to 2/3. The idea that 2/3 is a rational number rests precisely on the notion that it may represent an arbitrary member of the equivalence class {(2, 3), (4, 6), (6, 9), …}.

The ratio selector enables the problem solver to choose a ratio to match the target ratio. (Although the relative size of a ratio is fixed, the displayed width of a ratio may vary depending on the setting of a slider labelled "dilate.") Clicking on the point (n, m) signals that the solver has selected the ratio m:n. The software constructs the line segments D and C in the ratio m:n.
For the case at hand, the point (12,8) was selected, corresponding to the ratio 8:12.
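The "multiple rulers" and the family of equivalent lattice points can both be generated from the reduced form of the selected ratio. A small sketch (the function name and the limit of 15 are assumptions for illustration, not the software's API):

```python
from math import gcd

def equivalent_points(m, n, limit=15):
    """Lattice points (run, rise) whose ratio rise:run is equivalent to m:n,
    with both terms at most `limit`; the runs give the ruler denominators."""
    g = gcd(m, n)
    m0, n0 = m // g, n // g          # reduced ratio; for 8:12 this is 2:3
    points, k = [], 1
    while k * m0 <= limit and k * n0 <= limit:
        points.append((k * n0, k * m0))
        k += 1
    return points

print(equivalent_points(8, 12))
# [(3, 2), (6, 4), (9, 6), (12, 8), (15, 10)]
# -> rulers in thirds, sixths, ninths, twelfths and fifteenths
```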
The grid in Figure 3 displays a blue line labelled y = (8/12)x = (2/3)x; every lattice point on that line corresponds to a fraction equivalent to 2/3. This information is also conveyed by the fact that each of these fractions falls at the same position on the number line. The target ratio and selected ratio are displayed as the blue and red points plotted on the number line. The relative position of the points shows that the target ratio, T:F, is less than the selected ratio, D:C. Because the values under the number line are chosen in accordance with the selected ratio, the value of the target ratio normally cannot be read off directly from the ticks on the number line (unless the target ratio happens to coincide with a tick). Figure 5 shows that the blue point lies somewhere between 5/9 and 8/12. This may prove useful.

Imagine that the student proceeds to test the conjecture that T:F is equal to 5:9. In this case (Fig. 9) the points on the number line are closer, and the vertical lines in the target ratio pane are closer. The decimal representation of the selected quotient is now 0.555555… instead of 0.666666…; 5/9 is less than 0.571428571428… but much closer to it than 8/12 is.
There is other useful information about the target ratio in the number line pane. Above the number line, the selected ratio is displayed as a decimal quotient, 0.6̄ (Fig. 6). The bar above the 6 indicates that 6 is a repeating, non-terminating digit.
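The decimal display can be reproduced by long division that watches for a recurring remainder. A minimal sketch (the parentheses stand in for the bar the software draws over the repetend):

```python
def decimal_with_repetend(num, den):
    """Long division of num/den; the repetend starts where a remainder recurs."""
    integer, rem = divmod(num, den)
    digits, seen = [], {}
    while rem and rem not in seen:
        seen[rem] = len(digits)          # remember where this remainder arose
        d, rem = divmod(rem * 10, den)
        digits.append(str(d))
    if rem == 0:                          # terminating decimal
        return f"{integer}." + "".join(digits)
    start = seen[rem]                     # cycle begins here
    return f"{integer}." + "".join(digits[:start]) + "(" + "".join(digits[start:]) + ")"

print(decimal_with_repetend(8, 12))  # 0.(6)        i.e. 0.666...
print(decimal_with_repetend(4, 7))   # 0.(571428)   i.e. 0.571428571428...
```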
Had the "show target decimal" option been set, the following information about the target ratio would have also appeared (Fig. 7): Students will need some time to become familiar with how the software works. Some students may be puzzled by the fact that a point (n,m) corresponds to the ratio m:n rather than n:m. This is understandable, but the ordering of parameters is consistent with conventions for x and y axes: the point (x,y) is associated with the slope 6 . , the ratio y:x, and the quotient ÷ . Even after a student has understood this, she must learn to anticipate how the size of a selected ratio depends on the location of a point in the lattice grid.
In Fig. 8 the student has activated the common product scenario and has set m equal to 8 and n equal to 12. The figure shows that, whereas 12P = 8G (as expected), 12T < 8F. Although T:F ≠ 8:12, 8:12 appears to be close to T:F. This is reflected in the fact that the two dashed vertical lines associated with 12T and 8F are close (separated by a little more than one F). (The vertical lines coincide in the case of the selected ratio.) Each antecedent magnitude (T and P) has been multiplied by 12; each consequent magnitude (F and G) has been multiplied by 8. A common product emerged for the selected ratio: 12P = 8G. This is to be expected, given that the software constructed P and G to satisfy the constraint that P:G = 8:12. No common product emerged for the target ratio: 12T < 8F. The outcomes are consistent with the relative positions of the quotients P ÷ G and T ÷ F on the number line shown in Figure 3. (A computational sketch of this test appears after this passage.)

Figure 9 displays the information available after the student has posited that 9T equals 5F by virtue of selecting the ratio 5:9. Once again the selected ratio is close, but the target ratio, T:F, is slightly greater, and this is reflected in the positions of the two ratios as quotients on the number line. It is also confirmed by the fact that the decimal representation of the quotient of P divided by G is slightly less than that of T divided by F.

Figure 9: The student tests the conjecture that 9T = 5F.

Fig. 10 shows a solution. The student has found that 7T = 4F. This implies that T:F = 4:7. The points corresponding to the selected and target ratios are at the same location. Furthermore, both the number line and the grid indicate that the ratio 8:14 is equivalent to 4:7.

Options enable the teacher to control the information available to students at any moment. Young students learning about decimal numbers may find it challenging to infer a target ratio even from a simple decimal such as 0.125. (A teacher may wish to leave this option off for students who can readily produce a fraction from a decimal number.) Each of the screen panes (Fig. 3) can be minimized or maximized by clicking on the title bar above the respective pane.
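In the idealized setting, the common product test reduces to checking whether nA = mB holds exactly. A sketch using exact rational arithmetic, with magnitudes T = 4 and F = 7 chosen to realize the target ratio 4:7 (the software's internals are not reproduced here):

```python
from fractions import Fraction

T, F = Fraction(4), Fraction(7)    # exact magnitudes realizing T:F = 4:7

def proportional(ant, con, m, n):
    """Idealized Definition 5 test: ant:con = m:n exactly when n*ant == m*con."""
    return n * ant == m * con

print(proportional(T, F, 8, 12), 12 * T < 8 * F)  # False True : 12T < 8F
print(proportional(T, F, 5, 9),  9 * T > 5 * F)   # False True : 9T > 5F, so T:F > 5:9
print(proportional(T, F, 4, 7))                   # True : 7T = 4F
```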
So far, we have been looking at various sorts of information that can be provided in the opening scenario, before any operations have been carried out on the line segments. Fig. 11 displays the grid with the "points visited" option set. The black points correspond to those outcomes in which the selected ratio was found to be greater than the target ratio. The yellow points, including (9,5), correspond to trials in which the selected ratio was smaller than the target ratio. Many points are plotted to highlight how the location of the points is related to the magnitude of the ratios; we expect that students guided by a teacher would normally not need to go through so many trials to arrive at a solution. The idea of registering outcomes of tests of proportion on the unit grid is suggested by Madden (2018) in his analysis of how Euclid's approach to ratio and proportion, in Book V of the Elements, may contribute to present-day instruction about ratio and proportion.
Simplified Expressions and Partitioned Line Segments
When the simplified expressions option is set (Fig. 12), operators are taken from the reduced ratio (2:3). Furthermore, the labels above the line segments provide information regarding both the selected ratio and the reduced selected ratio. This sort of representation is designed to encourage students to view ratios and fractions as members of an equivalence class of ordered pairs of integers. The same reasoning underlies the decision to use, by default, "multiple rulers" (Fig. 5) under the number line whenever the selected ratio is reducible.

In all scenarios, when the partitioned line segments option is set in addition to the simplified expressions option, the line segments are partitioned throughout according to the values of the simplified selected ratio (Fig. 13). Every other part is colored yellow to facilitate the counting of parts. In the present case, P and T are each broken into 2 equal parts; G and F are each broken into 3 equal parts. This also makes it easier to notice that, whenever an "unsuccessful" ratio of whole numbers is selected, the parts of the target antecedent line segment differ in size from the parts of the target consequent. In Fig. 13, the parts of T are noticeably smaller than the parts of F. This size mismatch never occurs for the selected line segments, because the selected ratio of magnitudes always matches the numerical ratio.
The partitioned line segments and simplified expressions options also apply to the line segments along the grid (Fig. 13). This is consistent with ideas and an implementation discussed by Beckmann and Kulow (2018). The line segments P and G along the grid axes were independently dilated (in this case, multiplied by a scale factor of 1.1) to 8.8 and 13.2, respectively (their initial values were 8 and 12). The fact that P and G are broken into 2 and 3 parts, respectively, even under dilation, provides evidence that 8.8/13.2 = 2/3. For the present problem, whenever P and G are dilated along the axes, it will be possible to infer that their quotient remains 2/3.

The observant reader may have noticed in Fig. 13 that the selected ratio of magnitudes in the middle pane has been dilated (in this case, shrunken) by means of a slider control at the bottom of the central pane. Ratios are invariant under uniform dilation. Whether students appreciate this is, of course, to be determined and addressed as needed.

Might whole numbers facilitate, rather than interfere with, learning about fractional operators?
There is considerable evidence showing that, although students are first introduced to fractions in elementary school, many still have considerable problems with fractions well into high school (Behr, Harel, Post, & Lesh, 1992). Part of the problem seems to lie in the vast gulf between representations of "fractions as pizzas" in middle school textbooks and fractions as numbers and points on the real line in mathematics proper (Wu, 2020). There is also "overwhelming evidence that a curriculum which has as its primary basis the counting of discrete objects (and hence which introduces numbers as discrete) is not an effective or rational organization" (Coles, 2021, p. 1). Children are often introduced to fractions as if they were so different from whole numbers as to be regarded as something entirely new. This is particularly apparent in the teaching and learning of multiplication and division of fractions.
Children's experience with whole number operations leads them to generalize that "multiplication makes bigger and division makes smaller" (Bell, Swan & Taylor, 1981), a generalization that lands them in trouble when they begin to work with fractions (including decimal fractions), where multiplication by a proper fraction "makes smaller" and division by a proper fraction "makes bigger." Some mathematics educators interpret this sort of discrepancy as evidence that children's knowledge about natural numbers serves as a "conceptual barrier" (Gelman and Williams, 1998) to learning about fractions. Others have argued that fractions should be taught as fundamentally different from whole numbers, and that children's intuitions about how numbers are ordered need to be set aside when they work with fractions.
There is an alternative view, one that regards children's intuitions about the impact of whole number operations on magnitudes as potential resources for comprehending how operations with positive fractions determine the magnitude of the results. The essential idea is that multiplication by a fraction can be understood as a composition of multiplication by the numerator and division by the denominator. The numerator acts like a whole number multiplier, whereas the denominator acts like a whole number divisor. Accordingly, the impact of a fractional multiplier depends on the relative size of the numerator to the denominator. When the numerator equals the denominator, the fraction neither increases nor diminishes the multiplicand; it is an identity operator. When the numerator is greater than the denominator, the product is greater than the multiplicand. When the numerator is smaller than the denominator, the product is less than the multiplicand.
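A worked example (numbers chosen purely for illustration) makes the composition visible:

```latex
\begin{align*}
12 \times \tfrac{2}{3} &= (12 \times 2) \div 3 = 8 < 12
  && \text{numerator} < \text{denominator: the product shrinks}\\
12 \times \tfrac{3}{3} &= (12 \times 3) \div 3 = 12
  && \text{numerator} = \text{denominator: identity operator}\\
12 \times \tfrac{5}{3} &= (12 \times 5) \div 3 = 20 > 12
  && \text{numerator} > \text{denominator: the product grows}
\end{align*}
```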
While working with Exploring Relative Magnitude, students can find support for such reasoning in various scenarios associated with the extended model of ratio and proportion.
It is important that students realize, early on, that division of a quantity, A, by a whole number, n, can be modeled as taking one nth of A; that is, A ÷ n = A/n = (1/n)A. This is displayed in the common quotients scenario. In the common quotients scenario depicted in Fig. 14, a student attempts to assess whether A:H = 3:2 by dividing A by 3 and H by 2. The outcome, showing that A ÷ 3 < H ÷ 2, makes clear that A:H ≠ 3:2. In fact, it shows that A:H < 3:2.
The unit fractional quantities, (1/3)A and (1/2)H, are both smaller than A and H, respectively. Division of a quantity by a natural number greater than 1 indeed yields a unit fractional quantity that is smaller than the dividend quantity.
It is interesting to note that, when the common quotient scenario is successful (see the selected ratio pane in Fig. 14), the part taken from the antecedent line segment is equal to the part taken from the consequent line segment: P ÷ 3 = G ÷ 2.

This stands in contrast to approaches that introduce fractions through partitioning. As Fig. 15 shows, partitioning a non-numeric magnitude into n parts can be thought of as an identity operation: A × (n/n) = (n/n)A = A. It is important to establish from the outset a clear distinction between "dividing by n" and "dividing up into n shares or parts." This is particularly true when fractions are introduced through sharing activities. Multiplication (or division) by a fraction can be understood as a composition of whole number multiplication and division. There need be no conflict between one's intuitions about whole numbers and fractions when fractions are understood this way. This topic is ripe for research.
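The two readings can be placed side by side (a sketch, with A an arbitrary magnitude):

```latex
\begin{align*}
A \div 3 &= \tfrac{A}{3} = \tfrac{1}{3}A
  && \text{dividing by 3: the result is one third of } A\\
A \times \tfrac{3}{3} &= \tfrac{3}{3}A = A
  && \text{partitioning into 3 parts and keeping all of them leaves } A \text{ unchanged}
\end{align*}
```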
There is no need to discuss division by a fraction as a separate case. Division of a quantity by a fraction of the form n/m can simply be defined as multiplication by the inverse, m/n. In the model, a single scenario suffices for both cases.
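Spelled out, with an illustrative numerical check:

```latex
A \div \tfrac{n}{m} \;=\; A \times \tfrac{m}{n},
\qquad\text{e.g.}\quad
12 \div \tfrac{3}{4} \;=\; 12 \times \tfrac{4}{3} \;=\; (12 \times 4) \div 3 \;=\; 16 .
```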
Similarity in the Cartesian Plane
This paper has provided an introduction to a model that aims to unite ideas and representations related to teaching and learning about ratio and proportion, fractions, rational number, and linear functions.
In the software, Exploring Relative Magnitude, right triangles or rectangles may be displayed (Fig. 19), in addition to line segments, to allow students to explore issues regarding the geometric similarity of triangles or rectangles. There is compelling evidence that students in grade 3 may benefit significantly from discussions about proportionality in such geometric contexts (Lehrer, Strom & Confrey, 2002). It is likely that off-computer activities, such as comparing triangles made of paper, may also be helpful.
Concluding Remarks
The model under discussion rests on the notion that one can find a ratio of whole numbers equal to a ratio of magnitudes if certain critical outcomes emerge from various operations on the magnitudes.
The diagrams shown in this paper should not be regarded as self-evident. They are intended to invite and provide structure for discussion among students and teachers about relationships among various mathematical objects.
In the real world, one can never be certain that two magnitudes are commensurable or incommensurable. Likewise, one can never be certain that two magnitudes are exactly equal. This uncertainty stems from measurement error and the limits of human perception.
A virtual environment can bypass such limitations by working with idealized, exact ratios and providing unambiguous information in the form of equalities and inequalities regarding the outcomes of operations on the ratios. Furthermore, it can restrict the set of candidate ratios to those expressible through a manageable range of whole numbers.
Euclid's definition of proportion, given in Book V of the Elements, is highly regarded for having provided the Greeks of antiquity with a means of representing ratios of incommensurable magnitudes without requiring the invention of irrational numbers. It also proves to be surprisingly relevant for applications to commensurable magnitudes and rational numbers, where it can offer a test of whether a particular ratio of magnitudes equals a particular ratio of whole numbers. The test will be either "merely suggestive" or "definitive" depending on whether one assumes a real-world or idealized perspective. In the idealized perspective, line segments A and B are proportional to whole numbers m and n if nA = mB. Such a perspective can be supported in a virtual world where one works with exact ratios and unambiguous notation.
When operations of partitive division and Euclidean division on magnitudes are united with multiplication, additional tests of proportion emerge. Each scenario offers a means for determining whether a given ratio of magnitudes equals a selected ratio of whole numbers. And each scenario is associated with notational variants for expressing proportions. Key variants, each a counterpart to A:B = m:n, are listed in Table 9.

The model draws on several systems of representation: (1) line segment diagrams; (2) unit grid representations; (3) number line representations; and (4) arithmetic notation. The systems of representation offer various means of expressing the equivalence of ratios (Table 10).
Table 10. Illustrations of equivalence:

• Number line: all equivalent fractions (e.g., 6/8 and 3/4) occupy the same position on a number line.
• Arithmetic notation: reducing a fraction (e.g., from 6/8 to 3/4) does not change its value; all equivalent ordered pairs of whole numbers yield the same decimal quotient (6 ÷ 8 = 3 ÷ 4 = 0.75).

Equivalence is of paramount importance. Ultimately it underlies the shift from whole numbers to positive rational numbers, where fractions are regarded as members of equivalence classes.
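Computationally, an equivalence class can be identified by its reduced representative, and the common decimal quotient falls out immediately. A short sketch:

```python
from math import gcd

def reduced(m, n):
    """Canonical representative of the equivalence class containing (m, n)."""
    g = gcd(m, n)
    return m // g, n // g

pairs = [(6, 8), (3, 4), (9, 12), (75, 100)]
print({reduced(m, n) for m, n in pairs})  # {(3, 4)} : one equivalence class
print({m / n for m, n in pairs})          # {0.75}   : one decimal quotient
```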
A model built on ratios of unmeasured magnitudes offers some advantages over models in which fractions are given as the number of highlighted parts among some total number of parts. This is true whether the model entails discrete quantities or continuous quantities that are associated, from the outset, with a given number of units. In principle, working with quantities that are not measured in conventional units may provide students with special opportunities for reasoning about relative magnitude.
Representations in the Cartesian plane are expected to help students regard literals, such as m and n, not merely as constants but, more generally, as variables. They can begin to view expressions such as m/n = 4/7 not simply as equalities but as equations. And they can learn to view an inequality such as m/n < 2/3 as referring to a region in the plane delineated by the graph of a linear function, where each point in the region corresponds to an ordered pair of numbers satisfying the inequality.
It is too early to know whether the model will make a contribution to mathematics education. Any potential benefits will depend on future developments and investigations. And, given the ambitious nature of the model, it would seem that a substantial investment of time and energy will be required for teachers to become familiar with the model and capable of using it productively in their classrooms. I would be grateful to hear from educators interested in working together on this.
An investigation on the impact of the Glasser method on improving quality of life: A case study of mentally retarded children's mothers
Article history: Received October 25, 2012; Received in revised format December 12, 2012; Accepted December 14, 2012; Available online December 14, 2012. This study aims to evaluate the effectiveness of a quality-of-life training program based on Glasser's method for parents of mentally retarded children. In our study, the sample was divided into an experimental group and a control group; a pretest was administered to both groups, Glasser's method was then delivered in six consecutive 150-minute sessions, and finally both groups were assessed statistically. The population of this survey consists of mothers of mentally retarded children, and the study was performed in the city of Esfahan, Iran. We selected a group of 60 mothers and divided them into two equal groups of 30. The World Health Organization Quality of Life questionnaire contains 26 questions, 24 of which measure physical health, psychological health, social relationships and environmental health with 7, 6, 3 and 8 questions, respectively. The results of the survey indicate that Glasser's training program significantly improved quality of life in our study. © 2013 Growing Science Ltd. All rights reserved.
Introduction
The birth of a child is one of the most enjoyable events for parents. The hope of having a healthy child normally helps them accept their newborn infant. However, the news that a child has some sort of disability can have serious consequences for parents, and hope may turn to despair. Every child is unique and special to his or her parents, but some children have special needs. Some of these children face mental disabilities and need the cooperation of other people in society. A recent survey on the number of children with mental disability revealed that there were approximately 1.5 million children with some sort of mental disorder.
Another survey indicates that over 25% of the world's population deals with people with mental disabilities, for instance as teachers, social workers, or parents. Children who suffer from mental disabilities normally need more care than other children do, and many studies indicate that their parents' general health often suffers (Hickman, 2000). Other studies show that parents of disabled children often feel guilty and blame themselves for their child's disability. Unfortunately, physicians do not provide much social assistance, and these families can become socially isolated (Wang, 2006). The first person who builds a direct and continuous relationship with such a child is his or her mother.
According to Bigham et al. (2012), children with mental disability often have mothers whose general health suffers. Bertelli et al. (2012) also indicate that families with mentally retarded children tend to be financially worse off than other families. Therefore, it is necessary to find appropriate methods to empower these parents, especially the mothers. In fact, when the quality of life of mothers with mentally retarded children improves, they can take better care of their children.
One of the primary issues in supporting children with serious mental disorders is to help parents cope with their children. According to Walsh et al. (2013), children with ASD show increased severity and incidence of pain symptoms compared with typically developing children and children with other disorders. They investigated pain and problem behavior as predictors of parent stress and also examined how parenting style interacted with pain and problem behavior to affect parent stress. The results of their study demonstrated that problem behavior was a moderating factor between parent stress and pain, and that there was an interaction between pain and problem behavior in predicting stress. Children with mental disabilities often have serious problems in learning. Channell et al. (2013) compared reading-related skills of youth with intellectual disability (ID) with those of typically developing (TD) children of similar verbal ability. The group with ID scored lower than the TD group on word recognition and phonological decoding, but similarly on orthographic processing and rapid automatized naming (RAN). Channell et al. (2013) suggested that poor word recognition in youth with ID could be associated with poor phonological decoding. Holmes (2008) determined the library usage, attitudes, and requirements of an underserved population, people with developmental disabilities, and offered insights to librarians on how to serve this population better.
In this paper, we present an empirical study of the effect of Glasser's (2003) method on mothers of children with mental disability. The organization of this paper is as follows: details of the survey are presented in Section 2 and the results are given in Section 3. The paper ends with concluding remarks summarizing the contribution of the paper.
The proposed method
The proposed study uses two groups with pre- and post-tests. Table 1 shows details of our design, where RG1 represents the pretest and RG2 the posttest. The study uses Glasser's questionnaire (Glasser, 2003; Glaser & Morreau, 1986; Colella et al., 1992). The sample was divided into an experimental group and a control group; a pretest was administered to both groups, Glasser's method was then delivered in six consecutive 150-minute sessions, and finally both groups were assessed. The population of this survey consists of mothers of mentally retarded children. We selected a group of 60 mothers and divided them into two equal groups of 30.
The World Health Organization Quality of Life questionnaire contains 26 questions, 24 of which measure physical health, psychological health, social relationships and environmental health with 7, 6, 3 and 8 questions, respectively (Sorensen et al., 2002). The first two questions are general and do not belong to any domain, while the other 24 questions measure these four perspectives on a Likert scale of 1-5. Questions 3, 4 and 26 are reverse-scored. We first explain the details of our training program.
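For readers curious about the mechanics, the sketch below illustrates reverse scoring on a 1-5 Likert scale (a score x becomes 6 - x) and a simple domain sum. The item-to-domain assignment shown is purely illustrative, not the official WHOQOL-BREF key, and the instrument's own score transformations are not reproduced here:

```python
REVERSED_ITEMS = {3, 4, 26}  # items scored in reverse, per the text

def corrected(item, score):
    """Reverse-score an item on a 1-5 Likert scale when required."""
    return 6 - score if item in REVERSED_ITEMS else score

def domain_score(responses, items):
    """Sum the corrected scores of the items assigned to one domain;
    `responses` maps item number -> raw 1-5 answer."""
    return sum(corrected(i, responses[i]) for i in items)

responses = {i: 3 for i in range(1, 27)}        # a flat example protocol
physical_items = range(3, 10)                   # hypothetical: 7 physical-health items
print(domain_score(responses, physical_items))  # 21
```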
First session
During the first session, we first distribute the questionnaire among participants before the session begins; we then introduce the participants to one another, present the rules and regulations, and explain the structure of all sessions. We also explain how participants were selected for this survey and present all the objectives. We explain internal and external control, the tools of internal and external control, the four groups of people who feel unfortunate, and the three basic theories associated with external control. We also explain that groups of three members need to be set up and, finally, we discuss the events of the next session.
Second session
At the beginning of this session, we briefly discuss human needs and introduce the five basic human needs. We also explain that different people have various expectations of society and the world, and we collect participants' feedback.
Third session
We explain how to establish better relationships with society and practice finding the causes of some problems. We discuss how to form a better perception of the real world and define four different categories of behavior: thoughts, feelings, actions and physiology.
Fourth session
We first review what was done in the previous session, explain the mechanistic model of life, and describe how to reach positive feelings. Another important issue is creativity; positive and negative creativity are explained in this session. We introduce the seven destructive habits in relationships: complaining, punishing, humiliating, blaming, grumbling, threatening and bribing. We also explain how people choose depression and the consequences of depression.
Fifth session
We review the previous sessions' programs and explain the seven effective habits for building relationships: caring, trusting, listening, supporting, discussing, befriending and encouraging people. We discuss in general how these seven factors help people build better relationships with others.
Sixth session
We review what was explained in the previous five sessions and address any questions and concerns. We set up some case studies involving conflicts and explain how to resolve the issues.
The main hypothesis of this survey is whether Glasser's method can help mothers of mentally retarded children improve the quality of their lives. It is stated as follows:

H0: Glasser's method does not help mothers of mentally retarded children improve their quality of life.
H1: Glasser's method helps mothers of mentally retarded children improve their quality of life.
The results
In this section, we present details of our findings; Table 1 summarizes the results.
Conclusion
In this paper, we have presented an empirical study measuring the effectiveness of a quality-of-life training program based on Glasser's method for parents of mentally retarded children. In our study, the sample was divided into an experimental group and a control group; a pretest was administered to both groups, Glasser's method was then delivered in six consecutive 150-minute sessions, and finally both groups were assessed. The population of this survey consists of mothers of mentally retarded children, and the study was performed in the city of Esfahan, Iran. We selected a group of 60 mothers and divided them into two equal groups of 30. The World Health Organization Quality of Life questionnaire contains 26 questions, 24 of which measure physical health, psychological health, social relationships and environmental health with 7, 6, 3 and 8 questions, respectively. The results of the survey indicate that Glasser's training program significantly improved quality of life in our study.
We recommend implementing Glasser's method in other communities and comparing the results; given a wider record of success, rules and regulations could be set to extend such programs on a regular basis. We also recommend examining other factors, such as the type and degree of the child's disability and the child's mood, in relation to the mother's quality of life.
Table 1
The summary of testing the main hypothesis of this survey

In order to examine the homogeneity of variances, we used Levene's test; the results are summarized in Table 2. The word "Group" in the last row of Table 3 refers to the experimental and control groups. Since the F-value in the last row equals 40.37 and is statistically significant, we can conclude that Glasser's training program significantly improved quality of life in our study. Therefore, we can confirm the main hypothesis of this survey.