id stringlengths 3 9 | source stringclasses 1 value | version stringclasses 1 value | text stringlengths 1.54k 298k | added stringdate 1993-11-25 05:05:38 2024-09-20 15:30:25 | created stringdate 1-01-01 00:00:00 2024-07-31 00:00:00 | metadata dict |
|---|---|---|---|---|---|---|
233472878 | pes2o/s2orc | v3-fos-license | Determination of the protective effects of Hua‐Zhuo‐Jie‐Du in chronic atrophic gastritis by regulating intestinal microbiota and metabolites: combination of liquid chromatograph mass spectrometer metabolic profiling and 16S rRNA gene sequencing
Background Hua-Zhuo-Jie-Du (HZJD), a Chinese herbal prescription consisting of 11 herbs, is commonly used in China to treat chronic atrophic gastritis (CAG). We aimed to determine the effect of HZJD on the microbiome-associated metabolic changes in CAG rats. Methods The CAG rat models were induced by 1-methyl-3-nitro-1-nitrosoguanidine (MNNG) combined with irregular fasting and 2% sodium salicylate, which was intragastrically administrated in fasted animals for 24 weeks. The CAG rats in the Chinese medicine (CM) group were administered a daily dose of 14.81 g/kg/day HZJD, and the vitacoenzyme (V) group were administered a daily dose of 0.08 g/kg/day vitacoenzyme. All animals were treated for 10 consecutive weeks, consecutively. Hematoxylin and eosin (H&E) staining was used to assess the histopathological changes in the gastric tissues. An integrated approach based on liquid chromatograph mass spectrometer (LC-MS) metabolic profiling combined with 16S rRNA gene sequencing was carried out to assess the effects of HZJD on CAG rats. Spearman analysis was used to calculate the correlation coefficient between the different intestinal microbiota and the metabolites. Results The H&E results indicated that HZJD could improve the pathological condition of CAG rats. The LC–MS results indicated that HZJD could significantly improve 21 gastric mucosal tissue perturbed metabolites in CAG rats; the affected metabolites were found to be involved in multiple metabolic pathways, such as the central carbon metabolism in cancer. The results of 16S rRNA gene sequencing indicated that HZJD could regulate the diversity, microbial composition, and abundance of the intestinal microbiota of CAG rats. Following HZJD treatment, the relative abundance of Turicibacter was increased, and the relative abundance of Desulfococcus and Escherichia were decreased in the CM group when compared with the M group. Spearman analysis revealed that perturbed intestinal microbes had a strong correlation with differential metabolites, Escherichia exhibited a negative correlation with l-Leucine, Turicibacter was negatively correlated with urea, and Desulfococcus exhibited a positive correlation with trimethylamine, and a negative correlation with choline. Conclusions HZJD could protect CAG by regulating intestinal microbiota and its metabolites.
Background
Chronic atrophic gastritis (CAG) is a disease involving chronic inflammation of the gastric mucosa and is considered to be an important precursor of gastric cancer. Previous studies have shown that CAG is the most common cause of cancer-related death and is ranked second among other cancer-related causes of death worldwide [1,2]. CAG has increased in incidence because of various factors, such as eating habits, smoking [3], and microbial infections [4]. The treatment of CAG mainly involves protecting the gastric mucosa, vitamin C supplementation, endoscopic minimally invasive treatment and eradication of Helicobacter pylori [5][6][7]. These treatments, however, are often accompanied by adverse reactions [8]; therefore, complementary and alternative intervention to treat CAG are urgently required. At present, traditional Chinese medicine (TCM) has been shown to be one of the most promising methods for the treatment of CAG [9][10][11][12].
In TCM, Hua-Zhuo-Jie-Du (HZJD) is a Chinese formula that is widely used in the treatment of CAG based on the TCM theories of clearing away heat and dampness, and removing turbidity and detoxification. HZJD (Peilan) and Gynostemma pentaphyllum (Thunb.) Makino (Jiaogulan). Previous studies have reported on the possible mechanism of HZJD in the treatment of CAG, which is mainly suggested to be due to the regulation of cell proliferation and apoptosis [13,14]. In a clinical study, we compared the clinical efficacy, gastroscopic efficacy and pathological efficacy of HZJD and ALaTanWu-WeiWan for gastric precancerous lesions, and found that the efficacy of HZJD was better than the control group (P < 0.05), the mechanism was thought to be associated with the decreased expression of HIF-1α, and VEGF and the increased expression of PTEN [15]. TCM is a coordinated system for the treatment of multitarget and multicomponent diseases. In a network pharmacology study, we constructed the "herb-compound-target-disease" network diagrams, and found that 156 active ingredients in HZJD had common targets with CAG [13]. Metabolomics is a systems biology approach that comprehensively describes the pharmacological action of drugs and the association between disease processes and specific biological pathways. Metabolomics has been increasingly applied to establish the therapeutic effect of TCM [16,17] and has the potential to determine the novel regulatory mechanism of HZJD. Liquid chromatograph mass spectrometer (LC-MS) is an analytical method that is suitable for the separation and quantification of hardto-volatile compounds or compounds with poor thermal stability. At present, the identification of metabolites has been widely used in metabolomics research [18].
Approximately 100 trillion gut microbes exist in the human body. Specifically, the gut microbiota are composed of large number of different bacteria that produce various metabolites and participate in important metabolic functions, such as membraneand energy metabolism [19,20]. Accumulating evidence has shown that the gut microbiota contribute to the regulation of gastrointestinal function [21]. Previous studies have shown that the abundance of bacteria in patients with CAG is increased with the reduced secretion of gastric acid and that the changes in intestinal microbiota contribute to the progression from intestinal metaplasia (IM) to gastric cancer [22][23][24]. However, the association between the metabolic phenotype and intestinal microbiota in CAG and the changes noted during HZJD treatment of CAG remain unclear.
The present study investigated the effect of HZJD on the microbiome-associated metabolic changes in CAG rats by employing an LC-MS-based metabolomics method coupled with 16S rRNA gene sequencing. To the best of our knowledge, this is the first study to examine the effect of HZJD on the intestinal microbiota and metabolites of CAG rats.
Animals and treatment
Sprague Dawley (SD) rats (body weight, 150-180 g), were obtained from Liaoning Changsheng Biotechnology Co., Ltd (Liaoning, China). All of the processes related to animal care and use were approved by the Institutional Animal Care and Use Committee of the Hebei University of Chinese Medicine (DWLL2019012). The rats were maintained in a temperature-controlled environment (24 ± 4 °C) with a 12/12 h light-dark cycle with free access to food and water. After 1 week of adaptation, eight rats were randomly selected as the normal group and the remaining rats were used to establish a CAG rat model. The replication of the CAG rat model was performed according to previous studies [11,12,25], with minor modifications. Briefly, the normal group had free access to clean water and a normal diet. The CAG rat models were induced by 1-methyl-3-nitro-1-nitrosoguanidine (MNNG, 200 µg/mL; Tokyo Chemical Industry Co., Ltd, Tokyo, Japan) combined with irregular fasting and 2% sodium salicylate (Sangon Biotech Co., Ltd, Shanghai, China) was intragastrically administrated to fasted animals for 24 weeks. Two rats were selected randomly from the model group and were humanely killed by intraperitoneal injection of sodium pentobarbital (140 mg/kg). The stomach was obtained for pathological examination to confirm that the CAG model has been successfully establishied. Subsequently, the CAG rat models were randomly divided into a model group (M group), a Chinese medicine group (CM group) and a vitacoenzyme group (V group) (n = 8 rats per group). Rats in the CM group were administered a daily dose of 14.81 g/kg/day HZJD, rats in the V group were administered a daily dose of 0.08 g/kg/day vitacoenzyme.
Sample collection and preparation
Following fasting without water for 24 h, the SD rats were humanely killed with sodium pentobarbital (140 mg/ kg i.p.) and the gastric tissues and feces were collected. Feces were stored at − 80 °C until required for further analysis by 16S rRNA gene sequencing. The stomach tissues were divided into two parts on an ice tray; one part was immersed in paraformaldehyde (Servicebio, Wuhan, China) for histological evaluation, and the other part was quickly stored at − 80 °C until required for analysis by LC-MS.
Hematoxylin and eosin staining
We conducted hematoxylin and eosin (H&E) staining to observe the histological changes in the tissues. Tissues were immersed in paraformaldehyde for 24 h and embedded in paraffin wax blocks. Then, the paraffinembedded tissues were cut into two consecutive slices (4 μm thickness), and the slices were placed on a glass slide for incubation, conventional dewaxing using xylene and dehydration via gradient alcohol. We performed H&E staining and observed the histopathology of the gastric mucosa under an optical microscope (Nikon Group Companies, Japan, model: ECLIPSE NI-U).
Liquid chromatography quadrupole time-of-flight mass spectrometer conditions and mass spectrometry
We conducted the analyses using UHPLC (1290 Infinity LC, Agilent Technologies) coupled to a quadrupole timeof-flight mass spectrometer (AB SciexTripleTOF 6600; Shanghai Applied Protein Technology Co., Ltd).
We performed hydrop interaction liquid chromatography (HILIC) separation using a 2.1 mm × 100 mm 1.7 μm column (ACQUIY UPLC BEH; Waters Corporation, Milford, MA, USA) to analyze the samples. In both positive and negative electrospray ionization (ESI) modes, the mobile phase contained solvents A (25 mM ammonium acetate and 25 mM ammonium hydroxide) and B (acetonitrile). The gradient used was as follows: 85% B at 1 min, linear decrease to 65% at 11 min, 0.1 min decrease to 40% and constant solvent composition for 4 min, 0.1 min increase to 85% and 5 min of equilibration time.
We performed reversed-phase liquid chromatography (RPLC) using a 2.1 mm × 100 mm 1.8 μm column (ACQUIY UPLC HSS T3; Waters Corporation). In the ESI positive mode, the mobile phase was as follows: A, water and 0.1% formic acid; B, acetonitrile and 0.1% formic acid. In the ESI negative mode, the mobile phase contained A, 0.5 mM ammonium fluoride and B, acetonitrile. The gradient was as follows: 1% B for 1.5 min, linear increase to 99% up to 11.5 min, constant solvent composition for 3.5 min, decrease to 1% for 0.1 min and 3.4 min of equilibration. The gradient flow rate was 0.3 mL/min and the column temperature was maintained at 25 °C. Each sample injection contained 2 µL.
During tandem mass spectrometry MS/MS analysis, the ESI source conditions were set as follows: The ion source Gas1 and Gas2 were 60 and the curtain gas was 30. The source temperature was set at 600 ℃ and the ion spray voltage floating was set at 5500 V. In the MS-only acquisition mode, the instrument was set to acquire a m/z range of 60-1000 Da and the TOFMS scan accumulation time was set to 0.20 s/ spectra. The MS/MS automatic acquisition was set to acquire a m/z range of 25-1,000 Da and the product ion scan accumulation time was set to 0.05 s/spectra. The product ion scan adopted information dependent acquisition and the high sensitivity-mode was selected. The parameter settings were as follows: fixed collision energy 35 v and 15 eV and dimming potential 60 v (+) and 60 v (−). We performed the isotopes within 4 Da and monitored 10 candidate ionsper cycle.
16S rRNA gene sequencing analysis
We isolated total DNA from rat fecal pellets using the cetyl trimethyl ammonium bromide/sodium dodecyl sulfate (CTAB/SDS) method. We selected the V3-V4 region of 16S rRNA genes using specific primers with the following barcode: 341 F-806R. The polymerase chain reaction (PCR) products were detected by 2% agarose gel electrophoresis and the target fragments were cut and recovered. The AxyPrepDNA (Axygen Scientific Inc. USA) gel recovery kit was used.
We detected the PCR amplification products by the QuantiFluor ™ -ST blue fluorescence quantitative system (Promega Corporation, Madison, WI, USA). The corresponding proportions were mixed according to sequencing volume requirements of each sample. Subsequently, we used the Ultra ™ DNA Library Prep Kit (NEB, USA) for library construction. The library was sequenced on a computer following quality-check performed by agilent bioanalyzer 2100 (Agilent, Palo Alto, CA, USA) and qubit.
We performed sequence analyses using the UPARSE software package with the UPARSE-OTU and UPARSE-out ref algorithms. In-house perl scripts were used to analyze the alpha (within samples) and beta (among samples) diversity values. We assigned sequences with in-house perl to the same operational taxonomic units (OTUs).
Statistical analysis
One way ANOVA (GraphPad Prism 8) was used to analyze the changes in metabolites and abundance of the intestinal microbiota among different groups, P < 0.05 was considered statistically different. Date were expressed as means ± standard deviation.
Morphological changes of the gastric mucosa
We conducted histological examination of gastric mucosa samples derived from rats. We employed H&E staining to evaluate the pathological changes. In the N group, visual observation indicated that the gastric mucosa was smooth and mainly red and that the gastric folds were well-arranged. H&E staining showed that the gastric mucosal epithelial cells and the glands had well preserved integrity and were arranged regularly. We did not observe any expansion or hyperemia or any inflammatory cellular infiltration in the submucosa and muscularis.
In contrast to these findings, the gastric mucosa of CAG rats exhibited a white color, reduced number of gastric folds and visible white nodules. H&E staining demonstrated disorders of gastric mucosal glands in the CAG rats accompanied by inflammatory cellular infiltration and muscularis. Taken together, these results illustrated that the CAG rat model was established successfully.
The gastric mucosa morphology was significantly improved in the CM group compared with the M group. The gastric mucosa of rats in CM group was smooth and exhibited a red color with a small percentage of white nodules. H&E staining indicated that the glands were neatly arranged and a complete glandular tube structure was clearly observed. No abnormal thickening of the muscularis mucosa was noted. In the V group, the gastric mucosa was mainly white, but with obvious white nodules. H&E staining showed that the glands were disordered, inflammatory cell infiltration was visible, and there was no clear pathological improvement. On the basis of these pathological manifestations, we considered HZJD to be effective for the treatment of CAG and found that it was better than vitacoenzyme treatment (Fig. 1).
LC-MS method validation
We used XCMS software (SIMCA-P 14.1, Umetrics, Umea, Sweden) to extract the metabolite ion peaks of all samples. A total of 13,150 peaks were obtained in the positive ion mode, and 9173 peaks were obtained in the negative ion mode. We obtained the principal component analysis (PCA) model by pareto-scaling conversion of all peaks. As shown in Fig. 2a, b, the quality control (QC) samples were closely clustered in the positive and negative ion modes, which proved that the experiment had good repeatability. We performed pearson correlation analysis on the QC samples. The abscissa and ordinate marked the logarithm of the intensity value. Generally, a correlation coefficient greater than 0.9 indicated a good correlation. As shown in Fig. 2c, d, all correlation coefficients were greater than 0.9, indicating that the instrument analysis system was stable and the data could be used for subsequent analysis.
HZJD-treated gastric tissue metabolic profiling analysis
The PCA model reflected the variability between groups and within groups. The PCA model (Fig. 2a, b) showed that the M group had two samples that overlapped with the HZJD group and the V group. To further observe the changes in the metabolites, we used PCA analysis to analyze the N and M groups, the M and CM groups, and the M and V groups. As shown in Fig. 3, the N and M group had one sample overlapping, and the CM and V group had two samples overlap with the M group, but the overall trend of distinction was clear, which indicated that the metabolites of the CAG rats had changed. Subsequently, the orthogonal partial least square discriminate analysis (OPLS-DA) model was constructed, and crossvalidation proved that the model was reliable (Fig. 4). In the OPLS-DA model, a distinct separation was presented between the M and the N groups, the M and CM groups, and the M and V groups, suggesting that the concentration levels of the metabolites in the CAG rat models were significantly changed. Next, we used fold change (FC) analysis and t-test to further analyze the changes between the two groups of samples. As shown in Fig. 5, the M group and the N group, the M group and the CM group, and the M group and the V group were significantly separated in the positive and negative ion modes.
Based on the difference between the N and M groups of the OPLS-DA model, variable values were set at variable importance for the projection (VIP) > 1 and P < 0.05 to assess the differential metabolites that lead to the occurrence of CAG. The biomarkers were identified according to their structures in the Human Metabolome Database (HMDB; http:// www. hmdb. ca/). Notably, we identified 68 metabolites in the current study. To evaluate the rationality of metabolites, and to intuitively display the relationship between samples and the differences in the expression of metabolites in different samples, we used the level of expression of metabolites to perform hierarchical clustering of each group of samples; this allowed us to accurately screen marker metabolites and study the changes in related metabolic processes. As shown in Fig. 6, in this study, compared with the N group, the Following HZJD treatment, 21 metabolites showed a recovery trend (Fig. 7). Based on the differential metabolites, we employed the Kyoto Encyclopedia of Genes and Genomes (KEGG) (http:// www. kegg. jp/) to explore the most relevant pathways. We used Fisher's exact test to analyze the significance level of metabolite enrichment of each pathway to determine the metabolic and signal transduction pathways that were significantly affected. The top 20 affected transduction pathways are described in Fig. 8 and include the mTOR signaling pathway and choline metabolism in cancer.
HZJD-induced changes in the gut microbiome
To judge whether the grouping was meaningful, we used analysis of similarities to test whether the difference between the groups was greater than the difference within the group. The R value was between (− 1, 1), and the R > 0, indicating that the difference between the groups was significant. The R < 0 indicates that the difference within the group was greater than the difference between the groups. The reliability of the statistical analysis is expressed by P value, and P < 0.05 indicates statistical significance. In the present study, R = 0.471, P = 0.001 (Fig. 9a), which indicating that the difference between groups was greater than the difference within groups, and that the grouping is reasonable.
The sequences were clustered into OTUs with 97% identity. The OUTs in the M group were significantly different, and the rank-abundance of the slope of the M group was gentlerand spread wider on the horizontal axis compared with the N group. Following HZJD and vitacoenzyme treatment, the abundance and uniformity of species had changed significantly, and the N, HZJD, and V groups showed a consistent trend (Fig. 9b). We next sought to assess whether the microbiota composition differed among the N, M, CM and V groups. To assess this, we conducted principal coordinates analysis (PCoA) on the unweighted uniFrac distance to analyze the difference in gut microbial community structures. As shown in Fig. 9c, there was no obvious separation Differential metabolite hierarchical clustering diagram. The ordinate represents the metabolites that are significantly differently expressed, and the abscissa is the sample information. Red represents significantly up-regulated metabolites, blue represents significantly down-regulated metabolites, and the gray part represents no quantitative information on the metabolite between the N and M groups, the M group was clearly separated from the CM and V groups, and the CM group could not be clearly separated from the V group. These findings indicated that both HZJD and vitacoenzyme influenced the gut microbial composition and that oral administration of HZJD ameliorated the gut microbiome of CAG rats to a certain extent.
HZJD altered the abundance of the gut microbiome
To analyze the intestinal microbiota that caused this difference in abundance, we employed linear discriminant analysis (LDA) and effect site linear discriminant analysis effect size (LEfSe) to explore the differences between the N and M groups through an analysis of taxon abundance in the gut microbiota. A cladogram was obtained by the LEfSe method, and Fig. 10a shows the different microbial communities of each group at different levels. Fifteen different genera were identified in the intestinal microbiota of the N and M groups (LEfSe LDA > 2 and P < 0.05). Prevotella, Coprococcus, Turicibacter, Sutterella, Oceanobacillus, Sporosarcina, and Jeotgalicoccus were increased in the N group, whereas Desulfococcus, Fusobacterium, Proteus, Bifidobacterium, Allobaculum, Desulfovibrio, Escherichia, and Lactobacillus were increased in the M group (Fig. 10b). Following HZJD and vitacoenzyme treatment, compared with the M group, the relative abundance of Turicibacter, Sporosarcina and Jeotgalicoccus increased; the abundance of Desulfococcus, Escherichia, and Allobaculum decreased; and the abundance of Turicibacter, Desulfococcus and Escherichia had statistically significant changes in the CM group. These results indicated that HZJD contributed to the prevention of the microbiota changes noted in the CAG rats (Fig. 11).
Correlation of the gut microbiota and gastric tissues metabolic phenotype in CAG rats
We conducted further analysis to explore the functional correlation between the metabolite levels and the microbial identity. First, we organized the relative abundance of the bacterial populations with significant differences at the genus level (LEfSe LDA > 2 and P < 0.05) obtained by 16S rRNA gene sequencing analysis, and the significantly different metabolites obtained by LC-MS analysis (VIP > 1 and P < 0.05) in a table as input files for subsequent analysis. Taking into account the non-normal distribution of the original data, we used the spearman analysis method to calculate the correlation coefficient between the significantly different intestinal microbiota and the significantly different metabolites in the experimental samples.
Notably, we identified strong correlations were noted for the threshold of | r | < 1. The correlation matrix indicated correlations between the gut microbiota and the gastric tissue metabolic phenotypes (Fig. 12). Each row of the heat map represents a significantly different genus, and each column represents a significantly different metabolite. The tree branch on the left represents the results of clustering among the different genera, and the upper tree branch represents the results of the cluster analysis of the different metabolites. Clusters appearing in the same cluster with significantly different metabolites or different bacterial genera followed have similar correlation patterns.
Unlike the spearman correlation analysis hierarchical clustering heat map, the scatter plot reflected the correlation between a single significantly different metabolite and a significantly different genus. Figure 13 depicts several typical gut intestinal microbiota-associated metabolites that were highly associated with specific intestinal bacteria to demonstrate the functional correlation between intestinal microbiota and metabolites.
Discussion
This study explored the therapeutic mechanism of HZJD on CAG rats. In rats, CAG caused the gastric mucosa to turn white, disordered gland arrangement, and inflammatory cell infiltration. Following HZJD treatment, the pathological performance was improved, demonstrating that HZJD had a therapeutic effect on CAG (Fig. 1). Next, we performed LC-MS and 16S rRNA gene sequencing to probe the effect of HZJD on its metabolites and the intestinal microbiome of CAG rats. The data clearly showed that HZJD had a therapeutic effect on the metabolites and intestinal microbiota of CAG rats (Figs. 3,4,5,6,7,8,9,10 and 11). In addition, spearman analysis showed that these perturbed intestinal microbiota were strongly associated with changes in several related metabolites (Figs. 12 and 13). Accumulating evidence has shown that metabolites related to gut microbes under drug intervention are important factors for regulating tissue function and improving health [26,27]; thus, the mechanism of action of HZJD treatment of CAG might be due to regulation of the perturbed intestinal microbiota and its metabolites. These findings may provide mechanistic insights into HZJD treatment of CAG. In the present study, we identified 68 metabolites associated with CAG in rat gastric mucosal tissues by LC-MS (Fig. 6). The results showed that HZJD had a therapeutic effect against the alterations of 21 metabolites induced by CAG (such as choline, l-leucine, and l-serine) (Fig. 7). These affected metabolites were found to be involved in the central carbon metabolism in cancer, the mTOR signaling pathway, and the choline metabolism in cancer (Fig. 8). The 16S rRNA gene sequencing results demonstrated that CAG caused changes in the microbiota composition and relative abundance of the intestinal microbiota. Following HZJD treatment, at the genus level, the relative abundance of Turicibacte increased, Desulfococcus and Escherichina decreased in the CM group compared with the M group (Fig. 11). Amino acids play an important role in regulating the energy balance in organisms, and the central carbon metabolism, namely energy metabolism, is the main source of energy for organisms. Leucine is a branched-chain amino acid that is involved in energy metabolism in the body, and plays an important role in inflammatory reactions and autophagy. The glycine, serine and threonine metabolism provides precursors for the tricarboxylic acid (TCA) cycle, all of which are involved in energy metabolism. In the present study, the levels of l-leucine and l-serine decreased in the M group compared with those in the N group, indicating that CAG energy was insufficient, which was consistent with the study by YueTao Liu et al. [17]. Previous studies have shown that there is an insufficient cellular energy supply in CAG and that decreased cellular energy metabolism may be one of the causes of gastric mucosal atrophy [28,29]. Following HZJD treatment, l-leucine and l-serine increased in the CM group compared with the M group, suggesting that HZJD may mediated its effects in CAG by supplementing energy for the gastric mucosa.
Intestinal bacteria can affect the health of the host by regulating amino acids [28]. Evidence has shown that changes in intestinal microbes affect the utilization of amino acids, and microbes were the initial step of amino acid catabolism [30]. In the present study, l-leucine exhibited a negative correlation with Escherichia ( Figs. 12 and 13). Leucine is one of the essential amino acids, and it plays an important role in the inflammatory response [31], Escherichia is one of the most common causes of bacterial infections in humans and animals and also is the primary cause of gastroenteritis [32]. Previous studies have demonstrated that l=leucine was the most effective amino acid at excluding Escherichia in a microfluidic assay and its concentration was inversely proportional to the number of Escherichia bacteria [33,34], which is consistent with the results of this study. In the current study, l-leucine was decreased and the relative abundance of Escherichia was increased in the M group compared with the N group. Following HZJD treatment, the concentration of l-leucine and the relative abundance of Escherichia returned to normal levels, indicating that the therapeutic effect of HZJD on CAG may be achieved by reduce the abundance of Escherichia and increasing the l-leucine levels to provide energy for cells.
Autophagy also plays an important role in CAG, and enhanced autophagy may be one of the mechanisms leading to gastric precancerous lesions [35]. The effect of amino acids on autophagy is mediated by the mTOR signaling pathway, and l-leucine is an important nutritional signaling molecule that regulates mTOR [36]. Autophagy is pivotal for the maintenance of intestinal homeostasis, and an increase in the number of Escherichia bacteria has been shown to decrease the levels of autophagy [37,38]. mTOR is the core protein involved in the regulation of autophagy, and Escherichia regulation of l-leucine also may be mediated by the mTOR signaling pathway, which, in turn, also may affect autophagy. However, the mechanism remains to be examined in future studies. Microbial infections is one of the causes of CAG [4], in which the imbalance of intestinal microbiota and inflammation function together to cause damage to the mucosa [30]. CAG is a multistep, multifactor, and continuous process of inflammation [25], and persistent inflammation in the stomach is one of the important causes of intestinal microbial disorders. Arachidonic acid is an unsaturated fatty acid that is esterified in the cell membrane in the form of phospholipids. It is produced by inflammationstimulated phospholipase and plays an important role in gastrointestinal function [39]. We previously have shown that arachidonic acid is positively correlated with Allobaculum (Figs. 12 and 13). Allobaculum is dominant in the gastrointestinal tract, and is closely related to inflammation [40]. Evidence has shown that increases in the abundance of Allobaculum may be an indicator of cancer [41,42]. It is also known that microorganisms in the gut have lipases that can degrade phospholipids into polar head groups and free lipids [28]. Allobaculum may degrade arachidonic acid into free arachidonic acid, and although the mechanism remains unclear, current research suggests that inflammation may be the link between them. The increased abundance of Allobaculum increases the level of gastrointestinal inflammation, and the level of arachidonic acid increases under inflammatory conditions. Following HZJD treatment, the abundance of Allobaculum and arachidonic acid levels decreased, suggesting that HZJD could reduce the abundance of Allobaculum, inhibit inflammation, and thereby reduce the level of arachidonic acid. Turicibacter, an important member of the gut microbiota, is considered to be a healthy bacterial genus with anti-inflammatory effects, and may provide evidence that HZJD regulates intestinal microbiota and inhibits inflammation [31][32][33]. Microbes can break down urea through urease, and elevated urea indicates inflammation and infection in the body [43]. In the present study, Turicibacter was negatively correlated with urea ( Figs. 12 and 13), in that the abundance of Turicibacter decreased and urea levels increased in the M group compared with N group. Following HZJD Fig. 11 Effect of HZJD on the relative abundance of intestinal microbiota at the genus level. Compared with the M group, **p < 0.01; *p < 0.05. Compared with the N group, ## p < 0.01; # p < 0.05 Fig. 12 Spearman correlation analysis hierarchical clustering heat map of significantly different intestinal microbiota and significant difference metabolites. Each grid in the figure contains information on the correlation coefficient r and p-value, and the correlation coefficient r is represented by color. r > 0 means positive correlation, which is expressed in red, and r < 0 indicates a negative correlation, which is expressed in blue. The darker the color, the stronger the correlation. P-value reflects the significant level of correlation. P-value reflects the significance of the correlation, *0.01 < p < 0.05; **p < 0.01 treatment, both showed a recovery trend, indicating that the HZJD could increase the abundance of Turicibacter, improve the anti-inflammatory ability of the gastrointestinal tract, and inhibit the development of CAG.
The use of HZJD also can changes the concentrations of some compounds participating in choline metabolism in cancer, such as choline, phosphatidylcholine (PC), and glycerophosphocholine glycerol-3-phosphate. Choline and phosphorylcholine are important components of phospholipids that are essential for the stability and integrity of cell membranes, and choline deficiency may cause gastric mucosal damage similar to CAG [44]. The decrease in choline in the M group proved that gastric mucosal cells were damaged [16]. The level of choline increased compared with the M group after the intervention of HZJD, suggesting that HZJD could increase choline levels to protect the gastric mucosa. Choline is metabolized by the intestinal microbiota and enters the body to be converted into trimethylammonium, which has been confirmed to have a role in inflammatory diseases [45]. Interestingly, Desulfococcus exhibited a positive correlation with trimethylamine, and a negative correlation with choline in the current study (Fig. 12). Desulfococcus is the dominant microbiota of sulphate-reducing bacteria, which can reduce sulfate to produce H2S, which can poison intestinal epithelial cells. In the present study, compared with the N group, the abundance of Desulfococcus increased in the M group, the choline level decreased in the M group, and the trimethylammonium increased in the M group. After treatment with HZJD, however, there was a trend of recovery. These results showed that HZJD could reduce the metabolism of choline to trimethylammonium by regulating the intestinal microbiota, inhibiting inflammation, and protecting the gastric mucosa. Furthermore, these findings indicated that HZJD can treat CAG by regulating Desulfococcus to interfere with choline metabolism.
In summary, we combined LC-MS and 16S rRNA gene sequencing to assess the impact of HZJD on the gut microbiome and its metabolic profiles in CAG rats. We used spearman analysis to calculate the correlation coefficient between the significantly different intestinal microbiota and the significantly different metabolites. Metabolomic analysis revealed that HZJD altered a variety of metabolites, which revealed its role in the central carbon metabolism in cancer, the mTOR signaling pathway, and the choline metabolism in cancer. 16S rRNA gene sequencing indicated that HZJD could regulate the diversity, microbial composition, and abundance of the intestinal microbiota of CAG rats. Spearman analysis revealed that perturbed intestinal microbiota had a strong correlation with differential metabolites, Escherichia exhibited a negative correlation with L-Leucine, Turicibacter was negatively correlated with urea, and Desulfococcus exhibited a positive correlation with trimethylamine, and a negative correlation with choline, and HZJD could protect CAG by increasing the abundance of Turicibacter, reducing the abundance of Desulfococcus and Escherichia affecting the level of related metabolites.
Conclusions
HZJD could protect CAG by regulating intestinal microbiota and its metabolites. 13 Scatter plot indicating the association between specific gut microbes and metabolites. The scattered points in the figure represent samples; rho is the spearman correlation coefficient between the relative abundance of the strain and the metabolite intensity value, and the p-value is the significant level of rho. a Escherichia is negatively related to l-leucine. b Allobaculum is positively related to arachidonic acid. c Turicibacter is negatively related to urea. d Desulfococcus is negatively related to choline | 2021-05-02T13:36:35.943Z | 2021-05-01T00:00:00.000 | {
"year": 2021,
"sha1": "fdfffe0bf49b0086cdfa6e3a1d216efa3238146e",
"oa_license": "CCBY",
"oa_url": "https://cmjournal.biomedcentral.com/track/pdf/10.1186/s13020-021-00445-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a849f29139a33e11eb03afc6c58c5fafeea1deb2",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211105866 | pes2o/s2orc | v3-fos-license | Application of Platelet-Rich Fibrin as Regeneration Assistant in Immediate Auototransplantation of Third Molar with Unformed Roots: Case Report and Review of Literature
Background Autogenous Tooth Transplantation (ATT) is the surgical movement of a maturely or immaturely formed tooth from its original site to another extraction site or a surgically prepared socket in the same individual. The most important factor in the healing process after autotransplantation is the presence of intact and viable periodontal ligament cells, which have the ability to differentiate into osteoblasts and able to induce bone production. ATT can successfully replace removable dentures as a restoration option in a growing patient, while implants can be placed only after skeletal maturity is attained. Case Presentation. In this case, we presented an immediate ATT of the third molar with unformed roots to the extraction socket of the first molar with evidence of continued root formation after 2 years of follow-up. Conclusion Platelet-Rich Fibrin (PRF) can induce sustainable and accelerated healing, and it can also induce the regeneration process of the periodontal tissues and pulpal formation. This process plays a key role in future root development and success rate.
Background
Autogenous Tooth Transplantation (ATT) is the surgical movement of a maturely or immaturely formed tooth from its original site to another extraction site or a surgically prepared socket in the same individual [1][2][3]. Vidman et al. in 1915 was the first to report autotransplantation [4]. Since then, the procedure has gained popularity in premolars and canines. Slagsvold and Bjercke in 1974 reported the results of the autotransplantation of 34 premolars with incompletely formed roots done in the period between 1959 and 1970 with a mean follow-up duration of 6 years. He showed a 100% survival rate with the maintained ability of the transplanted teeth to complete root development [5].
The biology of successful autotransplantation depends on the ability of root periodontal (PDL) cells to differentiate and induce dentine and cementum formation. PDL cells have the ability to differentiate into osteoblasts and are able to induce bone formation. Andreasen et al. showed that the presence of intact and viable periodontal ligament cells is considered to be the most important factor to have a successful healing process after autotransplantation [6]. PDL healing has a better prognosis in the case of incomplete roots transplanted in fresh extraction sockets than in the case of surgically formed sockets [7][8][9].
Many reports have emphasized that the optimum time to achieve a successful ATT is when the tooth root has reached two-thirds to three-quarters of its expected length. However, other factors play an important role including (1) performing atraumatic extraction to preserve Hertwig's root sheath and insure future root growth [10], (2) decreasing the duration of the teeth out of socket before implantation [11], (3) having the apical foramen dimensions > 1 mm to increase the probability of postoperative revascularization [12], and (4) maintaining a good alveolar bone support at the time of ATT [13].
ATT can be a permanent aesthetic restoration for patients who are not suitable for dental implants or fixed prosthesis. It has the advantage of low cost and the ability to move the teeth orthodontically in the future, if needed. Moreover, patients who undergo ATT maintain normal chewing and arch integrity [14], pulpal viability, and preserve periodontal ligament health, maintain normal proprioception reflexes, and stimulate eruption in growing patients [15]. ATT has the ability to preserve bone level and induce alveolar bone formation, which is important to keep the dental implant viable after the patient completes growth, even in the case of failure [7,13,16].
There are other types of autotransplantation including intra-alveolar transplantation, i.e., when the position of the tooth is changed within the original socket, e.g., surgical uprighting and intentional replantation, and when the tooth is replanted in the original socket after intentional extraction for treatment of endodontic lesions [17]. ATT is classified based on the time of procedure after the extraction of the recipient tooth, i.e., immediate or delayed transplantation after an initial phase of healing [18].
ATT is indicated to replace missing congenital teeth. In most of these cases, the source of the donor tooth is the crowding in the opposing arch [16]. ATT of lower premolars transplanted in the upper maxillary incisor sockets after traumatic tooth loss were also reported successfully [19]. ATT can be used for the reconstruction of marginal mandibular resection assisted with orthodontic treatment. Osterne et al. reported a successful case of the reconstruction of a mandibular alveolar bone defect in the region of the lower left canine and premolars post ameloblastoma resection with autotransplantation of the immature third molars followed by orthodontic treatment [11].
There is a high incidence of first molar loss in the pubertal patient because of caries and periodontal problems [7,14,20]. In these cases, ATT can successfully replace removable dentures, while implants can be placed only after skeletal maturity is attained [21].
In this case report, we present an immediate ATT of the third molar with unformed roots in the extraction socket of a first molar. We used Platelet-Rich Fibrin (PRF) to accelerate the healing and regeneration process of the periodontal tissues and pulpal formation. This process plays key role in future root development and success rate.
1.1. Case Report. A 16-year-old female patient presented in the Oral and Maxillofacial Surgery Clinic at the Royal Medical Services of Jordan-Prince Rashid Hospital. The patient was referred from the Conservative Clinic at the same institution complaining from pain and tenderness on percussion at the area of tooth number 19. The patient's medical records showed no medical history. Dental examination has showed that the patient has extensive class II caries into subgingival level in tooth number 19 with necrotic pulp. Panoramic radiograph showed a periapical condensing osteitis with deep class II caries of tooth number 19 as shown in Figure 1. After consulting with the endodontist, several treatment options were presented including root canal treatment with crown lengthening, extraction followed by ATT, or extraction followed by future implant or fixed prosthesis. The patient and her parent were interested in ATT, after considering the low cost in comparison to the implant option. Hence, the decision was made to extract tooth number 19, followed by ATT and application of PRF at time of transplant.
Treatment.
A consent form authorizing the procedure and explaining treatment risks and complications was signed by the patient's parent. On the day of ATT and one hour before the extraction, PRF was prepared according to Choukroun's protocol. Ten cc of venous blood was withdrawn, and blood was centrifuged at 3000 rpm for 10 minutes. The sealed tube was stored at 4°C and ready to use.
Extraction of Tooth Number 19.
The patient underwent local anesthesia of the left inferior alveolar, lingual, and buccal nerves with 2 ampules of 1.8 mm of 2% articaine with 1 : 100,000 epinephrine. Once a profound anesthesia was confirmed, atraumatic extraction of tooth number 19 was done by straight elevator with minimal luxation. Extraction socket walls and apexes were cleaned from any granulation tissue by curettage of the socket and copious irrigation with 0.9% saline (Figure 2). After confirming profound anesthesia, tooth number 16 was exposed using a small mucoperiosteal flap with mesial releasing incision distal to tooth number 15 with minimal bone removal. Tooth number 16 was moved with caution to the attached follicle and was stored in 0.9% saline. Copious irrigation of the transplant socket with 0.9% saline and smooth bony edges were confirmed before closure of the flap with interrupted 4/0 vicryl sutures.
ATT of Third Molar in Tooth Number 19
Socket. At the transplanted socket, PRF was separated from red corpuscles and placed in the socket. The donator third molar was positioned on the recipient site and adjusted to 1 mm infraocclusion position by trimming interradicular bone, then returning back to saline. The implanted tooth was secured with 0.7 mm wire and composite resin from tooth number 18 to number 20 (Figures 3 and 4).
Immediate postoperative periapical radiograph was taken to ensure the accurate position of the transplanted tooth in the recipient site as shown in Figure 5. In addition to postextraction instructions, the patient was instructed to rinse with 0.12% chlorhexidine gluconate mouthwash two times/day and was prescribed acetaminophen 500 mg tablets three times/day for 3 days, then PRN.
1.6. Postoperative Follow-Up. During the follow-up visit on day 7 postoperatively, the healing of the donor site and the implanted tooth were examined and no issues were reported. Subsequent follow-up was done on a weekly basis. During the 4-week follow-up visit, the splint was changed from nonrigid to flexible orthodontic wire. Subsequent follow-up visits were done at 3, 6, and 12 months.
One year later, the implanted tooth showed continuous root formation as shown in Figure 6, with normal periodontium, tooth mobility, and vital pulp testing. The patient was satisfied and did not report any complaint. On the 2-year follow-up visit, periapical radiograph showed continuous root development reaching >1 root/crown ratio with open apex (Figure 7), and no signs of root resorption confirming the previous findings (Figure 8).
Discussion
Tooth loss during puberty age especially the first molar is not uncommon. In such cases, limited treatment options are available, such as ATT and implants. Implants are not indicated because they lack the ability to grow and may result in unsightly infraocclusion if placed during growth spurt. Such growth issues warrant the consideration of refined treatment choices with growth-friendly treatment modalities in growing children and/or adolescents.
ATT is considered as a reliable treatment option with proven success. However, for reporting the long-term evaluation of results, the researchers use the survival rate for describing the percentage of transplanted teeth still present at the time of examination [13]. On other hand, successful ATT should fulfil specific criteria including viable tooth with stable occlusion, no pathological mobility or pocket probing depths, and without evidence of ankylosis, root resorption, or infection [22,23]. Schwartz et al. in 1985 [24] reported the survival rate of 210 autotransplanted teeth and found a 76.2% survival rate following 5 years postoperatively, while the 10-year survival rates dropped to 59.6%. A more recent study by Andreasen et al. [12], evaluating the long-term survival rate and pulpal healing of 370 autotransplanted premolars, with complete and incomplete root formation, has showed that survival rates of the completed root teeth is 98% and 95% for the uncompleted root teeth after an observation period ranging from 1 to 13 years. Andreasen et al. noticed that pulpal healing was highly related to the stage of root development at the time of transplantation. Teeth transplanted with incomplete and complete root formation showed 96% and 15% pulp healing, respectively.
Other studies showed that the ATT success rate may vary between mature and immature teeth. Kugelberg et al. [25] reported a 96% success rate for 23 immature teeth and 82% for 22 mature teeth following 4 years of follow-up. In 2010, Yan et al. [26] and Mensink and Van Merkesteyn [27] reported a 100% success rate of open apex teeth following more than 4 years.
The most recent systemic review and meta-analysis by Rohof et al. [28] in 2018 showed that ATT is a reliable treatment option with survival and success rates of autotransplantation of immature teeth >95% and with complications rates <5% in terms of ankylosis (2.0%), root resorption (2.9%), and pulp necrosis (3.3%).
Although ATT of mature and immature teeth is associated with a high success rate, endodontic treatment is usually necessary in mature teeth within 4 weeks to prevent development of pulp-associated lesion. On the other hand, ATT of immature teeth showed the ability of continuous pulp healing and reinnervation. Andreason et al. found that pulpal healing was the usual finding in teeth with immature root formation. He found necrotic pulp that necessitated endodontic treatment in all autotransplanted teeth with completely formed roots. However, according to Denys et al., the teeth with a root length less than half is associated with increased risk for arrested root development rather than more mature ones. His recommendation is that two-thirds to three-quarters of the expected root length is needed to optimize outcomes [30].
PRF was introduced by Choukroun et al. in 2001 as the second generation of platelet concentrates, composed of autologous leukocyte-platelet-rich fibrin matrix and containing various mitogenic factors such as platelet-derived growth factor, vascular endothelial growth factor, and transforming growth factor released by an α granule [31]. According to Choukroun, PRF is prepared simply by centrifuging a patient's own blood at 3000 rpm for 10 minutes without the Case Reports in Dentistry need for thrombin or anticoagulant additives [31]. After centrifuging, PRF collected as the middle layer containing growth factors [32]. PRF stimulates angiogenesis through migration, division, and phenotypic change of endothelial cells. It also promotes cell mitosis and induces osteogenesis without inflammatory reactions. These effects act in a slow sustained process for at least one week [33] and up to 4 weeks [34,35]. Dohan Ehrenfest et al. showed that PRF can induce a strong and continuous differentiation and stimulation of osteoblasts for 14 days with fibroblasts [36]. Moreover, PRF has shown successful results when used as a sole agent in periodontal regeneration like clinical attachment loss and intrabony defects [37,38]. PRF has also shown to be effective in regenerative endodontics. Wang et al. evaluated the effect of PRF in the regenerative therapy of immature permanent canines [39]. PRF was able to increase the thickness of dental-associated mineral tissue. After 12 months follow-up, Bakhtiar et al. reported radiographic evidence of further root development and apical closure in 4 immature teeth with necrotic pulps [40].
Root formation and development are induced by Hertwig's epithelial root sheath (HERS), which is a bilayered epithelium that functions as a stimulator of mesenchymal cell differentiation into odontoblasts and cementoblasts [41]. Within the scope of PRF effects in dental regeneration, the possible mechanism for root development after transplanting third molars with less than a quarter root formed could be explained by the fact that PRF contains a dense fibrin network and a concentration of many growth factors like platelet-derived growth factor and vascular endothelial growth factor in sustainable way. The most important factor is transforming growth factor β1 (TGFβ1), which is also secreted by HERS that induces positively the differentiation of dental papilla cells into odontoblasts ensuring a suitable environment for PDL cells to proliferate and the synthesis of an extracellular matrix [42].
Application of PRF in ATT has been reported successfully in 2 case reports followed for 6 months. Robindro Singh et al. has used PRF in the ATT of an impacted central incisor in a prepared socket as the second stage after the excision of odontoma. He applied the PRF membrane around the tooth at the time of ATT. His 6-month follow-up showed successful results with signs of root developments [42]. Devi et al. reported the immediate ATT of an immature third molar in the socket of an adjacent tooth. Radiographic and clinical examinations have shown successful results [3]. In addition to normal mobility and pocketing examination, two years of radiographic evaluation showed continued root development, as shown in the 3-year follow-up periapical radiograph after transplantation with other favorable clinical measurements.
Conclusion
From the presented case, we conclude that the benefits of including PRF in autotransplantation of teeth with immature roots, even when less than a quarter of the roots have formed, acts positively during the immediate and late regeneration process. PRF can eliminate the risk of arrested root formation and the need for pulpal treatment and can decrease the risk of complications.
Conflicts of Interest
The authors declare that they have no conflicts of interest. | 2020-01-30T09:09:50.038Z | 2020-01-21T00:00:00.000 | {
"year": 2020,
"sha1": "bcf20f1a77474742dc8920558635c830d3ca6e5b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/crid/2020/8170646.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "91e197ad4bef29a9af7738ba6105753904c5641f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55776298 | pes2o/s2orc | v3-fos-license | Semi analytical solution of a rigid pavement under a moving load on a Kerr foundation model
This paper analyzes the dynamic response of assumedly rigid road pavement under a constant velocity of traffic loads moving on its surface. The model of the rigid road pavement is a damped rectangular orthotropic plate which is supported by an elastic Kerr foundation. Semi-analytical solutions of the dynamic deflection of an orthotropic plate, with semi rigid boundary conditions are presented by using governing differential equations. The natural frequencies and mode shapes of the system are then, solved by using the modified Bolotin method considering two transcendental equations as the results of solving the solution of two auxiliaries Levy’s plate type problems. The moving traffic loads modeled by varying the amplitudes of dynamic transverse concentrated loads harmonically. Numerical studies on the soil types, foundation stiffness models, varying constant velocities and loading frequencies are conducted to show the effects of the dynamic response behaviors of the plates. The results show that the dynamic responses of the rigid road pavement influenced significantly by the type foundation stiffness models and velocity of the moving load.
Introduction
The vibration response of rectangular orthotropic plates is an interesting subject because of its widespread applications in structural engineering and transportation engineering.In bridge analysis, different models have been studied and investigated by researchers for rigid highway and airport pavements.In most of the previous works the type of plates considered are isotropic rectangular plates which are uniform in all directions.In reality, not all plates are isotropic.Another important type of plate is the orthotropic rectangular plate, which has been used to model the dynamic response of rigid concrete pavements.According to Alisjahbana and Wangsadinata, the dynamic moving traffic load can be represented by a single concentrated harmonic loading, moving with a constant speed along the mid-side of the plate [1].It was found that a dynamic load approach will lead to a better economical solution in comparison to solutions obtained using the conventional static load approach.
Conventional methods of rigid pavement design are using the elastic Winkler foundation model which is obtained from the static analytical solutions of infinite plates rest on elastic soil assumption.These were investigated by Westergaard in 1926 [2].In this elastic Winkler foundation model, the interconnections among the soil layers are neglected, leading to limitations in the physical model of the sub-soil system [3].These limitations can be eliminated by modeling the sub-grade soil medium by using two-parameter model, providing a shear interaction between independent spring elements.
Several researchers have verified the applicability of soil medium representation by using two parameter models in static [4,5], post buckling [6,7] as well as dynamic models [1,[8][9][10].Gan and Nguyen presented the two parameter model of soil medium in the large deflection analysis of functionally graded beam [11].
Paliwal and Ghosh studied the stability of rectangular orthotropic plates on a Kerr soil foundation model subjected to the in-plane static stresses in the orthogonal directions [12].The dynamic lateral loads are not discussed in their work.The Kerr model is one of the most advantageous models.However, due to the existence of an upper spring layer, no concentrated reactions occur.Kneifati has shown that more accurate base response of the flexible plates and beams subjected to a uniform load and boundary forces was obtained by using the Kerr model compared to the Pasternak and Winkler models [13].In addition, the Kerr model moreover shows comparable results with the continuum elastic theory.Therefore, in this paper, the authors will analyze the dynamic behavior of the rigid road pavement rests on a Kerr model and subjected to a moving load.
In most researches done previously, rigid road pavements are modeled as an orthotropic plate.Furthermore, it is commonly resting on an elastic foundation model such as the Pasternak or Winkler models.However, according to Paliwal and Ghosh in 1994, the Winkler model is unable to represent the behavior of soil medium materials with a larger void ratio or stiffer clay.Additionally, the Pasternak model is only better in predicting the behavior of hard soils.On the other hand, Kerr foundation model has more advantageous models due to no concentrated reactions occur due to an addition of an upper spring layer [12].
In this study, the dynamic response of a rigid road pavement modeled as a thin orthotropic plate rests on the Kerr model is investigated.To take into account the existence of the tie bars and dowels along its edges, the boundary conditions along its edges are semi rigid condition allowing the rotation at the supports and the translational movement at the edges.These type of boundary conditions are solved by using the Modified Bolotin Method [1].There has been no previous research using this specific method to study the dynamic response of the rigid road pavement subjected to a moving load.In this paper, semi analytical solution is used to calculate the dynamic deflection and internal forces distribution of the plate subjected to loading with constant velocity.The applicability of the present method is highlighted by solving the maximum dynamic deflection of the system for different types of soil conditions and elastics foundation model in order to design better rigid road pavements.
Governing equation
In this paper, a rigid pavement resting on Kerr foundation is considered.The orthotropic plate is semi rigid along its edges.It is considered to be of uniform thickness ℎ.The dynamic transverse load acting on the orthotropic plate is (, ).Based on the work of Paliwal and Ghosh [12], the governing differential equation of the rigid pavement subjected to the lateral load is given by: where , are the flexural rigidities of the plate in the direction and the direction respectively, is the torsional rigidity of a plate and is foundation response.Because the Kerr model [14] consists of two axial springs ( and ) and shear spring layer ( ), the deflection of the plate can be given as [13]: The contact pressures under the orthotropic plate and the shearing layer are given by and , respectively, where: The shearing layer was governed by the following differential equation: Eliminating from Eq. ( 3) and ( 5), we obtain: By substituting of Eq. ( 3) into Eq.( 6) and by taking into account the moving load, the structural damping and the inertia of the orthotropic plate, the differential equation of lateral motion of an orthotropic plate on a Kerr model can be obtained as: where is the spring stiffness of the first layer of the Kerr model, is the spring stiffness of the second layer of the Kerr model, is the shear modulus of the Kerr model, is the mass density of the plate, is the structural damping ratio, and ℎ is the thickness of the plate.
In reality, loads that are caused by vehicles are often of varying amplitude.This is a because of the coarseness of rigid roadway pavement as well as the vehicle's mechanical systems.Therefore, in practical analysis, a harmonic load model is generally used.In this study, harmonically moving a single concentrated load which is traveling with a constant velocity along the middle line is considered.Considering practical use, the dynamic load transmitted to the pavement (, , ) according to Eq. ( 7) can be expressed by using the Dirac function as: where is the coefficient of the type of vehicle, is the vibration frequency of the moving load, is the length of the orthotropic plate in the direction; is the maximum amplitude of the moving load [1].According to Fig. 1, the effective shear force and bending moment at the orthotropic plate boundaries are given as: The constraint of the elastic vertical support and rotation are characterized by and , respectively.The index = 1, 2, 3, 4 stems for = 0, and = 0, , where the index notation ⊥ of the Poisson's ratio shows the perpendicular direction of .
Determination of the Eigen frequencies
The solution of the homogeneous orthotropic plate equation given by Eq. ( 7) can be determined by the method of variable separation using the Fourier series techniques.According to this method, the homogeneous solution of the problem is a product of function of space and time: where is the undamped vibration frequency of the orthotropic plate and (, ) is a spatial function determined for the modal numbers and in the and -directions.The spatial function satisfying the initial conditions of the undamped free vibration equation.The substitution of Eq. ( 11) into Eq.( 7) yields: Since (, ) in Eq. ( 12) depends only on the spatial variables and the orthotropic plate vibrates with the same temporal behavior, each side of Eq. ( 12) must be equal to the arbitrary separation constant.A relationship between the undamped vibration frequencies of the orthotropic plate and the arbitrary separation constant can be expressed as [1]: where: Based on the modified Bolotin method, the two unknowns real numbers and in Eqs. ( 13), ( 14) can be solved from the two auxiliary Levy's type problems [15].
First auxiliary Levy problem
The solution of Eq. ( 12) for the first auxiliary problem that satisfies the boundary conditions defined in Eqs. ( 9), ( 10) can be assumed as: where () is the Eigen mode of the orthotropic plate in the -direction [15].Substituting Eq. ( 15) into Eq.( 12) which results in an ordinary differential equation for () in which the solution of the characteristic equation can be found by assuming () = .By substituting () = into the characteristic equation we found the sixth order characteristic equation of which has two imaginary roots and two real double roots.The solution of the first auxiliary problem can be expressed as: Boundary conditions along the -axis permit determining the coefficients from [16,17]: where are the coefficients.When the conditions of the boundary along = 0 and = in Eqs. ( 9), (10) are substituted into Eq.( 16), the characteristic determinant of Det = = 0 leads to the existence of nontrivial solutions.After expanding resulted in the first transcendental equation in terms of and .
The second auxiliary Levy problem in the -axis can be determined analogously to the above formulations.
Mode numbers
The determinants of the first and second auxiliaries Levy problems, being transcendental in nature, have an infinite number of roots.The Mathematica software [18] was used to solve the values of and symbolically.By substituting the values of and into Eq.( 13), the Eigen frequencies of the system can be obtained.The integer parts of and represent the number of mode in the system.The mode shapes of the system are therefore given by: (, ) = () (). (18)
Determination of the non-homogeneous solution of the system
Since a fundamental set of solutions of the homogenous partial differential equation is known and given by Eq. ( 18), a non-homogeneous solution of the system can be found by replacing the unknown constant coefficients in Eq. ( 16) in the direction as well as in the direction with unknown coefficient functions.The appropriate solution for the forced response can be expressed in the form: where () and () are the mode shapes of the system, () depends only on the temporal variable and can be determined from the non-homogeneous partial differential equation of time.From the natural frequency computed by Eq. ( 13), depending on the first and second spring stiffness and the shear moduli of the Kerr foundation, the temporal equation of () can be stated in [19] as follow: where is the damping ratio of the system and is a normalization factor expressed by: The corresponding homogeneous solution of Eq. ( 20) can be written: From the stationary state initial conditions ( = 0 s), () = = = 0 can be obtained.A particular and a general solution of Eq. ( 20) may be integrated to determine the temporal response of the problem for an arbitrary applied surface load [1]: Finally, the deflection solution of the governing Eq. ( 7) subjected to an arbitrary applied dynamic surface load (, , ) for 0 ≤ ≤ and > can be expressed as For 0 ≤ ≤ : For > : In which and are the deflection and velocity at the time = , respectively and = 1 − .is the damped vibration frequency of the orthotropic plate.
Numerical examples
Using the procedure described above, a rigid rectangular orthotropic plate doweled of road pavement subjected to dynamic traffic loads as shown in Fig. 1 3 .These parameters are typical of the material and structural properties of a highway [19].The traffic load magnitude is = 80×10 3 N and = 1/2.To calculate the influence of loading velocity to the dynamic behavior of the system, varies from 50 km/hr to 300 km/hr.It is also assumed that damping ratio of the system equals = 5 %.To compare the dynamic deflections of the orthotropic plate between the Pasternak and the Kerr foundation models, the following soil parameters are used: = 9.52 MN/m 3 ; = 27.25 MN/m 3 .All the dynamic response of the system is computed at = 1.5 s.This is the condition at which the dynamic moving load is within the plate region.
Influence of the foundation types
Time history of the dynamic deflection at the center of the plate ( 2 ⁄ , 2 ⁄ ), is calculated and plotted for = 1, 2, 3, …, 5 and = 1, 2, 3, …, 4. It is found that the dynamic deflection of the system is initially high, with rapid oscillations and high amplitudes for all three types of soil conditions studied in this paper.This observation is in agreement with Gibigaye et al. in the design of pavement plates rest on a soil whose inertia is considered [17].
Fig. 2(a) shows time histories of the system under dynamic moving load for soft soil and hard soil conditions of the Kerr foundation.The moving dynamic load is set to be = 60 km/hr and load frequency is = 100 rad/s.It is observed that the rapid oscillations occur at the moment of first loading and after the oscillations become stationary for two types of soil condition.In Fig. 2(a), it is also shown that the transient domain ends at around = 0.06 s and does not depend on the soil conditions.This trend was also observed in [17].From Fig. 2(b), it is found that the maximum dynamic deflection of the system supported by the Kerr foundation is lower than the maximum dynamic deflection of the system on the Pasternak foundation model at the mid-span if calculated with the same parameters [20].This result agrees with previous research done by Kneifati [13], where it was shown that the Kerr model is more accurate than the Pasternak models for the representation of the base response.
Conclusions
The responses behaviors of rigid road pavements that are subjected to dynamic moving loads with constant velocity have been investigated.The effects of moving velocity, load frequency and elastic foundation stiffness of the Kerr model are studied.The soil model used in this work is the Kerr model; a proposed generality of the Pasternak model by the inclusion of a layer of springs on the shearing layer.Based on the orthogonality properties of the Eigen functions, the semi-analytical solution form of the dynamic displacement was obtained.In the formulation of this paper, it was assumed that the supports at the boundaries of the plate are due to the tie bars and steel dowels, providing the plate with vertical and rotational restraints.This assumption represents a realistic plate, especially for joints between the rigid pavement plates, in which rotation and vertical shear deformation are found along the joints.
From these results, it is concluded the dynamic response and resonance velocity is significantly affected by the elastic foundation stiffness.When the rigid road pavement rests on a soft soil foundation, most of the pavement is affected by the loading while the resonance load frequency is small.From the obtained results, it is also concluded that the maximum dynamic deflection of the rigid road pavement on the Kerr model decreases significantly compared to that of Pasternak model.This result shows the possible economic gain of the Kerr model when it is used for representing the base response of the rigid road pavement.
Fig. 2 ( 2 .
b) shows the time history of the system supported by the Kerr foundation model and Pasternak foundation model.To generalize the Pasternak model, the Kerr model is introduced by adding a layer of spring on the shearing layer to eliminate the concentrated reactions that occurs along the free edges of a plate structure[13].a) Two soil conditions on Kerr foundation b) Kerr and Pasternak foundations Fig.Time history of the system under dynamic moving load
Fig. 3 (Fig. 3 .
Fig.3(a) depicts the maximum dynamic deflections of the plates which are subjected to a moving load with the harmonic load frequency is = 100 rad/s and soil condition is set to the parameter values for soft and medium soil conditions of the Kerr foundation.It can be observed that the speed of the dynamic load has an effect on the maximum lateral dynamic deflection.The maximum dynamic deflection at the lower value of foundation stiffness increases until about = 240 km/hr before decreasing.It shows that resonance conditions depend both on the speed of travel and the stiffness constants of the foundation.Fig.3(b)illustrates the effects of load frequency on the maximum lateral deflection under moving load for soft soil and hard soil conditions.It can be seen that load frequency has effects on both the resonance frequency and the maximum lateral deflection.The resonance load frequency for soft soil condition is smaller than the value of resonance load frequency for the hard soil condition. | 2018-12-07T11:39:48.837Z | 2018-08-15T00:00:00.000 | {
"year": 2018,
"sha1": "4f655d38b0cad0a37d190bd6fa0b61c4f51abb1f",
"oa_license": "CCBY",
"oa_url": "https://www.jvejournals.com/article/20082/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "4f655d38b0cad0a37d190bd6fa0b61c4f51abb1f",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Engineering"
]
} |
214723836 | pes2o/s2orc | v3-fos-license | Genomic landscape of tumor-host interactions with differential prognostic and predictive connotations
An immune active cancer phenotype typified by a T helper 1 (Th-1) immune response has been associated with increased responsiveness to immunotherapy and favorable prognosis in some but not all cancer types. The reason of this differential prognostic connotation remains unknown. Through a multi-modal Pan-cancer analysis among 31 different histologies (9,282 patients), we demonstrated that the favorable prognostic connotation conferred by the presence of a Th-1 immune response was abolished in tumors displaying specific tumor-cell intrinsic attributes such as high TGF-ß signaling and low proliferation capacity. This observation was validated in the context of immune-checkpoint inhibition. WNT-ß catenin, barrier molecules, Notch, hedgehog, mismatch repair, telomerase activity, and AMPK signaling were the pathways most coherently associated with an immune silent phenotype together with mutations of driver genes including IDH1/2, FOXA2, HDAC3, PSIP1, MAP3K1, KRAS, NRAS, EGFR, FGFR3, WNT5A, and IRF7. Our findings could be used to prioritize hierarchically relevant targets for combination therapies and to refine stratification algorithms.
117
Prognostic impact of ICR classification is different between cancer types 118 RNA-seq data of samples from a total of 9,282 patients across 31 distinct solid cancer 119 types were obtained from TCGA. To classify cancer samples based on their immune 120 orientation, we performed unsupervised consensus clustering for each cancer type separately Figure 1A) 126 Galon et al., 2013;Hendrickx et al., 2017;Turan et al., 2018). Expression of these genes
134
As shown in Figure 1B, the mean expression of ICR genes, or ICR score, varies 135 between cancer types, reflecting general differences in tumor immunogenicity between 136 cancers. While brain tumors (brain lower grade glioma's (LGG) and glioblastoma multiforme 137 (GBM)) typically display low immunological signals (McGranahan et al., 2017), skin cutaneous 138 melanoma (SKCM) and head and neck squamous cell carcinoma (HNSC) display high levels 139 of immune activation (Economopoulou et al., 2016;Passarelli et al., 2017;Thorsson et al., Table 1). To explore biological differences between cancer types in which a 150 highly active immune phenotype is mostly associated with favorable survival and cancer types 151 in which this phenotype is mostly associated with decreased survival, we categorized cancer 152 types in ICR-enabled (BRCA, SKCM, UCEC, SARC, LIHC, HNSC, STAD, BLCA) and ICR-disabled (UVM, LGG, PAAD, KIRC) groups, respectively ( Figure 1C). All other cancer types 154 in which ICR did not show an association or trend were categorized as ICR-neutral. Of 155 important note, this classification was used for explorative purposes, a role of the immune 156 mediated tumor rejection cannot be precluded in ICR-neutral cancer types.
157
First, we explored whether the ICR scores and their distributions were different among 158 these defined groups of cancer types. Mean ICR score is low for most ICR-disabled (ranging 159 from 3.97 to 8.34) compared to ICR-enabled cancer types (ranging from 7.26 to 8.36) 160 (Supplementary Figure 3A). This observation is most noticeable for ICR-disabled cancer 161 types LGG and UVM. Moreover, the difference (delta) between ICR scores in ICR High 162 compared to ICR Low groups is higher in ICR-enabled cancer types (range: 2. 98-4.97) 163 compared to ) and ICR-disabled cancer types (range: 2.29-3.35) 164 (Supplementary Figure 3B). These factors could underlie, at least partially, the observed 165 divergent associations with survival.
166
To define whether tumor pathologic stage might interact with the association between 167 ICR and overall survival (OS), we fitted a Cox proportional hazards model for each group of 168 ICR-enabled, ICR-neutral and ICR-disabled cancer types (Table 1)
181
For ICR-neutral cancer types, while ICR was not associated with survival in univariate 182 analysis, multivariate analysis indeed identified a positive prognostic value of the ICR 183 classification, though less robust than observed for ICR-enabled cancer types.
216
We then proceeded to examine which tumor intrinsic attributes correlate with immune Next, we aimed to identify genomic attributes related to the ICR immune phenotypes.
236
As previously observed (Thorsson et al., 2018), mean neoantigen count of each cancer type 237 strongly correlated with mean mutation rate (Supplementary Figure 6A-B). While mean non-238 silent mutation rate was significantly higher in ICR High tumors for some cancer types, no
245
Similarly, we studied the association between genomic instabilities, or aneuploidy, and
246
ICR. Specifically, we compared the individual tumor aneuploidy scores and the ICR score 247 across cohorts. Aneuploidy score was calculated as in Taylor et al (Taylor et al., 2018). As
248
has been reported previously, we found a broad negative association between aneuploidy and 249 raw or tumor purity adjusted ICR score (Davoli et al., 2017) ( Figure 3C). Interestingly, this 250 negative association was most strongly supported in ICR-enabled cancers, with 6 cancers out
276
Interestingly MAP3K1 mutations, whose effect on ICR Low has been described in 277 breast cancer (Hendrickx et al., 2017), were also associated to ICR Low tumors pan-cancer.
278
The
282
To better compare the association between specific mutations and ICR groups within 283 individual cancer types, we calculated, for each of the identified genes, the mean ICR score
341
To better clarify this concept, we selected two of the differentially expressed pathways 343 that were of special interest. Firstly, the "Proliferation" signature was used to classify all 344 samples independent of tumor origin in "Proliferation High" and "Proliferation Low" categories, 345 defined as an ES value >median or <median of all samples, respectively. This 52-gene cluster (Caliński and Harabasz, 1974) (source function available on GitHub repository, see cancer datasheets for plots with local maximum).
445
As we were interested to compare cancer samples with a highly active immune phenotype 446 with those that have not, the cluster with the highest expression of ICR genes was designated 447 as "ICR High", while the cluster with the lowest ICR gene expression was designated "ICR
498
Differences between ICR High, Medium and Low clusters were calculated through t-test, using
505
The R package "ComplexHeatmap" was used to plot ICR score ratios between 506 mutated versus wild-type groups. For cancer type/ gene combinations with a number of 507 samples of <3 in the mutated group, ratios were not calculated (NA; grey color in plot). A ratio 508 >1 implies that the ICR score is higher in the mutated group compared with WT, while a ratio 509 <1 implies that the ICR score is higher in subset of tumors without mutation.
511
Aneuploidy 512 Aneuploidy scores for each individual cancer were taken from Taylor et al (Taylor et al., 2018).
513
Briefly, each tumor was scored for the presence of aneuploid chromosome arms after 514 accounting for tumor ploidy. Tumor aneuploidy scores for each cohort were then compared to 515 ICR scores via linear model with and without purity adjustment. Purity adjustment entailed 516 correlating ICR score and tumor purity (as estimated via ABSOLUTE) and using the residuals 517 to evaluate the post-adjustment relationship between ICR score and tumor aneuploidy. In particular we made use of the precomputed aneuploidy scores and ABSOLUTE tumor purity 519 values. Raw ICR and aneuploidy score associations were evaluated by linear model in R via 520 the lm() function for each cohort independently. Adjusted ICR and aneuploidy score 521 associations were evaluated by first modeling ICR score by tumor purity, then taking the ICR 522 score residuals and assessing the association with aneuploidy score via linear model. Cohorts 523 with model p-values below 0.01 for adjusted or unadjusted ICR score and aneuploidy, 524 regardless of the directionality of the association, were included in Figure 3C.
526
Differential GSEA and stratified survival analysis 527 Differential ES analysis between samples of ICR-enabled and those of ICR-disabled cancer 528 types was performed using t-tests, with a cut-off of FDR-adjusted p-value (i.e., q-value) < 0.05 Table 2). Tumor intrinsic pathways that were differentially enriched between 530 ICR-enabled and disabled cancer types were selected. The heatmap used for visualization of 531 these differences was generated using the adapted heatmap.3 function (source function). For 532 each of these selected pathways, samples were categorized pan-cancer as pathway-High (ES
534
were defined for each pathway "High" and pathway "Low" group separately using the survival 535 analysis methodology as described above. Pathways for which a significant association 536 between ICR and survival was present in one group, but not in the other one, were selected 537 (Supplementary Table 3). Similarly, these pathways were used to categorize samples per 538 individual cancer type in pathway-High (ES > cancer specific median ES) and pathway-Low 539 (ES < cancer specific median ES). Differences between HRs of groups in individual cancer 540 types were calculated and plotted using "ComplexHeatmap" (v1.17.1).
542
Predictive value ICR score in immune checkpoint datasets 543 ICR scores, or the mean expression of ICR genes, were compared between responders and 544 non-responders to immune checkpoint therapy. For the Chen et al dataset, performed on 545 Nanostring platform, scores were calculated using the 17 ICR genes available in the 546 nanostring panel. Difference in mean ICR score between groups was tested using two-side t-547 test (cutoff <0.95) (Fig 7A). For datasets, GSE78220 (Riaz et al., 2017)
565
In our systematic analysis we showed that, across and within different tumors, the 566 coordinated overexpression of ICR identifies a microenvironment polarized toward a Th-567 1/cytotoxic response, which was then used to define the hot/immune active tumors.
568
In tumor types with medium/high mutational burden, the mutational or neoantigenic 569 load tended to be higher in hot (ICR high) vs cold (ICR low) tumors while this association was 570 not observed within cancer types with overall low mutational burden. By adding granularity to 571 previous observations that described an overall weak correlation between immunologic 572 correlates of anti-tumor immune response and mutational load (Danaher et al., 2018;Ock et 573 al., 2017;Rooney et al., 2015;Spranger et al., 2016;Thorsson et al., 2018), we demonstrated 574 here that the differences in term of mutational load was especially evident in tumors types 575 known to be constituted by a significant proportion of microsatellite instable cases, such as 576 COAD, STAD and UCEC. It is likely that, in hypermutated tumors, the excess of neoantigens 577 plays a major role in the immune recognition, while, in the other cases, additional mechanisms, 578 such as cell-intrinsic features, play a major role in shaping the anti-tumor immune response 579 (Hendrickx et al., 2017). Overall, a high mutational/neoantigen load was neither sufficient nor 580 necessary for the displaying of an active immune microenvironment.
581
When the ICR score was intersected with the enrichment of oncogenic signals as 582 predicted by the transcriptional data, interesting associations emerged. Although some 583 differences in terms of the degree of the correlation were observed across cancers, few tumor- (Salerno et al., 2016). Their expression was 591 associated with a T-cell excluded phenotype in melanoma and ovarian cancer, and here we extended our previous observation across multiple tumors (Salerno et al., 2016). The cell-593 intrinsic WNT-ß catenin activation impairs CCL4-mediated recruitment of Batf3 dendritic cells, 594 followed by absence of CXCL10 mediated T-cell recruitment, and was described initially 595 associated with T-cell exclusion in melanoma, and recently, in other tumor types (Luke et al.,
596
2019; . The efficiency of our approach in capturing previously described 597 oncogenic pathways indicates the robustness of the analysis. At the same time, our integrative 598 pipeline unveiled additional relevant pathways: telomere extension by telomerase and 599 mismatch repair, Notch, Hedgehog and AMPK signaling. Our findings suggest that the lack of 600 expression of transcripts involved with mismatch repair (in addition to their genetic integrity 601 (Barnetson et al., 2006)) might influence immunogenicity. Telomere dysfunctions result in 602 various disease, including cancer and inflammatory disease (Calado and Young, 2012). To 603 our knowledge, this is the first time that telomerase activity has been linked to differential 604 intratumor immune response. The Notch pathway can regulate several target genes controlled 605 by the NFκB, TGF-β, mTORC2, PI3K, and HIF1α pathways (Janghorban et al., 2018) and is 606 involved in the induction of cancer stem cells, but has not been described to be associated 607 with differential intratumoral immune response so far. As for the Hedgehog pathway, in breast 608 cancer models, inhibition of this signaling induces a marked reduction in immune-suppressive
623
As for somatic mutations, the top ten genes associated with the immune silent 624 phenotype include IDH1, IDH2,FOXA2,NSD1,PSIP1,HDAC3,ZNF814,MAP3K1,FRG1 625 and SOX17. Findings of IDH1 and NSD1 are consistent with the report of Thorsson et al 626 (Thorsson et al., 2018), in which these have been associated with decreased leukocyte 627 infiltration, and are complemented here by additional identification of IDH2. Interestingly, 628 MAP3K1 mutations were previously associated with low ICR in breast cancer in our previous work (Bedognetti et al., 2017;Hendrickx et al., 2017). Remarkably, mutations of other genes 630 of the RAS/MAPK pathways such as FGFR3 (previously associated with T-cell exclusion in 631 bladder cancer (Sweis et al., 2016)), EGFR, NRAS, and KRAS were associated with a low 632 ICR score, substantiating their potential role in mediating immune exclusion. FOXA2 is 633 involved in both neoplastic transformation and epithelial-mesenchymal transition (Wang et al., 634 2018, p. 2) and T helper differentiation (Chen et al., 2010)
648
As for genomic instability, tumors with high aneuploidy are associated with decreased 649 ICR score in a major subset of cancer types (Davoli et al., 2017). This observation is also in 650 agreement with negative association of a chromosome-instable type with an immune 651 signature that predicts response to immunotherapy with MAGE-A3 antigen as well as 652 response to anti-CTLA-4 treatment in melanoma (Ock et al., 2017). The only exceptions we 653 found were brain tumors LGG and GBM in which a positive association between aneuploidy 654 and ICR score was detected. In LGG tumors, however, ICR scores positively correlate with 655 tumor grade (Supplementary Figure 4), and it is possible that the observed positive 656 correlation between aneuploidy and ICR is actually driven by the higher genomic instability 657 characterizing the more advanced tumors.
658
To compare cancer types based on the prognostic value of ICR, we categorized them 659 into two groups: one for which ICR High was associated with increased OS and one for which (Chifman et al., 2016;Thorsson et al., 2018) or immunohistochemistry (Fridman et al., 2012) 669 but never explained.
670
The first notable difference we observed between ICR-enabled and -disabled cancer 671 types was the overall lower ICR value in the disabled cancer cohorts. In particular for UVM 672 and LGG, this low ICR could be a partial explanation for the lack of positive prognostic value types are ordered by mean nonsilent mutation count per cancer. Nonsilent mutation rate and 1178 predicted neoantigen load were obtained from Thorsson et al (Thorsson et al., 2018). C.
1179
Correlation between aneuploidy score and raw/purity adjusted ICR score for all cohorts with 1180 significant relationships between ICR and aneuploidy. | 2019-04-03T13:10:03.551Z | 2019-02-12T00:00:00.000 | {
"year": 2019,
"sha1": "8894f6f0154685c1be348048200e98e6878a69fb",
"oa_license": "CCBYNC",
"oa_url": "https://jitc.bmj.com/content/jitc/8/1/e000617.full.pdf",
"oa_status": "GREEN",
"pdf_src": "BioRxiv",
"pdf_hash": "45d9fd3b3980f0994c486c72b62f054ad3d623e2",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Biology"
]
} |
3434656 | pes2o/s2orc | v3-fos-license | Effect of supplemental phytase and xylanase in wheat-based diets on prececal phosphorus digestibility and phytate degradation in young turkeys
ABSTRACT This study aimed to investigate the effect of phytase and a combination of phytase and xylanase on the prececal phosphorus digestibility (pcdP) of wheat-based diets in turkeys. A low-P basal diet (BD) based on cornstarch and soybean meal, and 2 diets containing 43% of different wheat genotypes (genotype diets GD6 or GD7) were fed to turkeys from 20 to 27 d of age. Diets were fed either without enzyme supplementation or supplemented with phytase (500 FTU/kg) or a combination of phytase and xylanase (16,000 BXU/kg). At 27 d of age, digesta were sampled from the lower ileum of animals to determine pcdP and pc myo-inositol 1,2,3,4,5,6-hexakis (dihydrogen phosphate) (InsP6) disappearance, and to analyze the concentrations of lower inositol phosphate isomers. Similar pcdP was observed in non-supplemented BD and GD (∼36%). Phytase alone increased the pcdP in all diets by 8 to 12%, but a beneficial effect of xylanase was found only for BD. Similar results were found for pc InsP6 disappearance, although xylanase addition compared to phytase alone decreased pc InsP6 disappearance in GD7 compared to phytase alone. Animals fed GD7 performed better than those fed GD6; however, these differences could not be linked to the pcdP. The pattern of lower inositol phosphates in digesta also changed with enzyme supplementation, resulting in lower proportions of InsP5 and higher proportions of InsP4. Phytase alone decreased Ins(1,2,3,4,6)P5 but increased D-Ins(1,2,3,4,5)P5 and D-Ins(1,2,5,6)P4 concentrations. An additional increase in D-Ins(1,2,3,4,5)P5 and D-Ins(1,2,5,6)P4 concentrations was achieved with xylanase, although for the former isomer, this was observed only with GD. These results indicate that enzyme supplementation alters the pc degradation of InsP6, and that combining both enzymes had a minor additional effect on the pcdP from wheat-based diets when compared to phytase alone.
INTRODUCTION
Phosphorus (P) is an element with high relevance for poultry feeding. In feeds of plant origin, the majority of P is bound as phytate, the salt of phytic acid [(myo-inositol 1,2,3,4,5,6-hexakis dihydrogen phosphate or InsP 6 ). InsP 6 -P has to be cleaved in the gastrointestinal tract by phytases and other phosphatases prior to absorption, but insufficient secretion of endogenous enzymes in nonruminants limits phytate hydrolysis.
Wheat is an important grain used in poultry diets (Coskuntuna et al., 2008). Among cereals, wheat contains a moderate concentration of total P and InsP 6 -P, while its intrinsic phytase activity is relatively high . Nevertheless, a substantial fraction of P remains non-digestible in wheatbased diets for poultry (van der Klis et al., 1995;Juin et al., 2001;Woyengo et al., 2008;Selle et al., 2009;Zeller et al., 2015a). Therefore, diets are usually supplemented with exogenous phytases of microbial origin. These exhibit higher activity within the gastrointestinal tract than intrinsic phytases of plants, because of their broader optimum pH range and higher resistance to proteases (Woyengo and Nyachoti, 2011). The beneficial effect of supplemental phytase on P digestibility from wheat-based diets for broilers has been confirmed in previous studies (Kiieskinen et al., 1994;Zyla et al., 2000;Wu et al., 2003;Afsharmanesh et al., 2008;Woyengo et al., 2008;Selle et al., 2009;Zeller et al., 2015a). Little is known about the effect of phytase on Table 1. Concentration of total P, inositol phosphate P, total arabinoxylans, phytase activity, and extract viscosity in wheat genotypes used in the present work. 1 Genotypes represent wheat genotypes no. 6 and 7 used in the "GrainUp" project . 2 Calculated at an assumed shear rate of 380 s -1 .
the digestibility of P from wheat-based diets in turkeys, although the P availability from different P sources seems to differ between poultry species (Rodehutscord and Dieckmann, 2005). Juin et al. (2001) reported a 15% increase in P retention in young turkeys following the addition of 500 U phytase to a low-P wheat-soybean meal (SBM)-based diet. However, the maximum P retention achieved was 61%, indicating that a high proportion of phytate-bound P remained undegraded. Similar results were obtained with broilers in the aforementioned studies of Wu et al. (2003), Afsharmanesh et al. (2008) and Selle et al. (2009). Thus, several attempts have been made to further improve phytate hydrolysis. One approach is the use of non-starch-polysaccharide (NSP)-degrading enzymes, such as xylanases (Adeola and Cowieson, 2011;Woyengo and Nyachoti, 2011), which hydrolyze arabinoxylans. These indigestible NSP constitute the major components of cell walls in the aleurone layer (Bacic and Stone, 1981), which is the main site of phytate storage in wheat (O'Dell, 1972). Xylanases may increase the permeability of the aleurone layer (Parkkonen et al., 1997), and those with an affinity for soluble and insoluble arabinoxylans can decrease digesta viscosity (Adeola and Cowieson, 2011). This may facilitate the accessibility of phytase to phytate (Adeola and Cowieson, 2011;Woyengo and Nyachoti, 2011) as previously indicated in vitro (Zyla et al., 1999). In studies using growing broilers, Selle et al. (2009) found that combining phytase and xylanase had a positive effect on feed efficiency and prececal (pc) digestibility of amino acids, nitrogen, and energy from a wheat-based diet. However, compared with phytase alone, no additionally beneficial effect of xylanase was detected on the pc digestibility of P (pcdP) in their study. Further studies also have reported that no synergistic interaction exists between phytase and xylanase on the P digestibility (Peng et al., 2003;Juanpere et al., 2005;Olukosi and Adeola, 2008;Woyengo et al., 2008;Zeller et al., 2015a) or pc InsP 6 degradation (Kühn et al., 2017) in broilers fed wheatbased diets. Nevertheless, Zeller et al. (2015a) showed that InsP 6 and most of the detected lower inositol phosphates (InsPs) tended to be less concentrated in the ileal digesta of broilers when both enzymes were added in combination than with phytase alone. However, to the best of our knowledge, no studies investigating the effect of xylanase alone or in combination with phytase on the pcdP in turkeys are available.
Therefore, the objective of the research reported herein was to examine the effect of phytase, alone or in combination with xylanase, on the pcdP, the pc InsP 6 disappearance, and the appearance of InsPs in the lower ileum of young turkeys fed wheat-based diets. As synergism between phytase and xylanase depends on factors such as dietary NSP concentration, NSP composition, and the intrinsic phytase activity (Woyengo and Nyachoti, 2011), we used 2 different wheat genotypes, which differed in their physical and chemical characteristics.
MATERIALS AND METHODS
The 2 wheat genotypes used in this study represent genotypes no. 6 and 7 as denoted and characterized by Rodehutscord et al. (2016). These genotypes were selected based on their P and arabinoxylan content, as well as their pcdP as demonstrated in a previous P digestibility study with broilers (Witzig et al., 2018). Whereas wheat genotype no. 6 had a low P and arabinoxylan content (Table 1) and showed a low pcdP in broilers, genotype no. 7 contained a relatively high P and arabinoxylan content and had a higher pcdP in broilers than did 7 other genotypes (Witzig et al., 2018).
Experimental Diets
The experiment involved testing of 9 treatments. Three diets, a basal diet (BD) and 2 genotype diets (GD), were tested at 3 different enzyme combinations. While this is a 3 × 3 factorial arrangement of treatments, in the model, the first factor (diet) is split into the factors diet type and diet. The latter has the advantage to distinguish between differences between the 2 genotypes and differences between BD and GD.
The BD, based on cornstarch and SBM, was formulated to contain adequate levels of all nutrients according to the recommendations of the Gesellschaft für Ernährungsphysiologie (GfE, 2004), with the exception of P and calcium (Ca) ( Table 2). To formulate the 2 GD, 43.4% of genotype no. 6 or 7 was included at the expense of cornstarch, making the wheat genotype the only source of P variation in these diets ( Table 2). The wheat was ground to pass through a 2-mm sieve screen before being added to the diet. To maintain a constant Ca: P ratio in all diets, additional limestone was added at the expense of cornstarch. Titanium dioxide was used as an indigestible marker (0.5%). Diets were fed to animals with or without supplementation with an Escherichia coli-derived thermotolerant 6phytase (Phy, Quantum TM Blue, intended activity 500 FTU/kg feed), alone or in combination with a commercial Trichoderma reesei-derived thermostable endo-1,4-β-xylanase (X, Econase R XT). The xylanase was supplemented to achieve an activity of 16,000 BXU/kg feed, which is the recommendation of the supplier for wheat-based diets. Both enzymes were provided by AB Vista, Marlborough, United Kingdom. Diets were mixed in the certified feed mill facilities of Hohenheim University's Agricultural Experiment Station, location Lindenhöfe in Eningen, Germany, and pelleted through a 3-mm die without the use of steam. The pellet temperature immediately measured after pelleting ranged between 55 and 78 • C. Representative samples of the 9 experimental diets were taken and pulverized using a laboratory disc mill (Siebtechnik GmbH, Mühlheim an der Ruhr, Germany) and stored at 4 • C until chemical analysis.
The analyzed activity of phytase and xylanase was very low in non-supplemented diets, and ranged from 376 to 521 FTU and from 14,400 to 18,900 BXU/kg feed, respectively, in enzyme-supplemented diets ( Table 3). The concentrations of total P and InsP 6 -P in the diets ranged from 3.24 to 4.94, and from 1.58 to 2.42 g/kg DM, respectively. The average Ca: P ratio was (SD) 1.5 (0.05).
Birds, Animal Management, and Sampling Procedure
The animal experiment was performed at the Agricultural Experiment Station of Hohenheim University, location Lindenhöfe in Eningen (Germany), in accordance with German Animal Welfare legislation. All procedures regarding animal handling and treatments were approved by the Animal Welfare Commissioner of the University.
The room temperature was set at 36 • C on d 1 and 2 before being gradually reduced to 21 • C until d 21.
Light intensity was 100 lx. During the first 2 d, light was provided for 24 h; thereafter, the provision of light was reduced to 18 h per day.
Turkeys were weighted at 20 d of age on a pen basis and randomly assigned to one of 9 dietary treatments using a non-resolvable incomplete block design within the animal house. The design included 18 incomplete blocks each with 4 pens, as 4 pens formed a row within the animal house. All treatments were tested in 8 (n = 8) pens and therefore in 8 out of 18 blocks. Throughout the experiment, animals had free access to feed and tap water. The experimental diets were fed to the animals for 7 d and ADFI as well as ADG were recorded. On d 27, birds were stunned with a mixture of 35% CO 2 , 35% N 2 , and 30% O 2 , and euthanized via CO 2 asphyxiation. The abdominal cavity of animals was opened immediately, the digestive tract removed, and the ileum (section between Meckel's diverticulum and 2 cm anterior to the ileoceco-colonic junction) was dissected. According to the method described for determination of the pcdP by the World's Poultry Science Association (WPSA, 2013), the digesta of the distal half of the ileum were gently flushed out with double-distilled water (4 • C) and pooled for all birds on a pen basis. Samples were immediately frozen at −18 • C, freeze-dried (Type Delta Table 3. Analyzed phytase and xylanase activity, and concentrations of Ti, Ca, total P, InsP 6 -P, and lower (D-) 1-24, Martin Christ Gefriertrocknungsanlagen GmbH, Osterode, Germany), ground to pass through a 0.12mm sieve screen at a speed of 6,000 rpm using an ultracentrifugal mill (Type: ZM 200, Retsch GmbH, Haan, Germany), and stored at 4 • C until chemical analyses.
Chemical Analyses
The DM content of feed and digesta samples was analyzed according to the official German methods (Verband Deutscher Landwirtschaftlicher Untersuchungsund Forschungsanstalten [VDLUFA], 1976; Method 3.1). Concentrations of Ca, P, and Ti in feed and digesta samples were analyzed using an inductively coupled plasma optical emission spectrometer following sulfuric and nitric acid wet digestion, with specifications described by Zeller et al. (2015b). Concentrations of InsP 6 and lower InsPs in the diets and digesta samples were analyzed following EDTA extraction at pH 10 using high-performance ion chromatography as described by Zeller et al. (2015c). Phytase and xylanase activities of the experimental diets were measured as described in detail by Zeller et al. (2015a). In brief, the intrinsic phytase activity of diets without enzyme supplementation was analyzed by the direct incubation method (quantification of liberated inorganic P), as described by Greiner and Egli (2003). Activity of the supplemented phytase and xylanase product in diets was assayed according to the manufacturer's protocol (U at pH 4.5 and 60 • C transferred to commonly used FTU by a validated factor and BXU at pH 5.3 and 50 • C). One FTU is the amount of enzyme that liberates 1 μmol of inorganic phosphate per min from a sodium phytate substrate at pH 5.5 and 37 • C. One unit of xylanase (BXU) is the amount of enzyme that liberates 1 nmol reducing sugars as xylose from birch xylan per s at pH 5,3 and 50 • C.
Calculations and Statistics
The ADG, ADFI, and feed-to-gain ratio were determined on a pen basis and adjusted for mortality, which was recorded daily. The pcdP, pcdCa, and pc disappearance of InsP 6 (y) were calculated on a pen basis according to the following equation: InsP 6 or P or Ca in the digesta (g/kg DM) InsP 6 or P or Ca in the diet (g/kg DM) Statistical evaluation of the data was carried out with the software package SAS for Windows (Version 9.3, SAS Institute, Cary, NC). A mixed models approach (procedure PROC MIXED) was used, considering the effects of treatment factors "diet type" (BD or GD), "diet" (BD, GD 6 , or GD 7 ), "enzyme" (0, phytase, phytase/xylanase), and interaction between these factors, as fixed and "block" effects as random. The model can be described by the following equation: Where: y ijkl = lth observation of the ith diet type, jth diet, and kth inclusion level of enzymes, μ = general mean, α i = effect of the ith diet type, β ij = effect of the jth diet within ith diet type, γ k = effect of the kth inclusion level of enzymes, (αγ) jk = interaction effect between the ith diet type and kth inclusion level of enzymes, (βγ) ijk = interaction effect of the jth diet and the kth inclusion level of enzymes within the diet type, b l = effect of the lth incomplete block, and e ijkl = error term associated with y ijkl . Prior to statistical analyses, data were graphically checked for normal distribution and variance homogeneity, and if necessary 1 BD = basal diet; GD 6 = genotype diet supplemented with wheat genotype no. 6 as used in the "GrainUp" project; GD 7 = genotype diet supplemented with wheat genotype no. 7 as used in the "GrainUp" project ; diets remained non-supplemented (0) or were supplemented either with phytase (Phy) or with phytase and xylanase (PhyX). Data are given as LS means or back-transformed LS means; n = 8 (BD0, BDPhyX: InsP 6 disappearance n = 7) pens per treatment with 15 birds per pen.
2 P-value of an F test testing for difference between levels of the according effect. 3 DT = Diet type = BD vs. GD. 4 D = Diet. 5 E = Enzyme. 6 n.d. = not detectable (InsP isomer was not detectable in the majority of samples). 7 Estimates within a row not sharing a common superscript differ significantly (multiple t tests in the case of interaction), P ≤ 0.05. * Lower cases for GD 6 and GD 7 are identical, as both diets did not differ. a-f Different superscripts indicate differences of LS means between BD and GD in the case of an interaction detected for DT and E.
(percentage data), subjected to arcsine square-root transformation. Least square means from the analysis were back-transformed for presentation only. If the Ftest was significant, the multiple t test was used for treatment comparison. The level of significance was set at α = 0.05.
RESULTS
Diet type significantly affected all response traits (Tables 4 and 5). As indicated by the increase in ADG and the decrease in the feed: gain ratio, birds fed the GD performed better than those fed the BD. The pcdCa of GD was, on average, 38% and lower than that of BD (45%). Enzyme supplementation had a significant effect on all traits, except on ADFI (p = 0.056). The BW at 27 d of age, the ADG, and pcdCa were increased, and the feed: gain ratio was decreased following the addition of phytase to the diets. However, additional supplementation with xylanase did not further increase the performance of turkeys or the pcdCa of diets. The pcdP of diets was affected by an interaction between diet and enzyme supplementation. Thus, in birds fed the BD, phytase alone increased the pcdP of diets from 36 to 48%, while a further increase of 5% was observed with additional xylanase supplementation. In GD 6 and GD 7 , supplementation with phytase also increased the pcdP from 35 and 36% to 45 and 44%, respectively, but no further increase was achieved with the addition of xylanase.
The significant effect of diet within diet type was restricted to the ADG and the feed: gain ratio of animals. The results indicated a significantly higher ADG, as well as a lower feed: gain ratio, in animals fed GD 7 than in those fed GD 6 . Table 5 shows the concentrations of the different InsPs detected in digesta samples of the lower ileum and the pc InsP 6 disappearance in turkeys. Concentrations of InsP 6 in digesta samples were not affected by diet, or by interactions between diet and enzyme supplementation; however, significant interactions were observed between diet type and enzymes. Irrespective of diet type, enzyme supplementation decreased the concentration of InsP 6 in the ileal digesta. However, in animals fed the BD, the lowest InsP 6 concentration was found following supplementation with both enzymes, whereas for GD, the lowest InsP 6 concentrations were achieved with phytase alone. Moreover, the concentrations of InsP 6 in birds fed the BD were lower than those in animals fed the GD. The pc InsP 6 disappearance was significantly affected by interaction effects between diet and enzymes. Phytase supplementation increased the pc InsP 6 disappearance irrespective of diet type, but a further increase with xylanase was achieved only for the BD. In animals fed GD, xylanase reduced the pc InsP 6 disappearance, and with GD 6 and GD 7 , a numerically lower and significantly lower pc InsP 6 disappearance was observed, respectively, than with phytase alone.
The concentration of D-Ins(1,2,4,5,6)P 5 was affected only by diet type, with lower values found for BD than for GD. Concentrations of D-Ins(1,2,3,4,5)P 5 and Ins(1,2,3,4,6)P 5 were also lower in BD than with GD; however, these InsP 5 isomers also were affected by an interaction between diet type and enzymes. In digesta samples of animals fed BD, supplementation with phytase alone or combined with xylanase led to a similar increase in the concentration of D-Ins(1,2,3,4,5)P 5 . In birds fed GD with xylanase, there was an additional increase in the concentrations of this isomer. In contrast, concentrations of Ins(1,2,3,4,6)P 5 in digesta samples decreased with enzyme supplementation. Upon feeding with BD, the concentration of Ins(1,2,3,4,6)P 5 decreased below the limit of detection when both enzymes were supplemented, whereas with GD, the concentrations did not differ between enzyme treatments. The InsP 5 isomer D-Ins(1,3,4,5,6)P 5 could not be detected in digesta samples.
The InsP 4 isomer D-Ins(1,2,3,4)P 4 could be detected only in digesta samples of animals fed the enzymesupplemented GD, with no obvious differences between GD 6 and GD 7 , or between enzyme treatments. Concentrations of D-Ins(1,2,5,6)P 4 were lower in animals fed BD than those fed GD, and higher with enzyme supplementation (phytase/xylanase>phytase>0), thus indicating an effect of diet type and enzymes, whereas no interaction effects were observed.
The detection of InsP 3 isomers was restricted to digesta samples from animals fed GD 6 and GD 7 supplemented with phytase and xylanase. One or more of the InsP 3 isomers D-Ins(1,2,6)P 3 , D-Ins(1,4,5)P 3 , and D-Ins(2,4,5)P 3 were found at concentrations of 0.16 and 0.15 μmol/g DM in samples obtained from treatments GD 6 and GD 7 , respectively. As in most samples, InsP 3 isomers were not detected; therefore, these values were not used for subsequent statistical evaluation.
Disappearance of InsP 6 and Prececal Digestibility of P and Ca in Response to Wheat Genotypes and Supplemented Enzymes
Although the intrinsic phytase activity in GD 6 was higher than that in GD 7 , there was no difference in the pc InsP 6 disappearance or pcdP. In studies with broilers, diets containing 20 or 40% of wheat genotype no. 7 exhibited an even higher pcdP than those including genotype no. 6 (Witzig et al., 2018). These results confirm those of former studies on broilers, which indicated a minor role of the intrinsic phytase activity in wheat-based diets on pcdP and pc InsP 6 disappearance (Shastak et al., 2014;Zeller et al., 2015a). Moreover, differences in the total of arabinoxylan and NSP contents, or in extract viscosity between both genotypes, seemed to be of minor importance in the present study.
In contrast, former studies on broilers have reported a reduced pcdP with increased intraluminal viscosity induced by feeding high-rather than low-viscosity wheat cultivars (van der Klis et al., 1995). However, those authors used wheat varieties that differed more in extract viscosity than the wheat genotypes used in the present study.
In turkeys, beneficial effects of phytase supplementation on P digestibility often have been reported (Lescoat et al., 2005;Koz lowski et al., 2010). However, in most studies, fungal phytases were used, whereas only a few studies have used E. coli phytases (Applegate et al., 2003;Koz lowski et al., 2010;Adebiyi and Olukosi, 2015;Tatara et al., 2015). Our findings confirm those of Applegate et al. (2003), who reported 9% higher P retention in 3-week-old turkeys fed a corn-SBM-based diet supplemented with 500 FTU/kg of an E. coli phytase than in birds fed a non-supplemented diet. Tatara et al. (2015) noted positive effects of an E. coli phytase (500 FTU/kg) added to a diet containing corn, SBM, and wheat, on skeletal properties in 16-weekold turkeys. Under similar conditions, Koz lowski et al. (2010) achieved an 8% increase in the pcdP with 500 FTU and a significant increase of 16% with 1,000 FTU. However, in young turkeys fed semi-synthetic diets containing 20 to 60% wheat distillers' dried grains with soluble, 1,000 FTU of E. coli phytase did not affect the pcdP (%) (Adebiyi and Olukosi, 2015). Those authors explained the lack of a phytase effect by the low phytate P content of the diets.
The 15% increase in pc InsP 6 disappearance in GD with phytase addition is consistent with previous reports on broiler chickens fed wheat-based diets supplemented with the same phytase product (Zeller et al., 2015a). However, in BD, in which SBM was the main source of P, a smaller increase in pc InsP 6 disappearance was achieved with phytase than in GD, while the opposite was found for the pcdP (%). As InsP 6 -P: total P ratios were similar in all diets, this indicates a more efficient utilization of P from lower InsPs in SBM, than from wheat following the addition of phytase. The opposite was observed for non-supplemented diets; despite lower pc InsP 6 disappearance in GD, the pcdP (%) did not differ between BD and GD. This demonstrates that the pcdP is not necessarily correlated to the pc InsP 6 disappearance, probably due to differences in the degradation of lower InsPs.
The decreased pc InsP 6 disappearance in the nonsupplemented GD with increased concentrations of InsP 6 underlines the limited capacity of animals to hydrolyze InsP 6 , as was previously shown in broilers fed corn-SBM-based diets and as we also observed in turkeys (unpublished). Moreover, these differences between BD and GD confirm that intrinsic phytase activity in wheat has only a minor role with respect to the degradation of InsP 6 .
In addition to phytase, the response of turkeys to xylanase addition differed between animals fed BD and GD. Beneficial effects of xylanase on pcdP and pc InsP 6 disappearance were restricted to animals fed BD with SBM as the main P source. SBM contains a much higher NSP content than wheat, but arabinoxylans are less abundant (Choct, 2015). While xylanase was shown to increase the release of total sugars from SBM in vitro (Narasimha et al., 2013), this effect may depend on the specific enzyme used. For the xylanase used in the present work, xylose was not detected to be a relevant degradation product (AB Vista, personal communication). This suggests an alternate mode of action of xylanases is likely playing a role such as prebiotic release resulting in changes in microbial metabolites (Singh et al 2012;Lee et al., 2017). However, because the enzyme was fed for only a relatively short time period, it is unclear whether one of the main xylanase effects-the production of arabinoxylooligosaccharides-in the digestive tract (Courtin et al., 2008) could have evolved microbial volatile fatty acid production. Such effects seem to need a longer application period to change the microbiome and by this increase in fatty acid production (Lee et al., 2017). Woyengo and Nyachoti (2011) noted that xylanase can interact synergistically with phytase in wheat-based diets for poultry only if the wheat has a NSP concentration higher than 10%. The genotypes used in the present study contained NSP concentrations close to 10% (9.6 and 10.5%), which under the given feeding conditions, were probably not sufficient for the animals to achieve a beneficial response to xylanase. Instead, the pc disappearance of InsP 6 decreased somewhat with the addition of xylanase in GD. The fact that the xylanase effect on pc InsP 6 disappearance was greater in the BD diet suggests that the NSP substrate content is not the only relevant factor in explaining xylanase effects. Zeller et al. (2015a) also reported no benefi-cial effect of xylanase when added in combination with phytase to wheat-based diets, on pcdP or pc InsP 6 disappearance in broilers. Those authors hypothesized that the accessibility of the remaining phytate would either not be restricted by arabinoxylans, or that other structures or the short retention time would not permit the sufficient degradation of the thick cell walls of the aleurone layer in wheat. Moreover, xylanase inhibitors in wheat are suspected to negatively affect the performance of exogenous xylanase (Smeets et al., 2014). However, as not all exogenous xylanases are inhibited, it is difficult to assess their role in the present study.
The results regarding the pcdCa are consistent with those of previous studies in turkeys, in which the pcdCa significantly increased in diets supplemented with 500 FTU/kg feed of an E. coli phytase (Koz lowski et al., 2010). The positive effect of phytase on the pcdCa in nonruminants is well known, and may be explained by reduced formation of insoluble Ca-phytate complexes in the small intestine due to lower proportions of InsP 3-6 entering this section (Adeola and Cowieson, 2011) or by the up-regulated absorption of Ca in response to increased P availability. This also may explain the higher numerical increase in the pcdCa with xylanase for BD than for GD. Moreover, differences in the pcdCa between BD compared to GD seems to be a result of the differences in the Ca level of those diet types.
Regardless of the diet used or the enzyme supplementation, the turkeys in the present study showed much lower pc InsP 6 disappearance (29 vs. 69%) and pcdP (36 vs. 55%) than did broiler chickens fed low-P diets based on wheat (Zeller et al., 2015a). In broiler studies using the same wheat genotypes fed to turkeys in the present study (Witzig et al., 2018), the pcdP of the respective GD was also 11 to 16% higher than that observed for the non-supplemented GD 6 and GD 7 in the present study. These observations are consistent with those from comparative studies on P retention and pcdP from low-P corn-based diets in 3-to 4-week-old broiler chickens and turkeys (Rodehutscord and Dieckmann 2005;Adebiyi and Olukosi, 2015). Reasons for the different capacity of pc InsP 6 hydrolysis and P digestion in young turkeys and broilers may include differences in the maturity of the small intestine (Adebiyi and Olukosi, 2015), endogenous P loss, pH along the gastrointestinal tract, and the passage rate (Rodehutscord and Dieckmann, 2005;Adebiyi and Olukosi, 2015).
As expected, supplementation with E. coli 6-phytase, which initiates dephosphorylation at the D-6 (L-4) position (Greiner et al., 2000), resulted in an increased concentration of D-Ins(1,2,3,4,5)P 5 in digesta samples. The additional main route of InsP 5 degradation by E. coli phytase is via D-Ins(2,3,4,5)P 4 . The latter may be co-eluted with D-Ins(1,2,5,6)P 4 , the concentration of which also was found to be increased with phytase supplementation. D-InsP(1,2,3,4)P 4 is a minor product of D-Ins(1,2,3,4,5)P 5 hydrolysis by E.coli phytase (Greiner et al., 2000), thus explaining the slight increase in its concentration in GD with phytase supplementation. These results confirm those reported from in vitro studies with wheat-or corn-based diets, in which the same phytase product was used (Sommerfeld et al., 2017). Similar effects of this E. coli phytase on InsP concentrations have been observed in digesta samples from the duodenum/jejunum section (Zeller et al., 2015a) or the lower ileum (Zeller et al., 2015b(Zeller et al., , 2015c) of broilers fed wheat or corn and SBM based diets.
Compared with phytase alone, supplementation with xylanase increased the InsP 6 concentration in digesta samples of GD. Data on lower InsPs indicated an accumulation of D-Ins(1,2,3,4,5)P 5 and a less rapid hydrolysis of InsP 5 to InsP 4 with GD than with BD, for which beneficial effects of xylanase were observed. Although the supplemented xylanase product possessed hydrolytic activity against insoluble and soluble arabinoxylans, it is possible that the release of high levels of soluble from insoluble NSP in wheat by xylanase increased the viscosity of digesta, and thus reduced the efficiency of phytase in GD. However, despite the decrease in pc InsP 6 disappearance, xylanase did not negatively affect the pcdP in GD, which contained more NSP substrate. Thus, an increase in digesta viscosity seems unlikely. The concentrations of InsP 6 and lower InsPs did not differ in the lower ileum of broiler chickens fed wheat-based diets supplemented either with phytase alone or with a combination of phytase and xylanase (Zeller et al., 2015a). Thus, differences in the pc hydrolysis of InsP 6 seem to exist between broilers and young turkeys when xylanase is added to wheat-based diets in combination with phytase.
To conclude, the results of the present study confirm the positive effect of supplementing wheat-based diets with an E. coli phytase on the pcdP in turkeys, as previously reported with fungal phytases by Juin et al. (2001). For the first time, these data report on the appearance of lower InsPs in the ileal digesta of turkeys fed wheat-based diets. These results revealed similar effects of E. coli phytase on the pattern of InsPs in the lower ileum, as previously reported for broilers; however, in combination with xylanase, a different response was observed in turkeys. Synergistic effects of both enzymes were restricted to the pc degradation of InsP 6 and pcdP of the cornstarch-SBM-based BD, and were not found for wheat-based diets. Thus, synergism between the enzymes seems to depend on the composition of the diet. The wheat genotype significantly affected animal performance, but the differences were not linked with the pcdP, pcdCa, or pc InsP 6 disappearance. Nevertheless, these results need to be considered in the context of the relatively short application period of the treatments employed. Moreover, our data suggest that intrinsic phytase activity in wheat is of only minor relevance to pcdP and pc InsP 6 degradation in turkeys.
ACKNOWLEDGMENTS
The project was supported by funds from the Federal Ministry of Food, Agriculture, and Consumer Protection (BMELV) based on a decision of the Parliament of the Federal Republic of Germany via the Federal Office for Agriculture and Food (BLE) under the innovation support program. | 2018-04-03T05:23:55.905Z | 2018-02-15T00:00:00.000 | {
"year": 2018,
"sha1": "7cb45736c9d02edeb2a5738523bffc85fd8de86b",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.3382/ps/pey030",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "7cb45736c9d02edeb2a5738523bffc85fd8de86b",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
251190066 | pes2o/s2orc | v3-fos-license | Multi-Agent-Based Traffic Prediction and Traffic Classification for Autonomic Network Management Systems for Future Networks
: Recently, a multi-agent based network automation architecture has been proposed. The architecture is named multi-agent based network automation of the network management system (MANA-NMS). The architectural framework introduced atomized network functions (ANFs). ANFs should be autonomous, atomic, and intelligent agents. Such agents should be implemented as an independent decision element, using machine/deep learning (ML/DL) as an internal cognitive and reasoning part. Using these atomic and intelligent agents as a building block, a MANA-NMS can be composed using the appropriate functions. As a continuation toward implementation of the architecture MANA-NMS, this paper presents a network traffic prediction agent (NTPA) and a network traffic classification agent (NTCA) for a network traffic management system. First, an NTPA is designed and implemented using DL algorithms, i.e., long short-term memory (LSTM), gated recurrent unit (GRU), multilayer perceptrons (MLPs), and convolutional neural network (CNN) algorithms as a reasoning and cognitive part of the agent. Similarly, an NTCA is designed using decision tree (DT), K-nearest neighbors (K-NN), support vector machine (SVM), and naive Bayes (NB) as a cognitive component in the agent design. We then measure the NTPA prediction accuracy, training latency, prediction latency, and computational resource consumption. The results indicate that the LSTM-based NTPA outperforms compared to GRU, MLP, and CNN-based NTPA in terms of prediction accuracy, and prediction latency. We also evaluate the accuracy of the classifier, training latency, classification latency, and computational resource consumption of NTCA using the ML models. The performance evaluation shows that the DT-based NTCA performs the best.
Introduction
Traditional networks consist of several devices, including switches, routers, servers, deep packet inspection, firewalls, and hosts. These devices are traditionally implemented using hardware. A human administrator manages such small networks [1,2], through configuration and management interfaces. Typically, an administrator is necessary to configure and make the required changes. This is feasible only for relatively small size networks, albeit with limited performance and flexibility in terms of resource utilization, energy efficiency, service admission, latency, reliability, etc.
Nevertheless, within the last two decades, communication networks have tremendously evolved. It is now possible to interconnect a massive number of devices and a theoretical model, and the performance evaluation is performed using mathematical evaluation.
A similar approach with a focus on an agent-based algorithm was proposed in [16]. The authors presented a software agent to execute a set of computation loads in parallel, which enables a bottom-up systematic organization of the overall system. The paper discussed IoT objects to be an agent that is autonomous and able to collaborate with other peer agents. However, the approach is focused on high-level objects and does not consider protocol level and function level functionalities. Moreover, it is needed to consider service design approaches in comparison with microservices showing how to incorporate ML/DL algorithms in the agent design.
In this paper, using the design guideline from [12], we implement the internal composition of an NTPA and NTCA. We briefly introduce the ML and DL algorithms that are used as the cognitive component of NTCA and NTPA. We also propose and implement possible communication interface technologies. In addition to the legacy agent communication language (ACL), we suggest Restful-API, gRPC, and Websocket. This paper could be considered a continuation toward the implementation of MANA-NMS, the automation architecture proposed in [12] and an extended version of the work in [17]. The authors in [17] only considered classification agent design with limited evaluation that does not consider the resource consumption of agents. This is crucial in that agents are mostly deployed in a containerized environment with a resource-constraint environment, especially as the complete system may need a number of agents to be instantiated.
In [12], the authors used mathematical models and evaluation and left the implementation as future work. Therefore, this article tries to implement and simulate the ML/DL-based NTPA and NTCA and evaluate critical measures, such as accuracy and latency. The benefit and advantage of designing NTPA and NTCA agents are to use these agents as a service unit in the multi-agent system. In the simulation section, an example of the complete MANA-NMS system architectures is provided, incorporating the traffic classifier agent and traffic predictor as well as their role in the overall system as an example. The contributions of this work are as follows: • Design the internal architecture of NTPA and NTCA. • Implement and simulate both agents. • Evaluate the performance of NTPA and NTCA using accuracy prediction and classification, training latency, prediction and classification latency, and resource consumption, respectively.
The rest of the paper is organized as follows. Section 2 presents an overview of the network automation and enabling technologies. The prediction and classification agents' design architecture along with the agent communication technologies are discussed in Section 3.1. The implementation scenario for the proposed conceptual framework and dataset description is presented in Section 4. Section 5 presents the performance evaluation of the proposed NTPA and NTCA. Lastly, Section 6 concludes this work.
Overview of Network Automation and Enabling Technologies
This section provides a review of important concepts for network automation, including the applied ML/DL algorithms and multi-agent systems.
Overview of Network Management Automation
Autonomic network management is proposed as one of the solutions for the management of large and complex networks. As indicated above, it comes from autonomic computing, a manifesto started with IBM in 2001 [1,6,13]. IBM introduced the concept of self-management and self-adaptation of networks. Self-management implies that a network can make decisions on its own, while self-adaptation means that the network is aware of its environment and can adapt the performance according to the changes in the environment. The autonomic systems described by IBM are equipped with four self-management prop-erties [13], such as self-configuration, self-optimization, self-protection, and self-healing. These properties are discussed in detail below.
• Self-configuration offers the opportunity to discover new or evolving devices on a network, and to automatically establish routes and other configurations required to seamlessly connect the device. • Self-optimization is the ability of the network to change its settings to match its actions better with the system's user-defined purpose. • Self-protection requires a network to protect itself from a possible attack, such as denial-of-service (DoS) attacks or the modification of firewall rules based on alleged malicious traffic. • Self-healing is the capacity to correct, by itself, any issues that may occur, whether or not from the attempted behavior of the device.
Overview of the Multi-Agent System
Problem solving in complex environments requires distributed approaches. As a solution, distributed artificial intelligence (DAI) is proposed, where the system consists of many intelligent agents interacting with each other to achieve the goal [18,19]. DAI, traditionally, suggests two types of solutions. The first one is to break a problem into sub-problems. Then, micro solutions are provided to solve each sub-problem individually. The second solution is MAS, where autonomous agents cooperate with each other to provide a service. Figure 1 illustrates the general architecture of the MAS. Similar to distributed problem solving using microservice solutions, the agent works with other agents or individually to perform a given task. The agent also measures the environment continuously to obtain updated information about the environment [20]. The core characteristics of the MAS are discussed in Section 3.1 in the context of our agent definition.
Agent as a Decision-Making Element in Autonomic Networking
Agents have different definitions based on their applications in different environments [20]. Before defining network agents, we present a general definition incorporating all characteristics of the agents. An agent is an object that observes an environment and uses various parameters to execute an appropriate action at each step and achieve the goal. This definition is made of four keywords that are further explained below.
1.
Object: This refers to the kind of agent in different contexts, for example, a software agent, a hardware agent, etc.
2.
Environment: The environment receives the agent's actions and updates itself accordingly. As a result, the agent observes the changes and adjusts its actions.
3.
Parameters: These are the information that agents need to know to take the appropriate action.
4.
Action: The agent executes actions that result in changes in the environment. The action space can be continuous or discrete.
Each agent decides to perform its action with regard to time and resource constraints [20]. In order to perform a good action, the agent needs to interact with the environment as well as communicate with other agents through ACL, REST-API, gRPC, Websocket, as illustrated in Figure 1 to learn about the information. This information contains the previous agent's experience from interacting with the environment. This helps the agent with its decision-making process [21].
In this work, we present the MAS as an automated network management system, where the agents are known as the network decision elements (DEs).
Application of Traditional Machine Learning and Deep Learning as a Cognitive Component of Agents
As indicated in the Introduction section, and will be further discussed in the next section, one of the internal elements of an agent is the cognitive or reasoning component which enables the agent to perceive, reason, and decide. This would enable designing agents to be the smallest atomic element with autonomous and intelligent capability in performing a given task. Using such elements, we build the complete system. This is in line with the future service-based network design in a softwarized, containerized, and intelligent network environment. Incorporating in-network intelligence in the network functions is the prime target in 6G [3,4,22,23]. Hence, we require to incorporate ML/DL to create an intelligent agent, such as NTPA and NTCA.
In the next paragraph, we discuss the DL definition and its advantages and disadvantages over ML methods.
DL is a sub-field of ML that enables learning and decision capability in the agent, training the agent to learn like a human brain. DL statistics and predictive modeling enable agents to process massive datasets. It uses a layered structure and applies a nonlinear transformation to its input iteratively to learn the features and create a statistical model as output. There are several DL techniques that help with creating accurate predictive models from a large amount of unlabeled and unstructured data. We describe some of the techniques as follows: Although various neural network architectures have been proposed, such as RNN, CNN, and feedforward neural networks, they all function in similar ways. They benefit from a layered architecture, where input is given and the model figures out itself whether it has made the right decision about the data element by a trial-and-error process. Next, we list some advantages and disadvantages of the DL methods and compare them with ML algorithms.
Unlike the general ML, DL does not need to be given the feature set to make decisions. It learns the features by interacting with the environment and builds the feature set incrementally. Therefore, DL methods do not require the programmer to specify the features that the computer should be looking for. In other words, DL is able to learn without human intervention. While this makes the DL methods take much time to train, they are much quicker in testing than the ML algorithms. The ML test time increases as the data size grows. However, there are some issues with DL models, making data scientists choose traditional ML over DL. Its neural networks rely on the trial-and-error process. In any case, we used different ML and DL models for the NTCA and NTPA design and tested their performance.
LSTM-Based Network Traffic Predictor Agent
LSTM is an improved form of the traditional recurrent neural networks (RNNs) addressing the vanishing and exploding gradients problems using connected memory blocks in the layers. Each block consists of multiple cells containing three units: the input, output, and forget gates. Unlike the standard RNNs, this block design enables LSTM to capture long-term temporal dependencies in sequence learning problems. The network interacts with the cells through their gates. The role of the input and output gates is to achieve a constant error flow and avoid the irrelevant memory contact, respectively. However, an unstable error flow is observed through the backpropagation in existing RNNs [24,25].
GRU-Based Network Traffic Predictor Agent
GRU is a variant of LSTM and the improved version of standard RNN [24]. A GRU cell consists of two gates: update gate and reset gate to solve the vanishing gradient problem in recurrent neural networks. First, the reset gate is used to filter out the irrelevant information from the past time steps and create the memory content at the current time step. Next, the update gate is used to create the final content of the memory by deciding how much information still needs to be kept and passed to the output layer [24,25]. We describe the usage of the reset and update gates with mathematical notation below: first, the reset gate (similar to the forget gate in the LSTM setup) filters out the irrelevant past information as shown in Equation (1).
where m (t) represents the current memory content. The element-wise product between the reset gate and weighted previous memory content (r t W m t−1 ) determines what information needs to be removed from the past time steps. Next, the update gate determines what information from the current memory content and the previous time steps need to be passed through the network for computing the final result as shown in Equation (2), where u t is the update gate and controls the memory content.
MLP-Based Network Traffic Predictor Agent
MLP is a fully connected feedforward artificial neural network composed of many perceptrons (neurons) that are organized into at least three layers: an input layer, a hidden layer, and an output layer [26,27]. Each neuron in the layer except the input layer utilizes a non-linear activation function to learn the non-linear data. The learning process in an MLP is described as, when the data are given to the MLP, they go through their layers. Each node in the layer computes its error by comparing its output and the expected result. The goal is to minimize the error and then adjust the node weights [26]. The error is calculated by the following equation: where d j is the target value and s j is the value generated by the node j. The error for processing data point n by the nodes is obtained by the equation below: Next, the weight adjustment is calculated using the gradient descent: where s i is the output of the previous node, α is the learning rate to ensure that the weight corrections converge quickly, and a j is the induced local field or activation potential.
CNN-Based Network Traffic Predictor Agent
CNN is a kind of feed-forward neural network that has major application in image classification [27]. It has also applications in time series, where we can utilize it for network traffic prediction. It consists of an input layer, convolution layer, pooling layer, fully connected layer (as hidden layers), and an output layer. The convolution layer is a good substitute for a fully connected layer, as it is scalable to massive datasets. The role of the convolution layer is to reduce the dimension of the input images without losing critical features, allowing the network to be deeper and learn more easily.
A part of a convolution layer is the kernel/filter that has smaller dimensions than the input image and moves over the image and performs matrix multiplication until the entire image is traversed. This process produces a convoluted feature output, extracting high-level features of the image, such as edges. The next layer is called the pooling layer. The pooling layer is in charge of reducing the spatial size of the convoluted feature output even more, using two operations: max pooling and average pooling. Max pooling returns the maximum value of the calculated convoluted features corresponding to a portion of the image. Average pooling returns the average of all values generated by the kernel/filter. Finally, a fully connected layer (multilayer perceptrons) is added to classify the images using the softmax classification technique.
Decision-Tree-Based Network Traffic Classifier Agent
We presented four possible ML solutions for the NTCA design in [14] in more detail. For the readers' convenience, we briefly discuss these traffic classifier ML algorithms.
A non-parametric-based supervised learning approach utilized for classification is a DT. DTs learn from knowledge to approximate a sine curve using a sequence of if-then-else decisions. The longer the tree, the more complex the rule of decision in fitting a given model [28,29]. Figure 2 illustrates the DT diagram.
Naive Bayes Based Network Traffic Classifier Agent
NB is a classification technique that, given a finite set of class labels, assigns the labels to the data samples using a probability distribution [30]. It assumes each feature value independently builds the probability specifying the data sample's class regardless of the correlations between the features.
Support Vector Machine Based Network Traffic Classifier Agent
SVM attempts to find a hyperplane (decision area), having the maximum distance between the two categories of data points in high or infinite dimensional space [29][30][31]. Figure 3 illustrates the SVM diagram.
K-Nearest Neighbors Based Network Traffic Classifier Agent
K-NN classifies the data point based on the most common class among its k-nearest neighbors [29]. Figure 4 shows how K-NN can help one to identify the class of a new data point.
Proposed Network Traffic Predictor and Traffic Classification Agents
In this section, we discuss the proposed NTPA and NTCA frameworks. First, let us redefine and contextualize the agents. The agent has input, output, and communication tools to observe the environment, provide the output or execute the action, and communicate with other existing agents in the MAS. Once an agent is employed in a given environment and it observes the environmental variables, it can make decisions. The decision is based on the agent's final goals, which could be to predict the incoming traffic and classify them accordingly. By measuring the incoming traffic over a given link, it could analyze and use it for load balancing, resource slicing, or resource allocation. The important characteristics of an agent are as follows: • Situatedness: that means agents use sensors or perception capabilities as an interaction with the environment, and it uses actuators to perform actions on the environment. In the NTPA and NTCA contexts, it means taking incoming data as input and providing prediction and classification results as output, respectively. • Autonomy: the agents' internal state is protected from any external disturbance from other agents in the MAS. NTPA and NTCA are capable of performing their respective function autonomously without any external support. • Inferential capability: this enables agents to work on an exact goal until it is achieved. Agents can analyze the available data for decision making, such as network traffic prediction and traffic class determination. • Responsiveness: defined as the ability of an agent to perceive the environment and perform on it with minimal latency, which is important in real-time processes. Agents should be designed such that the task execution (prediction and classification in case of NTPA and NTCA) must be performed with a strict time delay. • Pro-activeness: means agents should take advantage of particular opportunities that aim to improve their action performance to dynamically adapt to the changes in the surrounding environment of the network. By analyzing the historical data, it could predict the amount of traffic that comes in a given link. Using this knowledge, it pro-actively allocates the required resources, such as bandwidth. • Social behavior: the agent's decision should not be affected by any external interference, either from human interactions or other agents in the system. NTPA and NTCA should be capable of performing independently but also could communicate and collaborate to perform a sequence of tasks, such as classification followed by prediction or vice versa. For instance, NTCA could classify traffic into various classes and provide the output to the predictor. Based on the classes, the predictor could predict the amount of incoming traffic of a given time in the coming hours or days.
Proposed Network Traffic Predictor Agent Architectural Framework
A given NTPA is equipped with an "input unit" that enables it to receive a network traffic processing request and network state sampling/measuring points. Furthermore, NTPA is equipped with several facts that are model features that could be the amount of incoming network traffic, the available bandwidth in a given link, IP addresses, source and destination ports, and labels (protocols) [20]. The main target to focus on is the "cognitive/reasoning unit". It is the brain of NTPA that enables it to have reasoning capability, which is typically a DL model. The "planning strategy" component of NTPA is the steps for the action to be performed in accomplishing the required processing. Another component of NTPA is a validation unit. It is composed of validation rules for the final decision, for example, knowing the amount of arriving network traffic of given traffic classes that helps make decisions. Finally, we have an "output or action" unit. This is the final outcome that could be a prediction result to be sent to the other agents. Figure 5 (bottom) shows the building blocks of an NTPA.
Proposed Network Traffic Classifier Agent Architectural Framework
The general architecture of NTCA is similar to NTPA. However, the actual values, parameters, and algorithms used are different. That means the input parameter, knowledge base, cognition algorithms, validation technique and the output for NTCA are different from NTPA. In general, taking the network traffic as input, NTCA classifies them into the classes to which the traffic belongs. NTCA helps other network agents in making network decisions, including QoS provisioning and monitoring based on SLA, resource provisioning, and network slicing. The NTCA design and implementation framework is depicted in Figure 5 (top).
In particular, simpler than NTPA, NTCA has an "input-unit" that can be used in receiving a service request and network state sampling/measuring points. In addition, NTCA has facts, i.e., incoming traffic features, which could be the size of a packet, source and destination IP addresses, protocols, etc. The "cognition/reasoning unit" is the component that gives the central intelligence for NTCA. It uses it to identify and classify traffic. The "planning strategy unit" is the steps/procedures in providing the class of the traffic. Another component of NTCA is the validation unit. It is composed of validation rules for the final classification decision, i.e., knowing the amount and type of arriving network traffic of given traffic classes that helps make decisions. Finally, the remaining NTCA component is an "output unit", which could be a classification result to be sent to other agents.
Communication Interface between Agents
The communication interface enables the creation of an agent chain for collaboration and cooperation to create a complete MANA-NMS. Agents could be deployed in a containerized and distributed environment that communicate based on open-source communication interfaces. The standard communication interface for agents is ACL [32]. ACL enables a complex knowledge and information exchange mechanism which includes facts and a knowledge base, the agent's goal, strategy, plans, and outputs. We used ACL as a communication interface in our simulation. However, other generic communication interfaces were tested in our work in [14]. These are REST, WebSocket, and gRPC. Each of them has its pros and cons in terms of latency for web-based communicating intelligent agents.
Implementation Scenario and Conceptual Framework
We used the osBrain [33] library for the design and implementation of NTPA and NTCA. osBrain is an agent simulation environment implemented using Python.
We provide the hardware configuration information shown in Table 1, as it affects the prediction results. Therefore, the same hardware was used to run the LSTM, GRU, CNN, and MLP models.
Agents within the MAS are running independently as system processes (uses multiprocessing). Since osBrain uses ZeroMQ [34], agents communicate using zeroMQ. zeroMQ is a messaging library for flexible and efficient communication between agents. It is an open-source universal messaging library that uses asynchronous communication between agents in the MAS. The basic communication patterns used by osBrain are Push-Pull, Request-Reply and Publish-Subscribe.
Implementation Conceptual Framework for MANA-NMS Using NTPA
NTPA was implemented in a MANA-NMS framework as a proof of concept. The MANA-NMS architecture is depicted in Figure 6, showing the integration of NTPA along with the dispatcher, the audit log agents, and other service processing agents. NTPA mainly uses LSTM. However, we also used GRU, CNN, and MLP algorithms to compare the performance against LSTM-based NTPA. MANA-NMS is assumed to be deployed in an edge/cloud data center, where the incoming traffic is collected at the gateway. We assumed a Poisson arrival process for the service arrival into the system. The Poisson arrival process is considered with a 1st degree approximation of the traffic arrival. However, other models could be used, such as fractal process arrival.
We collected the amount of incoming traffic to observe the service request distribution in time. The data were collected for 1-year period. To store and manage the history of traffic data for future retraining of agents, a centralized database can be used. Alternatively, it could be possible to provide the agent with database management capability. In the agent interaction dynamics, agents are responsible for pulling results from other agents and logging their actions. NTPA uses the collected dataset to train or retrain its' internal cognitive reasoning algorithms. The training of the DL component of NTPA could be performed centrally (remotely) or locally (at the agent). The training process is performed periodically. After training is complete, NTPA predicts the incoming traffic for the next hours/days. The predicted data indicate the real incoming traffic demands at a given time.
By using the NTPA prediction traffic values, the resources/agents are dynamically allocated, creating an agent chain for that particular period. In other words, the predicted value is used to instantiate the number of service agents by the orchestration agent through a request communication protocol channel. Then the orchestration agent creates the required agent chain that is considered the network resources to perform the required service execution. This can save a lot of unused resources at some time of the day. By doing so, we optimize the performance of the network, reduce the running cost, and increase the network efficiency.
Moreover, we have a dispatcher agent (event handler agent), whose goals are to add the incoming daily traffic data/services in a FIFO queue, divide the traffic into three different queues consulting the NTCA and based on the task/service priority, whether high, medium, or low, and communicate through three different communication channels to the service agents (A, B, C).
The scheduling agent schedules the incoming tasks using the first-in-first-out (FIFO) queuing algorithm while using the classifying agent to classify the tasks into high, medium, or low priority classes. This represents three main application areas in 5G network ultrareliable machine-type communication (uMTC), massive machine-type communication (mMTC) or extreme mobile broadband (xMBB), respectively. The service agents execute tasks scheduled in three different queues depending on the task priority. They represent the complementary service agents in the MANA-NMS. Three types of service agents are assumed to serve 5G/B5G/6G traffic, which are (a) service agent type A serving the high priority queue (uMTC), (b) service agent type B serving the medium priority queue (mMTC), and (c) service agent type C serving the low priority queue (xMBB). Finally, once the service agents finish their processes, the audit-log agent will pull the results through three separate communication channels and save them.
Implementation Conceptual Framework for MANA-NMS Using NTCA
Similar to the NTPA, we used the NTCA design guideline presented above, as depicted in Figure 5, for the agent implementation. Using the input traffic, NTCA classifies the incoming network traffic into the appropriate classes. NTCA helps with important decisions in network management, for instance, for QoS provisioning, resource provisioning, dynamic network slicing, etc. In performing autonomous service management and provisioning, such as resources provisioning and task scheduling, it is necessary to classify all incoming network traffic as per the protocol to which it belongs to. The protocols indicate the application or category that the traffic/user or service belongs to. The NTCA uses an ML algorithm as an internal cognitive part to perform the required traffic classification.
Supervised ML algorithms are employed for the NTCA implementation. The supervised ML uses a labeled (protocols or port numbers) network traffic dataset. The supervised learning models used are K-NN, DT, SVM, and NB, and their performance is compared. We also assume that the NTCA can collect incoming network traffic and store it as a historic dataset. This dataset will be used by the agent to label and utilize it in training and re-training the internal cognitive component. The assumption is that the dataset will be stored in a centralized/distributed database: it could be internal or through the use of other database management agents. In our case, NTCA can handle the dataset and enable the dataset to retrieve it when needed.
In the NTCA operation, the traffic dataset is collected using Wireshark in real time. About 100,000 entries are collected while surfing the internet for about an hour.
The data are saved in CSV format and used in the feature extraction and feature selection stages. The extracted features are the source and destination IP address, packet size, and protocol. The main goal is to demonstrate the implementation of an NTCA before using it to recompose a MANA-NMS system.
NTCA is a building block in the MANA-NMS framework as depicted in Figure 7. The NTCA collects the incoming network traffic and classifies them into three different classes. Similar to the NTPA implementation, we assumed high priority (HP), for uMTC service types, medium priority (MP) for mMTC service types, and low priority (LP) for xMBB service types. After the network traffic is classified, it is sent to an event handler (task manager/dispatcher). The event handler then decides where to schedule the traffic relaying based on the scheduling policy. A FIFO policy is used in the case of traffic belonging to the same classes. Then a task is scheduled onto the appropriate service agent to be executed.
Job/Task/Event Handler and Queuing Scheduling
A task (event) manager agent is another type of agent in a given MANA-NMS architecture. It receives the classified network traffic from NTCA. We assumed every class of the network traffic as a task requiring execution in the service processing agent. The task manager decides which service processing agent should handle the service. This decision is performed using a scheduling policy or intelligent scheduling agent.
Scheduling Policy
"Join Class-Related Queue" is used for scheduling. Scheduling is a policy that sends tasks to the queues of the appropriate service processing agent. The agent should be subscribed to that particular task execution. For instance, when an HP class job is sent by the event manager, it will be placed in the queue of a particular service agent that is allocated to process an HP task.
Queuing Model Employed
We assumed an M/M/1 Markovian model for the queue. The queuing policy is assumed to be FIFO similar to the NTPA scheduling policy. In general, M/M/1 means that the system has a Poisson arrival process with an arrival rate of λ, an exponential service time distribution with service time µ, and one server. Jobs arrive at the server's queue at independent exponentially distributed time intervals with mean 1/λ, they join the queue, and once in the head of the queue, they get served by the server whose service time per job is independent and exponentially distributed with parameter 1/µ. A task is scheduled into the appropriate queue when the task manager/event handler releases it. We assumed three different queues that are associated with a particular service agent chain, which indicates the three classes of services to be scheduled in the appropriate agent chain.
Performance Evaluation
This section presents the performance evaluation results of NTPA and NTCA equipped with different DL and ML models, respectively. We used metrics such as prediction and classification accuracy, training and predicting latency, and memory and CPU utilization to evaluate the performance. Therefore, at the end of this section, we will able to identify appropriate DL/ML-based NTPA and NTCA for different prediction/classification network applications with respect to their time and resource constraints. Table 2 shows the DL models parameters used in the experiments. Hidden-layer 1 1 1 1
Accuracy of NTPA Using Different DL Models
We used the sliding window technique to train our DL models and cross validation to avoid overfitting with a validation split of 0.1 as mentioned in Table 2 along with other training parameters. The validation split determines the fraction of training data used for validation. The validation data are not used for training but evaluate the loss at each epoch. Figure 8 illustrates the performance of the different Deep learning models, where the x-axis is day and the y-axis is traffic (terabit per second (Tbps)). In our experiment, we predicted traffic over different days with different DL models (GRU, CNN, MLP and LSTM) and took the average of the traffic and compared it with real traffic. Figure 8 shows the LSTM, GRU, MLP, and CNN based NTPA prediction results against the actual network traffic. As shown in the figure, there is no significant difference between their prediction results, while MLP predicts the network traffic differently. Overall, LSTM outperforms other models.
To measure the accuracy for different models, we used two methods: root mean square error (RMSE) and mean absolute percentage error (MAPE). RMSE is a measure that evaluates how good or bad the prediction is, using the Euclidean distance by calculating the difference between the true value and prediction. Additionally, MAPE is a different method that measures the prediction accuracy as a ratio. RMSE and MAPE are expressed as follows: where in Equation (6), N is the number of data points, p(i) is the measured true value, andp(i) is the predicted value. In Equation (7), n is the number of fitted points, V i is the actual value, and P i is the predicted value. Figures 9 and 10 demonstrate the accuracy for DL models with respect to a different dataset size. We calculated the averages for one week, one month, three months, six months, and one year for the MSE and MAPE errors and compared different models. As shown in the figures, LSTM is the most accurate model to predict the network traffic. Additionally, as the training dataset size grows, the error value decreases. In other words, training the NTPA with larger datasets provides higher prediction accuracy. The RMSE error values for 1 year long dataset as shown in Figure 9, for LSTM, GRU, CNN, and MLP are 0.03, 0.06, 0.2, and 5.7, respectively. Moreover, the MAPE error values for 1 year long dataset as shown in Figure 10, for LSTM, GRU, CNN, and MLP are 0.24, 0.44, 0.77, and 7.17, respectively. Both figures indicate that LSTM has the most accurate prediction performance. Figure 11 shows the comparison of training latency for the LSTM, GRU, CNN, MLP models. Training latency is represented as the amount of time taken in training an NTPA. The training latency was measured over different dataset size. According to the figure, we see that CNN has the lowest training latency compared to other models. However, GRU has the highest training latency. The average training latency values for 1 year long dataset over 20 training trials for the CNN, MLP, LSTM, and GRU models are 74.036 s, 120.873 s, 269.647 s, and 291.274 s, respectively. In addition to the training latency measure, we assessed the predicting latency. Predicting latency is defined as the amount of time taken to predict the traffic at a previously unknown time slot. As shown in Figure 12, the predicting latency was evaluated over a simulation period of one hour. On average, the LSTM, MLP, CNN, and GRU NTPA designs have predicting latency values of 0.769 µs, 0.834 µs, 1.025 µs, and 1.523 µs, respectively.
Training Latency and Predicting Latency of NTPA Using Different DL Models
The result shows that NTPA takes longer to be trained; however, it takes a shorter amount of time to predict, compared to NTCA. This confirms the advantage of the DL methods over ML methods, where a DL model is much quicker in testing, while the ML model's testing time will increase as the dataset size grows as mentioned in Section 2.3. We examined the memory usage (in MB) of DL models. As shown in Figure 13, we observe that the GRU-based NTPA has the highest and the CNN-based NTPA has the lowest memory usage compared to other models.
CPU Usage
We also examined the CPU usage of DL models. The result is shown in Figure 14. We observe that LSTM has the lowest CPU usage compared to other models. Additionally, as seen in the figure, the CPU usage of GRU is the highest.
Accuracy of NTCA Using Different ML Models
We evaluated our ML algorithm for the NTCA designs. We split the collected data into training (80%) and test (20%) sets. The classification accuracy is considered by various factors, such as the dataset size and the number of features.
For the sake of generating a generic result that is consistent across time and multiple trials in a given environment for a given model, we ran the NTCA algorithms several times, running our models 10 times. We calculated the percentage of error made by the classifier agent for a number of classification trials. We evaluated the mean and variance in terms of accuracy for different NTCA using various ML models. The result is presented in Table 3.
According to the table, we can understand that the DT-based NTCA is the most robust and accurate of all the classifier agents. This is because it has the smallest variance while the algorithm is run multiple times. Training latency refers to the delay incurred in training a given ML model. The mean training latencies of 154.3, 154.9, 155.1, and 162.2 ms were measured for the DT-, K-NN-, NB-, and SVM-based NTCAs, respectively. It is worth mentioning that the DT-based NTCA design exhibits the lowest average training latency with 154.3 ms, as shown in Figure 15. We observe that NB, DT, and K-NN models train faster than the SVM-based NTCA model. Classification latency is how long it takes to classify new data. In Figure 16, we compare the NTCA classification accuracy, simulating over a one-hour simulation period. The average latencies of K-NN-, SVM-, NB-, and DT-based NTCAs are 28.01 µs, 3.668 µs, 2.773 µs, and 1.593 µs, respectively. Relatively, SVM, NB, and DT NTCAs impose the least mean latency, which is less than 4 µs. Additionally, the DT-based NTCA has the least average latency, which is 1.593 µs. 5.6. Computing Resource Utilization of NTCA Using Different ML Models 5.6.1. Memory Usage Figure 17 shows that the NB traffic classifier agent design uses the least average memory of 141.1 megabytes (MB) to train the model and classify new data instances. Despite the fact that the DT-based NTCA design offers the best classification accuracy of 99.504%, it uses the most memory resources amounting to 150 MB. This gives the system designer a chance to make a trade-off between the classification accuracy and memory usage while trying to choose a traffic classifier agent design. Figure 18 shows the statistical differences of each traffic classifier performance, including the minimum, mean, and maximum CPU usage. The SVM traffic classifier has the least maximum CPU utilization of 31.2%, while the DT-based NTCA has the largest maximum CPU utilization of 79.2%.
Conclusions and Future Work
Traffic classification and prediction are essential for network resource management, QoS provisioning and monitoring, security, and network failure detection. For example, proactive-based network management, resource allocation, and traffic scheduling can be performed using the prediction result. An organized introduction of intelligence in the network is required to have automated network control and management. Using multi-agent-based service-orientation, a MANA-NMS architecture was presented in our previous work.
In this work, we designed and implemented traffic classifier and predictor agents. Both NTCA and NTPA leverage ML/DL as an internal cognitive component of the agents to use in their decision process. The agents are autonomous in performing their respective decision. Using different cognition elements in the design and implementation, the performance of both agents is evaluated in terms of classification and prediction accuracy, classification and prediction latency, training latency, and computation resource utilization. The result suggests that LSTM-based NTPA and DT-based NTCA perform better than the rest in terms of prediction and classification accuracy and latency, respectively.
As a future work, a realistic testbed implementation is required. Moreover, a complete system using the autonomous and atomic agents as a building block is necessary. However, we implemented only NTPA and NTCA in this work. Therefore, it is necessary to have other services, such as routing, QoS monitoring, 5G core access and mobility management function (AMF), session management function (SMF), firewall, deep packet inspection, and other novel functions to have a complete MANA-NMS model for network systems, such as 5G wireless network, autonomic network management, etc. Thus, it is an open challenge to implement such functions as intelligent agents. Furthermore, the agent is expected to be deployed in a container or virtual environment. The experimental evaluation of such deployment is also another open challenge that we will be working on.
Author Contributions: Conceptualization, methodology, formal analysis, investigation, supervision, original draft preparation, writing-review and editing, S.T.A.; methodology, formal analysis, writing-review and editing, original draft preparation, Z.A. and M.E.; resources, supervision, project administration, writing-review and editing, and funding acquisition, M.D.; writing-review and editing, supervision and funding acquisition, F.G.; All authors have read and agreed to the published version of the manuscript.
Funding: This work has been partially funded by NATO Science for Peace and Security (SPS) Program in the framework of the project SPS G5428 "Dynamic Architecture based on UAVs Monitoring for Border Security and Safety", and partially by the US National Science Foundation under the New Mexico SMART Grid Center-EPSCoR cooperative agreement Grant OIA-1757207. | 2022-07-31T15:07:50.144Z | 2022-07-28T00:00:00.000 | {
"year": 2022,
"sha1": "1fc0cd80b44e16eb339da06051d45ae0becaa241",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1999-5903/14/8/230/pdf?version=1659001557",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "15955853b96fe87a294a7af058c94b53e0f9d873",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
213626760 | pes2o/s2orc | v3-fos-license | Distinguishing between Bos and Bison petrous bones. A case study: bovines from the Des-Cubierta Cave (Pinilla del Valle, Madrid)
The taxonomic identification of large bovine remains in archaeological and palaeontological sites is important in order to infer the palaeoenvironment of these sites and to know if their inhabitants were hunters of Bos or Bison . Their presence may also have biostratigraphic or archaeozoological implications. Although the petrous bone is one of the elements of the skeleton with the greatest preservation potential in prehistoric sites, due to its hardness and compactness, it is not frequently used by the palaeontologists to distinguish between Bos and Bison , the two genera commonly present at the sites during the Pleistocene. Due to the abundance of petrous bones at the Late Pleistocene layers of the Des-Cubierta cave, the aim of this work is to identify Bos and Bison in this site through the morphological features defined by Guadelli (1999) for this bone and using morphometric geometrics with material of Bos taurus , Bos primigenius , and Bison priscus in order to identify the differences among petrous bones of these species.
The steppe bison was one of the most abundant large mammals in the north of the Iberian Peninsula during the Late Pleistocene. It was frequently represented in this region in Upper Palaeolithic engravings and paintings (González Echegaray & González Sáinz, 1994;Altuna & Mariezkurrena, 2014). The steppe bison, together with the mammoth and the woolly rhinoceros, is one of the main taxa represented in typical tundra-steppe wildlife associations, which feed on pastures in open landscapes (Guthrie, 1990;Sher, 1971;Brugal, 1985).
The aurochs (Bos primigenius Bojanus, 1825) is the unique ancestor of cattle livestock (Bos taurus Linnaeus, 1758) (Zeuner, 1963;Clutton-Brock, 1999). It inhabited the Iberian Peninsula from the Middle Pleistocene (Bos primigenius was found for the fi rst time in Torralba and Ambrona sites, dated around 500 ka: Soto et al., 2001) to the Holocene. The latest bone remains of this species have been found at the Roman sites from the Basque Country (Altuna & Mariezkurrena, 2002). The aurochs probably inhabited in wet habitats, such as river valleys, lake fringes, marshy forests (Van Vuure, 2005) and open woodlands (Kurtén, 1968).
Although Bos and Bison did not have the same ecological preferences, they have been found together at some Iberian sites, in some cases in the form of engravings and paintings (e.g., Ekain, Altxerri; Altuna & Mariezkurrena, 2014), and in other cases as associations of bone remains of both species (e.g., at the Lezetxki site and the Morín site; Altuna, 1972, and at the Búho and Zarzamora caves; Sala et al., 2010).
Assigning the large bovine remains from the Middle and Late Pleistocene sites to one of these species is complex because both species have a similar size and bone morphology. There is an extensive literature devoted to the distinction of Bos and Bison (Schertz, 1936;Bibikova, 1958;Stampfl i, 1963;Sala, 1986;Altuna, 1972;Brugal, 1985;Gee, 1993;Buitrago-Villaplana, 1992;Prat et al., 2003;Sala et al., 2010;among others). The skeletal elements that allow us to better distinguish between the two species are the metacarpals and the skull (especially the horn cores, the frontals, parietals and occipitals) (Lavocat & Piveteau, 1966). Nevertheless, these diagnostic bones are not represented at most sites or they are not well preserved enough to allow for an unequivocal identifi cation. In these cases, the fossil remains are often grouped without distinction into the category "Bos/Bison".
In this context, the purpose of this contribution is to highlight the importance of the petrous bone as an element that can serve to identify the large bovines found at Iberian sites. The petrous bone is, along with the teeth, one of the skeletal elements with the greatest preservation potential in archaeo-palaeontological sites, due to their hardness and compactness (Lam et al., 1999;Bar-Oz & Dayan, 2007) and constitutes one of the best anatomical elements for extracting DNA (Gamba et al., 2014). Some previous works have demonstrated the existence of morphological diff erences between the petrous bones of aurochs and steppe bison (Guadelli, 1999). In this work, these diagnostic criteria are applied to the identifi cation of a relatively large sample of large bovine petrous bones from the Late Pleistocene levels of the Des-Cubierta cave site (Pinilla del Valle, Madrid) in order to confi rm their validity by crossing the identifi cations obtained from the study of petrous bones with that of the identifi cation of the partial skulls to which they belong.
The GM (geometric morphometrics) is applied for the analysis of shape of the internal auditory canal, which is considered a diagnostic element to distinguish Bos and Bison, according to Guadelli (1999).
The site
The Des-Cubierta site is one of the sites located at the Calvero de la Higuera in Pinilla del Valle (Community of Madrid) (Fig.1). It was discovered in 2009 and since then it has been the subject of annual excavations. It has a long stratigraphy that includes Middle and Late Pleistocene sediments.
have been recovered at the site. Therefore, in this work, this bone is going to be used to identify these bovines.
The material
The studied material is composed of 27 petrous bones: 16 petrous bones from Bovinae of the Pleistocene levels 1, 2, 5 and 101 from the Des-Cubierta cave site (Pinilla del Valle, Madrid) and 11 that belong to modern individuals of Bos taurus (Table 1). One petrous bone from Des-Cubierta cave, associated with its corresponding Bison priscus skull, and 11 that belong to modern Bos taurus have been used for comparison. Bos primigenius is the ancestor of modern cattle, Bos taurus, and, because petrous bones of aurochs are so diffi cult to fi nd, the petrous bones of modern cattle have been used for the analysis. Seven of these come from the osteology collection of the Veterinary School's Anatomy and Embryology Department at the UCM (University Complutense of Madrid), two from the animal bone comparative collections of the UCM-ISCIII Center on Human Evolution and Behaviour (Madrid) and two from the animal bone comparative collection of the Regional Archaeological Museum in Alcalá de Henares.
In order to distinguish between Bos taurus, Bos primigenius and Bison priscus, 33 photos of medial faces of the petrous bone were used, according to the list included in Table 1. Diff erent cameras were used in the taking of the images of the material from the Des-Cubierta cave: Sony DSC-H50; Nikon D500 and Nikon D810 (Mario Torquemada, Regional Archaeological Museum at Alcalá de Henares).
Methodology
In order to appreciate the morphology of petrosal bone in the skulls of present-day bovines, two skulls belonging to the collections of Anatomy and Embryology Department's laboratory at UCM's Veterinary School were cut longitudinally and prepared for study.
A complete morphological description, as well as metrical analyses, have been carried out from the diff erent faces whenever possible, following the criteria defi ned by Guadelli (1999). The terminology used in the description of the diff erent parts of the medial face from a bovine petrous bone is shown in Figure 2.
Metrical analysis
The metrical data have been taken using a Sylvac digital caliper (03.02/SYL-235-F, D, E/681.046-100) to the nearest The recovered bovine bones that have been studied in this work were found in the Late Pleistocene levels. A Homo neanderthalensis mandible and some teeth have been discovered in these levels . Nevertheless, the most outstanding feature of these levels is the vast amount of partial crania of Bison priscus, Bos primigenius, Cervus elaphus, and Stephanorinus hemitoechus . At least 20 individuals of bovines have been identifi ed using horn cores (the most abundant element). Most of the horn cores belong to Bison priscus (15 individuals) and only 2 horn cores belong to Bos primigenius (2 individuals), although the number of identified individuals is growing up as new excavation campaigns take place. In some cases in which the apexes are the only preserved part of the horn cores, the generic adscription of some individuals has not been possible. In order to determine the abundance of both taxa in this site, some cranial bones other than the horn cores can be used in the identifi cation of bovine remains. The petrous bone is one of the best candidates. This element constitutes the auricular or tuberous portion of the temporal bone, and includes its tympanic part. Due to its preferential preservation with respect to other parts of the skeleton and the skull in particular for the reasons indicated above, numerous well-preserved petrous bones Guadelli (1999) 3) Dorso-ventral diameter of the internal auditory meatus (DvdIAM) (3): length of the meatus from one extremity to the other in the dorso-ventral direction. 4) Rostro-caudal diameter of the internal auditory meatus (RcdIAM) (4): length of the meatus from one extremity to the other in the rostro-caudal direction.
For each petrous bone, an attempt has been made to characterize its morphology according to the criteria that distinguish Bos primigenius from Bison priscus according to Guadelli (1999), which are the following: 1) On the medial face (Fig. 4), the internal acoustic meatus of Bison is more elongated than in Bos. In order to test this feature we have used an index: the ratio rostrocaudal diameter (4) with the dorso-ventral diameter (3) of the internal auditory meatus (Meatus acusticus internus). According to the internal canal description by Guadelli (1999), this index should be close to 1 for Bos and, further away from 1 and greater than 1 for Bison. The imprint of the trigeminal nerve (located between the antero-inferior apex and the petrous crest) is clearly concave in Bison, as opposed to Bos in which it is weakly concave or even almost rectilinear. The ventral part under the imprint of the trigeminal nerve does not protrude downwards in Bison (no salient towards the basis), whereas it clearly protrudes downwards in Bos (salient towards the basis). Fig. 2) criteria, for the medial view (see Fig. 2 for nomenclature and Fig. 3 for measurements): 1) Dorso-ventral diameter (Dvd) (1): length between the antero-inferior apex and the ventral edge of the cerebellar fossa (measured on the rostral part of the fossa).
2) Rostro-caudal diameter (Rcd) (2): length between the edge of the petrous crest and the edge of the caudal crest (measured along the cerebellar fossa). 2) The extent of the rostral face is very developed in Bovinae. In the case of Bison, the Fallopian hiatus hole (Canaliculus nervi petrosi majoris) is wide and a groove duct on the rostral surface of petrous bone is observed towards the ventral face. In Bos, the Fallopian hiatus opens directly downwards because it is not traversed by a groove (Fig. 5). Moreover, the rostral face is relatively more extensive in modern cattle than in aurochs.
3) The caudal face supports the basioccipital bone. In Bison the caudal face is regular. From back to front, a fl at surface, a triangular depression, often wide and deep, and a small hole where a bone spine protruding from the occipital bone are observed. In Bos, it is irregular and undulating and there is no triangular depression like in Bison.
In all descriptions, the petrosal bone is considered to be in anatomical position, with the orientation relative to the Frankfort plane. This means that the cerebral face corresponds to the rostral face and the caudal face to the occipital face (according to the methodology from Mallet & Guadelli, 2013).
Geometric morphometrics
For the taxonomical identification of Bos taurus, Bos primigenius, and Bison priscus, the 32 photos were used (the medial face of one petrous bone, CDC10/H'49/2/158a, was broken so it wasn't included for this analysis). Using geometric morphometrics, the internal auditory canal was analysed and 6 landmarks were defi ned (Fig. 6): Landmark 1: the rostro-ventral apex of the internal auditory canal; Landmark 2: the most rostral apex of the internal auditory canal; Landmark 3: the rostro dorsal apex of the internal auditory canal; Landmark 4: the caudo-dorsal apex of the internal auditory canal; Landmark 5: the most caudal apex of the internal auditory canal; Landmark 6: the caudoventral apex of the internal auditory canal. The landmarks were digitalized using TpsDig v.2.17 (Rohlf, 2015). After digitizing all landmarks using TpsDig, the PCA analyses were performed using the Mor phoJ v. 1.06d (Klingenberg, 2011).
The medial or cerebellar face
In Bos taurus, the trigeminal nerve imprint (Impressio nervi trigemini) is only very slightly concave and even completely fl at. The index (4/3) that shows the shape of the internal auditory meatus (Meatus acusticus internus) is less than 1.6 (Table 3), which indicates the internal auditory meatus is not very elongated. In all cases of the Bos taurus petrous bones, a salient ventral-caudally is observed below the antero-inferior apex (Apex antero inferior partis petrosae).
In the bovines from Des-cubierta cave, the imprint of the trigeminal nerve (Impressio nervi trigemini) is concave in most cases of the fossil bone remains from the Des-Cubierta cave, except in CDC10/I'41/1/29a and CDC12/ I'40/101/36a, where the concavity is not so clear ( Table 2). The index (4/3) of the internal auditory meatus is far from 1, which indicates that the shape is elongated, according to the criteria observed by Guadelli (1999). The index (4/3) is closer to 1 in three cases of the bone material from Des-Cubierta cave: CDC10/H'49/2/86, CDC10/I'41/1/29a, CDC10/I'41/1/29e (see Table 3). This means that the internal auditory canal is not as elongated as in other fossil remains. No salient is observed below the antero-inferior apex in most of the cases.
Osteometry
In all cases of the petrous bones for the comparative modern cattle, the index 4/3 is less than 1.6 (from 1.18 to 1.56) ( Table 3). This indicates an internal auditory canal that is a bit elongated, according to the Guadelli (1999) criteria.
Geometric morphometrics analysis of the internal auditory canal
The PCA was performed on Bos taurus, Bos primigenius, and Bison priscus and the bovine petrous bones from Des-Cubierta cave (Fig. 7). The PC1 and PC2 explain respectively 56.88% and 18.39% of the total variance, thus explaining 75.27% of the shape variation within the sample. The variation expressed by the PC1 showed diff erences in the width of the internal auditory canal between Bos taurus, Bison priscus, and Bos primigenius. These diff erences are illustrated in the Figure 7. In general, the internal auditory canal in Bos taurus is wider than in Bison priscus. The internal auditory canal of Bos primigenius is wider than Bison and included within the range of variability of Bos taurus. Nevertheless, the width of the internal auditory canal for Bison priscus does not overlap with the width for Bos primigenius (Table 3). Accordingly (Fig. 7) seven of the 15 petrous bone remains from Des-Cubierta cave fall within the ellipse of Bison priscus. Three samples fall within the ellipse of Bos primigenius and four samples are closer to the ellipse of Bos taurus and Bos primigenius than Bison priscus.
The rostral face (Table 4)
In Bos taurus, on the rostral face, the Fallopian hiatus opens directly downwards. No groove from the Fallopian hiatus is observed. The Fallopian hiatus hole is wide and there is a groove opening towards the ventral face in most petrous bones from the bovines of Des-Cubierta cave. This is a typical characteristic of Bison, according to Guadelli (1999).
The caudal face (Table 5)
In Bos taurus, the caudal face has not been morphologically described due to the lack of visibility of this face because all cattle petrous bones were connected to their skulls. In most cases of the bovines from the Des-Cubierta cave, a fl at surface following a triangular depression is observed. A small hole is present in almost all of the petrous bones.
DISCUSSION AND CONCLUSSIONS
In the medial face, the index (4/3) for the internal auditory canal is less than 1.6 in all cases for Bos taurus and larger than 1.6 in most cases of the petrous bone remains from Des-Cubierta cave (Table 3).
Using geometric morphometrics for the medial face, we can see that, in general, the internal auditory canal in Bos taurus is wider than in Bison priscus, in agreement with the criteria proposed by Guadelli (1999). The internal auditory canal width for Bos primigenius is included within the metrical range for Bos taurus. There is no overlap between the internal auditory canal width for Bos primigenius and Bison priscus. According to the analysis, there are no other shape diff erences, since the most determining variables are width and length.
In seven of the 15 petrous bone remains from Des-Cubierta cave, both the morphological features and the geometric morphometrics analysis indicate that these belong to Bison priscus (Table 6). In 2 cases, CDC10/ I'41/1/29a and CDC10/I'41/1/29e (possible same individual), both the morphological features and the GM analysis indicate that they belong to Bos primigenius. In fi ve cases, the morphological features do not concur with the MG analysis and, thus, these petrous bones are classifi ed as Bos/Bison. The morphological features of CDC10/H'49/2/158a suggests that it belongs to Bison priscus but the MG analysis could not be carried out because the bone is broken. | 2020-01-30T09:03:44.433Z | 2020-01-21T00:00:00.000 | {
"year": 2020,
"sha1": "0d17fdfcc33dcd9d7908aa44744235dde52e53c9",
"oa_license": null,
"oa_url": "https://ojs.uv.es/index.php/sjpalaeontology/article/download/16115/15011",
"oa_status": "GOLD",
"pdf_src": "Unpaywall",
"pdf_hash": "0d17fdfcc33dcd9d7908aa44744235dde52e53c9",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Geography"
]
} |
2323827 | pes2o/s2orc | v3-fos-license | Comparative Study between Transcriptionally- and Translationally-Acting Adenine Riboswitches Reveals Key Differences in Riboswitch Regulatory Mechanisms
Many bacterial mRNAs are regulated at the transcriptional or translational level by ligand-binding elements called riboswitches. Although they both bind adenine, the adenine riboswitches of Bacillus subtilis and Vibrio vulnificus differ by controlling transcription and translation, respectively. Here, we demonstrate that, beyond the obvious difference in transcriptional and translational modulation, both adenine riboswitches exhibit different ligand binding properties and appear to operate under different regulation regimes (kinetic versus thermodynamic). While the B. subtilis pbuE riboswitch fully depends on co-transcriptional binding of adenine to function, the V. vulnificus add riboswitch can bind to adenine after transcription is completed and still perform translation regulation. Further investigation demonstrates that the rate of transcription is critical for the B. subtilis pbuE riboswitch to perform efficiently, which is in agreement with a co-transcriptional regulation. Our results suggest that the nature of gene regulation control, that is transcription or translation, may have a high importance in riboswitch regulatory mechanisms.
Introduction
For decades, genetic expression has generally been thought to be mostly regulated at the promoter level. Nevertheless, the description of many new mechanisms, such as small regulatory RNAs and ribozymes, clearly indicates that post-transcriptional regulation is as important as transcription initiation [1,2]. Among the newly characterized mechanisms of post-transcriptional regulation are riboswitches, which are genetic modulators located in untranslated regions of mRNAs. Riboswitches are cellular sensors that modulate gene expression through their ability to alter their conformation in response to cellular changes [3][4][5]. These RNA switches, which have been observed in all kingdoms of life, can regulate transcription, translation, mRNA processing and mRNA splicing [3]. Riboswitches use various factors to control gene expression [3], such as metal ions [6,7], temperature [8,9], small metabolites [1,3,[10][11][12], or uncharged tRNAs [13,14], and mostly employ structural rearrangement to achieve gene expression regulation [5]. Recently, riboswitches have been found to regulate in trans the expression of the virulence factor PrfA in Listeria monocytogenes [15], suggesting that riboswitches may use an even wider range of regulation mechanisms than previously thought.
Riboswitches are composed of two modular domains consisting of an aptamer and an expression platform. The aptamer is the most conserved domain of the riboswitch and is involved in the binding of a specific cellular metabolite. The second domain, varying widely in sequence and structure, is the expression platform, which modulates gene expression mostly by altering the mRNA structure. Among the smallest riboswitches known to date, the purine-specific class comprises the adenine and the guanine riboswitches which are remarkably similar but exhibit a very high specificity and affinity toward their cognate ligands, adenine and guanine, respectively [16,17]. Although guanine riboswitches negatively regulate expression by attenuating transcription [18], adenine-specific switches activate expression at the level of transcription [19], and presumably also at the level of translation [20]. For instance, while the Bacillus subtilis (B. subtilis) pbuE adenine riboswitch was predicted to control gene expression by modulating the formation of a transcription attenuator, the Vibrio vulnificus (V. vulnificus) add riboswitch was anticipated to modulate the expression by controlling the formation of a translation sequestrator ( Figure 1A and 1B). Notably, previous work suggested that the add and pbuE adenine riboswitches behave differently in their ligand binding properties [20][21][22], suggesting that they possess significant differences in their respective mechanisms.
Most riboswitch studies are carried out in vitro by using renatured RNA molecules obtained from T7 RNA polymerase (RNAP) transcription systems. In various cases, however, transcription renaturation of RNA molecules is much longer in vitro than in vivo, suggesting that the transcription process dictates the RNA folding pathway and kinetic traps [23][24][25][26][27]. Indeed, during transcription elongation, because the upstream RNA section folds first, this will influence the folding pathway of the downstream RNA section [23]. Recently, the transcription process was shown to have an important role for the regulatory activity of an FMNresponsive riboswitch from B. subtilis [28]. In this elegant work, Breaker and coworkers observed that the riboswitch and the FMN ligand do not achieve thermodynamic equilibrium by the time the RNA polymerase reaches the decision point between transcription elongation or termination [28]. This indicates why higher FMN concentrations are required to trigger riboswitch regulation (T 50 ) relative to the dissociation constant (K D ). Because this mode of regulation primarily relies on the rates of ligand binding and riboswitch transcription, it was concluded that the riboswitch operates under a kinetic regime. Additional factors such as transcriptional pause sites were also observed to provide more time for the ligand to bind before the genetic decision is made. This is in contrast to a riboswitch operating under a thermodynamic regime in which the time needed to attain an RNA-ligand equilibrium is short compared to the transcriptional time scale and where the K D should be determinant for riboswitch activation. In principle, a ''mixed regime'' differing from a purely kinetic or thermodynamic regime may occur depending on cellular conditions [21]. For example, in the context of a normally kinetically-driven riboswitch, a change in cellular conditions favoring slower transcription (i.e., low NTP concentrations) could provide more time for the ligand to bind to the aptamer resulting in lower ligand concentrations to trigger riboswitch activity. As a result, this riboswitch would exhibit a more thermodynamic character in its regulation regime and thus the attribution of the riboswitch regulation regime (kinetic vs thermodynamic) should be achieved by taking in account biochemical parameters as well the cellular context.
Because there are currently few examples of riboswitch regulation mechanism that have been characterized, it is crucial to examine in details the mechanisms of additional riboswitch representatives.
Herein we report the characterization and the comparison of the regulatory mechanisms of add and pbuE adenine riboswitches. Even though they bind the same ligand, we find that both add and pbuE representatives employ different regulation mechanisms to positively modulate gene expression. Our results with the add riboswitch are consistent with a thermodynamic model in which ligand binding and riboswitch regulation can occur posttranscriptionally, and where transcription-translation coupling is not required for efficient genetic control. In contrast, a kinetic regime is proposed for the pbuE riboswitch that needs to fold and to bind adenine co-transcriptionally to activate transcription. Our results show that the pbuE riboswitch regulation not only depends on the transcription elongation rate and transcriptional pausing but on the NusA elongation factor too. NusA positively affects riboswitch regulation most probably by reducing the transcription rate. Our findings provide the first evidences suggesting that transcriptional and translational riboswitches exhibit mechanistic regulatory differences.
Determination of the transcriptional start sites of add and pbuE riboswitch mRNAs
Previous studies on adenine riboswitches were performed using truncated versions of either the aptamer or the riboswitch domains, due to lack of information on promoter locations. Because this could lead to biased results, we determined the transcriptional +1 by performing primer extension analyses for both the add and the pbuE transcripts. Total RNAs were extracted either from V. vulnificus or B. subtilis and the +1 transcription start sites were determined for add and pbuE mRNAs, respectively ( Figure S1A and S1B). The deduced RNA sequences differ from previously used truncated versions and new numbering nomenclature taking into account these variations were thus employed ( Figure 1A and 1B). The newly determined transcription start site of the pbuE riboswitch differs from a previous report, which may result from different cellular genetic background [29].
Adenine riboswitches show distinct ligand binding properties
The fluorescent nucleobase 2-aminopurine (2AP) is strongly quenched when stacked upon adjacent nucleotides indicating changes in its immediate environment [20][21][22][30][31][32][33][34][35][36][37]. The adenine riboswitch recognizes both 2AP and adenine in a similar manner, as previously shown using in-line probing and ßgalactosidase assays [22]. To investigate the ligand binding activity of the add and pbuE riboswitches, we took advantage of 2AP fluorescence to monitor the RNA-ligand interaction occurring in both riboswitches, using either the aptamer or the complete riboswitch (aptamer and platform). The fluorescence intensity of 2AP (at 50 nM) was first measured as a function of increasing concentrations of the add aptamer (from 0 to 5 mM). As shown in Figure 1C (insert), the 2AP fluorescence signal progressively decreased until near total quenching at 5 mM aptamer. The fluorescence data were well-fitted by a simple two-state binding model and an apparent dissociation constant K Dapp of 115615 nM was obtained. Furthermore, when the same experiment was repeated using the complete riboswitch sequence ( Figure 1C), a very similar K Dapp of 15667 nM was obtained. This indicates that, for add, both the aptamer and the riboswitch
Author Summary
Bacterial genetic regulation is mostly performed at the levels of transcription and translation. Recently discovered riboswitches are RNA molecules located in untranslated regions of messenger RNAs that modulate the expression of genes involved in the transport and metabolism of small metabolites. Several riboswitches have recently been shown to employ various regulation mechanisms, but no general rules have yet been deduced from these studies. Here, we have analyzed two adenine-sensing riboswitches of Bacillus subtilis and Vibrio vulnificus that differ by the level at which they control gene expression, which is transcription and translation, respectively. We find that, beyond the obvious difference in transcriptional and translational modulation, riboswitch regulation mechanisms of both adenine riboswitches are fundamentally different. For instance, while the adenine riboswitch from B. subtilis performs co-transcriptional binding for gene regulation, the riboswitch from V. vulnificus relies on reversible ligand binding to achieve gene regulation during mRNA translation. In agreement with co-transcriptional binding of the B. subtilis riboswitch, we find that transcriptional pausing is crucial for gene regulation. Our results suggest that the nature of gene regulation control, that is transcription or translation, may have a high importance in riboswitch regulatory mechanisms. bind equally well to the ligand, which is consistent with previous results obtained using truncated RNA molecules [20,30].
The ligand binding activity of the pbuE adenine riboswitch was also monitored using 2AP assays ( Figure 1D). Upon titrating the pbuE aptamer, a very efficient 2AP binding similar to that of add was observed (K Dapp of 518627 nM). In contrast to the aptamer, the pbuE riboswitch had very little effect on 2AP fluorescence, indicating that the expression platform has a negative influence on the ligand binding activity. These results are in agreement with previous fluorescence and probing data obtained using truncated riboswitch molecules [20][21][22], indicating that the natural pbuE adenine riboswitch inefficiently binds the ligand in vitro.
The add riboswitch exhibits structural changes upon ligand binding
Our data indicate that the add and pbuE riboswitches do not share similar ligand binding properties ( Figure 1C and 1D). Indeed, in contrast to the pbuE riboswitch, both the add aptamer and riboswitch sequences perform ligand binding in vitro, suggesting that the unbound add riboswitch is competent to bind adenine and to fold upon ligand binding ( Figure 2A). However, an alternative explanation is that the in vitro add structure is intrinsically adopting the ON structure even in absence of the ligand, which would favor constitutive ligand binding. To investigate whether the add riboswitch acts as a reversible switch and undergoes secondary structure rearrangement upon ligand binding, we used selective 29hydroxyl acylation analyzed by primer extension (SHAPE) to provide information about the folding of the riboswitch [38]. This technique is particularly useful to discriminate local nucleotide flexibility, where 29OH groups are more reactive in flexible regions to electrophiles like N-methylisatoic anhydride (NMIA). When subjected to NMIA reaction in absence of adenine, the add riboswitch showed modifications throughout the entire riboswitch sequence where various single-stranded regions were reactive ( Figure S2). Upon addition of adenine, clear changes were observed both in the aptamer and in the expression platform, where the Shine Dalgarno (SD) and AUG start codon were more reactive to NMIA ( Figure 2B and Figure S2). Thus, these changes are consistent with the ligand-dependent increased accessibility of both the SD and the AUG codon sequences, which is required for the translation activation to take place.
Because ligand binding reorganizes the riboswitch secondary structure, we speculated that the riboswitch conformation would be important for ligand binding. To test this, we introduced mutations in add to favor either the OFF or ON state ( Figure 1B) and monitored the 2AP binding activity ( Figure 2C). When we interconverted the 59 and 39 P1 stem sequences to prevent the formation of the sequestering stem while still allowing P1 stem formation (ON state mutant), we measured a 2AP binding activity (K Dapp of 458633 nM) very similar to the value obtained for the wild-type add riboswitch. However, when the 59 sequence of the P1 stem was mutated to prevent P1 formation, thus making an OFF state conformer, very little binding activity was observed ( Figure 2C). Because the mutated sequences are not directly involved in ligand binding, these results demonstrate that the add riboswitch binding is dependent on the adoption of the ON state. The ligand-dependent structural change of the add riboswitch was further characterized using a partial nuclease digestion assay using the single-stranded guanine-specific ribonuclease T1 ( Figure 2D and Figure S3). A partial RNase T1 cleavage assay was first performed on the natural add riboswitch as a function of magnesium ions and adenine, where the increased exposition of the SD sequence (G104-G109 region) could be observed as a function of both magnesium ions and adenine ( Figure 2D). Nuclease reaction sites were also determined for the complete riboswitch sequence which agreed well with previously obtained data ( Figure S3) [22]. The comparison of the cleavage pattern between the wild-type and the two mutants confirmed that the wild-type riboswitch can readily switch from the OFF to the ON state upon adenine binding, consistent with our SHAPE data ( Figure 2B). Together, our results show that the add riboswitch binds adenine in vitro and undergoes structural changes that are consistent with the SD and the start codon sequences being more accessible in the ligand-bound state.
The add riboswitch performs translational control in vitro
Our data demonstrate that the add riboswitch changes conformation in vitro upon ligand binding. To further investigate the riboswitch gene regulation mechanism, we developed an in vitro translation assay using different constructs. We hypothesized that if add undergoes structural changes upon ligand binding ( Figure 2B and 2D), it should be able to efficiently control translation initiation as a function of adenine. In bacteria, the processes of transcription and translation are coupled events so that most bacterial genes initiate translation soon after the SD sequence has been transcribed. Because the add riboswitch regulates gene expression at the translational level, we investigated whether the transcription-translation coupling was important in the regulation control. We thus developed an in vitro translation assay where the coupling between transcription and translation is either allowed or disrupted by using a DNA or mRNA template, respectively. When performing in vitro translation assays where transcription and translation are coupled, the presence of adenine increased the level of synthesized protein by 2-fold after 15 minutes ( Figure 3A, lower panel). In addition, when the add riboswitch was transcribed before translation ( Figure 3A, upper panel), such that transcription and translation are uncoupled, the addition of adenine increased by 3fold the expression of the protein. We then used both ON and OFF riboswitch mutants to demonstrate that the riboswitch conformations are responsible for the adenine-dependent modulation of translation. When we disrupted the P1 stem of the riboswitch (OFF state mutant), the level of synthesized protein the folding of a terminator structure that promotes premature transcription termination while the presence of adenine favors the ON state and antitermination. Outlined letters represent nucleotides that are involved in the formation of both the terminator and the aptamer structures. The nucleotide numbering is derived from the present study. (B) Secondary structures of the add riboswitch associated with the ON and OFF states. The presence of adenine promotes translational activation. The Shine-Dalgarno (SD) and the AUG initiation codon are both boxed. Mutants used in this study are indicated in rounded rectangles. ON and OFF state mutants are indicated. The P1-39 mutant corresponds to the wild-type P1 stem but in which the 39 strand was mutated for the sequence corresponding to the ON state mutant. The nucleotide numbering is derived from this study. (C) Normalized 2AP fluorescence intensity plotted as a function of add aptamer (circles) and riboswitch (triangles) molecules. Changes in fluorescence (dF) were normalized to the maximum fluorescence (F) measured in the absence of RNA. Lines show the best fit to a simple binding model, yielding K Dapp of 115615 nM and 15667 nM for the aptamer and the riboswitch, respectively. The insert shows fluorescence emission spectra for each add aptamer concentration. The indicated line represents 2AP fluorescence in absence of RNA. (D) Normalized 2AP fluorescence intensity plotted as a function of pbuE aptamer (circles) and riboswitch (triangles) molecules. An apparent binding affinity (K Dapp ) of 518627 nM was calculated for the pbuE aptamer. No value was determined for the riboswitch due to the absence of significant fluorescence change. doi:10.1371/journal.pgen.1001278.g001 became barely detectable ( Figure 3B). However, when we interchanged both strands of P1 to prevent the formation of the SD sequestering stem (ON state mutant), the translation was constitutively activated ( Figure 3B), and this independently of adenine. Taken together, these results indicate that the add riboswitch controls translation through conformational changes and that the coupling between transcription and translation is not required to efficiently perform gene regulation.
The add riboswitch performs translational control in vivo
Although the add riboswitch has been characterized in vitro, no in vivo data are available to assess the riboswitch regulation mechanism. To address this, we engineered transcriptional and translational constructs of the add riboswitch fused to the reporter gene lacZ in Escherichia coli (E. coli). Using primer extension assays, we confirmed that the transcription start site of our constructs in E. coli is identical to that of V. vulnificus ( Figure S1).
We tested our constructs by growing cells containing wild-type riboswitches in minimal medium in absence and in presence of adenine. As seen in Figure 3C, the addition of adenine had no significant effect on the transcriptional fusion, which indicates that the transcript level was not affected by adenine. In contrast, the bgalactosidase activity of the translational fusion was increased by 3fold by the addition of adenine, in agreement with the adenine- Digestions were also done for the ON and OFF state mutants. Lanes N and L represent samples that were not reacted and that were subjected to partial alkaline digestion, respectively. Nuclease digestions were performed as a function of 10 mM magnesium ions and 10 mM adenine. Substantial cleavage sites are indicated on the right. The complete gel is shown in Figure S3. doi:10.1371/journal.pgen.1001278.g002 dependent translational activation mechanism of the add riboswitch. We also determined that the optimal concentration of adenine needed for translation activation is 500 mM (data not shown). Because the intracellular adenine concentration in bacteria is around 1.5 mM [39], this suggests that adenine does not penetrate the cell freely, or is rapidly used by the metabolism to reach homeostasis.
Additionally, we generated constructs of the add riboswitch translational fusion to confirm in vivo our fluorescence and probing data obtained in vitro. We first mutated two nucleotides in the loop L2 (G31C/G32C, Figure 1B) to prevent the formation of the looploop interaction. This interaction is critical for the folding of the riboswitch in presence of adenine [22]. As shown in Figure 3D, this mutation makes the riboswitch non-responsive to adenine. We then mutated both strands of the P1 stem independently or additionally to shift the equilibrium toward ON or OFF states. For instance, when the 59 strand of the P1 stem was mutated to prevent P1 stem formation (OFF state mutant), all activity of the translational fusion was lost, suggesting that the sequestering stem is formed constitutively in absence of P1. On the other hand, when we mutated the 39 strand of P1 to disrupt both the P1 and sequestering stems (P1-39 mutant), we reestablished the same basal level than in the wild-type fusion, but lost the adenine effect on translation. Finally, by swapping the sequence of the two strands of P1 to favor the formation of the ON state mutant, we observed a constitutively active translation suggesting that the presence of the P1 stem correlates with translation activation. The results obtained with these mutants confirm that the add riboswitch controls translation in vivo by sequestering the SD region specifically under low adenine concentration. Overall, although riboswitch-driven genetic repression was previously demonstrated for several riboswitches [4,18,[40][41][42][43][44][45], our study provides novel insights about ligand-dependent translation activation in vivo occurring through the action of a riboswitch.
The transcription process is important for the pbuE riboswitch ligand binding activity Although most riboswitches can bind their ligand in vitro, we and others have reported that the pbuE riboswitch exhibits very poor adenine binding activity ( Figure 1D) [20][21][22]. Nevertheless, the pbuE adenine riboswitch can modulate the expression of a lacZ reporter gene [19]. Thus, we speculated that the in vivo transcriptional context might be essential for the ligand binding activity of the pbuE riboswitch. To investigate this, we developed an in vitro assay where full-length transcription depends on the binding of the ligand. However, in the absence of the ligand, a prematurely terminated transcript should be produced. Since 2,6diaminopurine (DAP) has been recently crystallized in complex with an adenine riboswitch aptamer [32], showing a very similar structure compared to the adenine:aptamer complex [17], we used DAP which exhibits ,30-fold higher affinity compared to adenine [19]. The assay was performed using single-round transcription reactions [28,46], which were carried out by using B. subtilis RNA polymerase (RNAP) in presence of increasing concentrations of the ligand. As shown in Figure 4A, a significant increase of readthrough transcripts was observed as a function of DAP concentration, which occurred concomitantly with the reduction of the prematurely terminated transcript. The fraction of read- through transcript for each reaction was calculated and the DAP concentration required to obtain half of the change in transcription elongation, defined as T 50 [28], was determined to be 2.160.2 mM ( Figure 4A). These results suggest that the pbuE adenine riboswitch requires a transcriptional context to efficiently bind DAP.
Next, to establish whether the riboswitch activity is polymerase dependent, we repeated the experiment using the E. coli RNAP and obtained a T 50 value of 0.560.1 mM. Moreover, when we substituted DAP for the natural ligand adenine, we observed a transcription modulation that was characterized by a higher value of T 50 (2.360.3 mM), consistent with the lower affinity of adenine for the riboswitch aptamer ( Figure 4B, insert) [19]. No liganddependent transcription modulation was observed when using the bacteriophage T7 RNA polymerase (data not shown), suggesting that specific elements to bacterial polymerase (e.g., pause sites) may be important for riboswitch activity. Together, these results show that the pbuE riboswitch depends on transcription to perform ligand binding.
Transcription elongation depends on adenine-related ligands and requires aptamer stabilization
It has been previously shown that the adenine aptamer exhibits efficient ligand binding in presence of adenine, 2AP and DAP, but not with guanine-related compounds [19]. In presence of 10 mM ligand, efficient transcription readthrough was observed for adenine, 2AP and DAP, the latter resulting in the highest transcription readthrough (57%, as shown in Figure 4C). However, hypoxanthine and guanine failed to support transcription readthrough, also consistent with in-line probing data, showing their inefficiency to bind the adenine riboswitch aptamer domain [19]. Our results indicate that transcription readthrough is only observed in presence of ligands known to bind the adenine riboswitch aptamer, suggesting that transcription elongation is achieved via a riboswitch-mediated control mechanism. The inability of the pbuE riboswitch to efficiently bind adenine posttranscriptionally was suggested to result from the formation of a highly stable terminator stem ( Figure 1D) [20][21][22]. Accordingly, we speculated that the binding of the ligand to the aptamer domain stabilizes the aptamer structure and prevents formation of the terminator. We thus carried out thermal denaturation experiments (TDE) of the pbuE aptamer to determine whether ligand-binding induces aptamer stabilization. TDE monitors the heat-induced unfolding of the RNA as a function of temperature by observing absorbance changes [47]. Using TDE, we followed the absorption of the aptamer at 258 nm in absence and in presence of DAP ( Figure 4D). After normalizing the data, melting temperatures corresponding to half change in absorbance were determined to be ,53uC and ,60uC in absence and presence of the ligand, respectively. Thus, our results are consistent with the idea that ligand binding to the RNA promotes aptamer stabilization, which is in agreement with previous studies on the pbuE aptamer using optical-trapping assays [48]. The aptamer stabilization is central for the disruption of the highly stable terminator stem and for the transcription elongation.
Since our results show that ligand binding promotes pbuE transcription elongation and thermal stability of the riboswitch aptamer ( Figure 4C and 4D), we carried out in vitro transcription experiments to determine to which extent aptamer stabilization is important for the riboswitch regulation ( Figure 4E). To do so, we modulated the stability of the P1 stem that is directly involved in the switching mechanism ( Figure 1A). By altering the sequence located to the 59 side of the P1 stem as a way to extend the P1 stem by 2, 3 and 5 bp, we determined transcription readthrough efficiencies of 17%, 44% and 78%, respectively ( Figure 4E). These results indicate that transcription elongation can be modulated solely by altering the stability of the P1 stem, suggesting that aptamer-ligand interactions are not strictly required for transcription elongation. Even higher readthrough efficiencies were observed when the P1 stem was extended by 8 and 10 bp, suggesting that the ON conformer was further stabilized ( Figure 4E).
The transcriptional control of the pbuE riboswitch was also studied by removing base pairs from the P1 stem ( Figure 4F). By destabilizing the latter, it is predicted that the OFF conformer is favored, which should inhibit the ligand-induced production of the readthrough transcript. While the removal of 1 bp did not significantly alter the transcription control, the removal of 2 or 3 bp completely abolished the ligand-induced response ( Figure 4F), suggesting that a P1 stem of at least 4 bp is required for efficient riboswitch regulation. These results correlate well with our previous study showing that aptamers with reduced P1 stem do not exhibit efficient ligand binding activity [30].
The pbuE riboswitch regulation is consistent with a kinetic regime It has been previously hypothesized that the pbuE adenine riboswitch is driven by a kinetic regulation mechanism, in which not only ligand binding but also RNAP transcription rates are important to establish the riboswitch activity [21]. Under this control regime, it is expected that ligand binding is highly dependent on a ''temporal window'' defined by the RNAP sequence position. For instance, while RNAP must have transcribed the aptamer region for ligand binding, the presence of the downstream terminator domain strongly precludes this binding [20][21][22]. Thus, it is expected that further transcription of the terminator sequence should reduce ligand binding. If this is true, high transcription rates should reduce ligand binding, and inversely, low transcription rates should improve ligand binding. To determine whether the pbuE riboswitch operates under such a kinetic regime, we conducted single-round transcription assays in presence of various rNTP concentrations and monitored transcriptional control using a range of DAP concentrations. When analyzing transcription reactions performed using rNTP concentrations of 20 and 150 mM, we calculated T 50 values of 0.860.1 mM and 1.960.3 mM, respectively ( Figure 5A), showing that the rNTP concentration is proportional to the ligand concentration (T 50 ) required to trigger riboswitch activity. In addition, a ,2-fold decrease in the transcription readthrough was observed in presence of 20 mM rNTP, consistent with the influence of the latter on transcription termination [49]. We observed no difference for experiments performed at 65 mM or 150 mM rNTP, most probably because transcription rates are optimal at 65 mM. However, clear differences in T 50 were observed when using the E. coli RNAP, which is consistent with transcription elongation being modulated by altering rNTP concentrations ( Figure S4). Thus, our results support the idea that RNAP transcription rates can influence riboswitch activity, in agreement with a kinetically-driven regulation regime as determined for the FMN riboswitch [28].
Transcriptional pausing has been previously shown to be important in the transcriptional folding of ribozymes [23][24][25], and also for riboswitch activity [28,50]. To determine potential transcriptional pause sites on the pbuE riboswitch, we performed transcription time courses ( Figure 5B and Figure S5). We observed prominent transcriptional intermediates paused in the region U114-U117, which largely disappeared over incubation time and chase reaction ( Figure 5B, upper panel, and Figure S5).
Our data show a pause lifetime of ,60 s (see Materials and Methods), which is similar to what has been found for the FMN riboswitch [28]. Notably, we observed a very similar pause site (U117) and half-life when using E. coli RNAP ( Figure 5B, lower panel). The identified pause site corroborates a region (U110-U115) that was previously speculated to be part of a pause site [21]. To further investigate this pause site, we introduced mutations in the terminator domain that did not alter the base pairing potential of the stem but that modified the pause site sequence ( Figure 5C). When performing in vitro transcription using the A95U:U113A mutant, we found that a higher ligand concentration was required to activate the pbuE riboswitch (T 50 = 2.660.1 mM) and that the lifetime was decreased to ,22 s. These results are consistent with the hypothesis that the disruption of the pause site decreases the time for the ligand to bind to the riboswitch, which resulted in a greater ligand concentration to promote riboswitch activity. In addition, the extent of transcription elongation was also decreased by a factor of ,2-fold, consistent with a faster transcription rate reducing ligand binding and transcription elongation ( Figure 5C).
The transcription elongation factor NusA positively affects pbuE riboswitch regulation
The transcription factor NusA is an RNA-binding protein known to modulate termination by increasing the E. coli RNAP pausing time and to reduce the rate of transcription [23,51]. It has also been shown to assist the FMN riboswitch activity by reducing the transcription rate [28]. To verify if NusA could affect the pbuE riboswitch regulation mechanism, we performed in vitro transcription kinetics in absence and in presence of the B. subtilis NusA and observed a significant decrease in transcription rate in presence of NusA (,15 s difference for significant full length formation, see Figure 5D, insert). When performing transcription reactions as a function of DAP concentrations, the ligand requirement was decreased in presence of NusA where values of T 50 of 2.160.7 mM and 1.160.3 mM were obtained in absence and in presence of NusA, respectively ( Figure 5D). No significant change of lifetime at pause sites was observed in presence of NusA. This suggests that, at least in our experimental conditions, NusA modulates riboswitch activity by decreasing the general transcription reaction [23].
Discussion
Our study on the regulation mechanisms of adenine riboswitches has demonstrated two major findings. First, we show that although the transcriptionally-and translationally-acting pbuE and add adenine riboswitches recognize the same ligand, they use different regulatory mechanisms to modulate gene expression. Second, we provide biochemical evidence about both the structural reversibility and the non-requirement of a transcription-translation coupling for the translationally-acting add riboswitch, all of which are consistent with a thermodynamic control. This regulation mechanism is in contrast to what we have observed for the transcriptionally-regulating pbuE riboswitch, which performs co-transcriptional binding and exhibits similar features to those found in kinetically-controlled riboswitches.
The translationally-regulating add riboswitch is a true riboswitch regulator
Our analysis of the add riboswitch confirms and extends prior reports suggesting that add undergoes structural changes to control translation initiation [20,52]. However, a very different view emerges for the add riboswitch regulation regime when compared to the pbuE riboswitch for which our results suggest a kinetic regime. For instance, 2AP fluorescence showed that the presence of the add expression platform does not inhibit ligand binding ( Figure 1C). Also, structural probing and mutagenesis studies of the add riboswitch revealed that the Shine-Dalgarno and the AUG codon sequences are more accessible in presence of adenine ( Figure 2). These results are consistent with add modulating translation initiation in a ligand-dependent manner, which has been observed both in vitro and in vivo (Figure 3). Moreover, given that adenine can bind to the add riboswitch post-transcriptionally, it suggests that, unlike the pbuE variant, add is a reversible switch that can adopt either the OFF or ON structure at the equilibrium. A secondary structure analysis performed using the program mfold predicts similar free energies of 223.8 kcal/mol and 223.1 kcal/ mol for the OFF and ON structures, respectively. This supports the idea that the add riboswitch can fluctuate readily between the ON and OFF states when compared to pbuE [22]. The structural reversibility of add is consistent with the riboswitch activity not requiring a coupling between transcription and translation. Indeed, both the structural reversibility and the absence of coupling for riboswitch regulation suggest that add may benefit of an extended time compared to pbuE because ligand binding can occur post-transcriptionally (after the riboswitch portion of the mRNA is transcribed). For the add riboswitch to operate under a purely dictating thermodynamic regime, it would require that the T 50 value approximates the K D of the riboswitch-ligand complex. It may be difficult to ascertain whether this is the case as it would demand to determine the K D of the riboswitch-adenine complex in vivo. However, the quantitative determination of metabolite concentration in vivo is becoming increasingly widely applied to map relative concentration changes induced by environmental alterations [39]. Nevertheless, our results are consistent with add operating under a thermodynamic regime that may exhibit a mixed character depending of various cellular conditions, such as the concentration of adenine [21]. The intracellular ligand concentrations of kinetically-controlled riboswitches are typically significantly higher than K D values of corresponding riboswitchligand complexes. As observed for the pbuE adenine riboswitch, the adenine concentration in B. subtilis was found to be ,30 mM [53]. Although the adenine concentration is not known in V. vulnificus, it is possible to provide an estimation about its intracellular concentration (,1.5 mM) if we consider that a similar adenine concentration exists in both gammaproteobacteria V. vulnificus and E. coli [39]. In such a case, the add riboswitch could be in presence of adenine concentrations in the low micromolar range which would be closer to the determined K D values [20,21,30,32], providing an additional indication about the thermodynamic character of the add riboswitch regulation regime. Together with these observations, the structural reversibility of the riboswitch and the absence of coupling requirement for the transcriptiontranslation suggest that the add riboswitch regulates translation initiation using a thermodynamic regime that may be modulated depending on cellular conditions. Other translationally-regulating riboswitches responding to S-adenosylmethionine (SAM), adenosylcobalamin, thiamine pyrophosphate (TPP) and riboflavin have also been shown to exhibit ligand binding, structural rearrangement, and ribosome binding in vitro [44,45,[54][55][56][57][58][59]. Whether all translationally-regulating ligand-binding riboswitches need to benefit from a thermodynamic regime to regulate gene expression will require further investigation.
Additional findings provided indications that other cellular players could be involved in bacterial translation initiation control. Recently, Burmann et al. have observed using NMR spectroscopy that the transcription factor NusG can associate alternatively with NusE or Rho [60]. Interestingly, because NusG contacts RNAP and that NusE is identical to the ribosomal protein S10, it was concluded that NusG may act as a functional link between transcription and translation. In a cellular context where translation initiation is inhibited, NusG is expected to be available to interact with Rho, which stimulates a Rho-dependent transcription termination. However, in conditions promoting translation initiation, the formation of a NusG-NusE complex should prevent Rho binding and transcription antitermination is predicted to take place. The molecular details about these mechanisms and to which extent they are linked to riboswitch translation initiation regulation will be a fertile subject for future research.
The pbuE riboswitch operates under a kinetic regime
In contrast to what we have observed for add, the pbuE aptamer shows a remarkable inefficiency to bind 2AP in presence of the expression platform ( Figure 1D) [20][21][22]. However, in vitro ligand binding is attained when occurring in a transcriptional context ( Figure 4A and 4B). While not as drastic as our observations with pbuE, ligand-binding inhibitory effects have often been observed for transcriptionally-regulating riboswitches when in presence of their expression platform, suggesting that the transcriptional context is critical for the ligand binding activity of some riboswitches [22,28,40,41]. It is likely that the additional sequence downstream of the aptamer allows structures that are incompatible with ligand binding. Accordingly, the presence of Rho-independent terminators, which are GC-rich helical domains, may disrupt ligand-binding structures as seen for the pbuE adenine riboswitch [20][21][22]. Moreover, in the case of negative regulation such as the FMN riboswitch, the antiterminator domain must be very stable as it competes with both the terminator structure and the aptamer domain. However, during transcription, because the aptamer domain is synthesized before the terminator, it may fold without competing with the terminator, which allows ligand binding. This is indeed what is observed in the cases of pbuE and ribD riboswitches. Because riboswitch folding is strongly influenced by the transcriptional process, co-transcriptional ligand binding is critical for the ''genetic decision'' to take place. More work will be required to draw general rules about riboswitch transcription regulation mechanisms and will very likely reveal additional factors allowing greater flexibility in riboswitch genetic regulation.
Recently, ligand binding parameters were studied for riboswitches responding to c-di-GMP and preQ 1 where additional regulation strategies were characterized [61][62][63]. For instance, the transcriptionally-regulating PreQ 1 riboswitch was found to exhibit two different coexisting stem-loop structures in the expression platform [61]. Upon PreQ 1 binding to the riboswitch, it was observed that the equilibrium of the competing hairpins becomes significantly altered. By studying the riboswitch mechanism, the authors provided a model for how a riboswitch presenting no obvious overlap between aptamer and terminator domains may regulate gene expression by employing bistable sequence elements [61]. In addition, the structural basis of ligand binding by a c-di-GMP riboswitch was obtained [62,63]. It was found that the affinity of the complex is very strong, exhibiting a K D of ,10 pM [63]. When comparing to the adenine and FMN riboswitches, it was observed that although the on-rate of c-di-GMP is similar to that of both riboswitches, the off-rate is significantly slower by ,5 orders of magnitude [63]. Thus, it is expected that the complex has a very slow approach to equilibrium so that ligand binding is effectively irreversible on a biological time scale, consistent with the riboswitch operating under a kinetic regulation regime. Thus, together with previously characterized riboswitches, these findings show that riboswitches may use various regulation strategies to achieve gene modulation.
In the present study, our findings are consistent with pbuE operating under a kinetic regime where the rate of ligand binding, more than the dissociation constant itself, is crucial in the decision of the transcription outcome. Our results corroborate with previous findings showing that fine-tuning of the transcription elongation is central not only for riboswitch regulation, but for RNA folding in general. As such, transcription is central for the Tetrahymena group I intron folding and splicing and for the bacterial RNase P RNA catalytic activity [64]. Because of the uneven elongation rate of the RNAP, which is modulated by several factors such as pause sites and transcription factors, many RNA structures will accumulate even though they are not the most optimally stable. For instance, pause sites in riboswitches play important roles in the case of the FMN riboswitch [28], and more recently for a pH-responsive riboregulator [50]. In both cases, the presence of the NusA transcription factor helps to decrease the polymerase rate, which increases the decision time frame. Additionally, both the pbuE adenine and FMN riboswitches demonstrate similarities as they carry a pause site in a U-rich sequence that is important for the riboswitch regulation ( Figure 5B and 5C). Even though pbuE and FMN riboswitches possess a pause site, NusA does not appear to increase RNAP pausing. Instead, NusA generally reduces the rate of transcription, as previously observed in the case of the RNase P [23]. In principle, transcriptional pausing can be affected by cellular conditions to yield longer pause lifetimes. Because of this, lower adenine concentrations may be required to bind the aptamer and to trigger gene regulation. Thus, because changes in cellular conditions are inherently involved in the riboswitch regulation regime, it is likely that the pbuE riboswitch regime may exhibit a more thermodynamic character depending upon transcription time scale, rate of ligand binding and protein factors involved at the transcriptional level [21].
General considerations for riboswitch regulation mechanisms
While they bind to the same ligand, transcriptionally-and translationally-regulating adenine riboswitches employ regulatory mechanisms that differ in several aspects ( Figure 6). For instance, although the add riboswitch could in principle bind adenine only during transcription, and could require coupling between transcription and translation, it appears as we have found here that this is not the case ( Figure 3A). To occur after transcription, ligand binding requires similar free energies for the adoption of both ON and OFF state structures, and may thus impose an additional selection pressure on the riboswitch. Similar free energies for both structures also suggest that such regulatory systems are more prone to ''leakiness'' from their OFF state, i.e., adoption of the ON state even in absence of adenine. Leakiness from the pbuE riboswitch is expected to be very low given the presence of a very stable terminator structure [22], which is important given that the regulated gene is a purine efflux pump [53,65]. Because the terminator stem is highly stable and disrupts the aptamer domain in absence of adenine [22], it is thus important that the riboswitch binds the ligand in a cotranscriptional manner (Figure 6), as this allows the binding to occur before the terminator is transcribed. The formation of the RNA-ligand complex increases the stability of aptamer domain and this process is aided by the presence of a pause site in the expression platform that gives more time for the ligand to bind. Thus, transcriptionally-regulating adenine riboswitches may rely on different regulatory mechanisms, compared to translationallyregulating ones, to increase the ''window of decision'' for gene regulation. However, because reversible switches such as add can perform ligand binding post-transcriptionally, there is no obvious need for these switches to contain pause sites such as found in pbuE ( Figure 6). Given that add operates at the level of translation, it will be important to experimentally determine if such a structural reversibility is a pre-requisite to control ribosome accessibility, and if so, to assess whether a non-required coupling between transcription and translation is always needed. Additional work will be required to determine if our findings may be expanded to other classes of ligand-dependent bacterial riboswitches.
Strains
Derivatives of E. coli MG1655 were used in all experiments. DH5a strain was used for cloning procedures. Transcriptional and translational fusions of V. vulnificus add gene were constructed by inserting a PCR product (using chromosomal DNA of V. vulnificus YJ016 as template) into the plasmids pFRD [66] and pRS1551 [67], respectively. Table 1 describes the oligonucleotides used in this study. PCR fragment containing 2237 to +18 relative to add start codon (oligonucleotides EM620-EM621) was digested by EcoRI and BamHI and ligated into EcoRI/BamHI digested pFRD and pRS1551 to generate the transcriptional add'-lacZ and translational add'-'lacZ fusions. Other constructs were generated by the three-step PCR mutagenesis as described previously [68]. Briefly, the add'-lacZ construct was used as a template for two independent PCR reactions. Oligonucleotides EM194-EM644 and EM195-EM643 were used to generate the G31C/G32C construct. Oligonucleotides EM194-EM827 and EM195-EM826 were used to construct the OFF state mutant, and oligonucleotides EM194-EM1012 and EM195-EM1011 were used to generate the P1-39 mutant construct. The two PCR products were mixed to serve as a template for a third PCR reaction using oligonucleotides EM194-EM195. The resulting PCR products were digested by EcoRI and BamHI and ligated into EcoRI/BamHI-digested pFRD and pRS1551. The ON state mutant was generated by adding the Figure 6. Schematic showing proposed regulation mechanisms for add and pbuE adenine riboswitches. Top, regulation mechanism of the add adenine riboswitch. The OFF state is represented with the Shine-Dalgarno (GAA) and AUG start codon sequences base paired in the sequestrator helix. Upon adenine (Ade) binding, the ON state is formed which increases the accessibility of both GAA and AUG sequences. The structural reversibility and the lack of requirement of a transcription-translation coupling for the regulatory activity of the riboswitch are consistent with a thermodynamic regulation regime. Bottom, regulation mechanism of the pbuE adenine riboswitch. In this regulation mechanism, a low intracellular adenine concentration leads to the formation of the OFF state. However, an elevated adenine concentration may co-transcriptionally bind to the riboswitch aptamer on a paused transcription complex, thereby stabilizing the aptamer, which will then ultimately lead to the formation of the ON state and the expression of pbuE. Because adenine binding occurs co-transcriptionally and is largely dependent on the rate of transcription, the regulation mechanism is consistent with a kinetic regime. Aptamers having thick lines represent stabilized complexes in presence of adenine. doi:10.1371/journal.pgen.1001278.g006 P1-39 mutation to the OFF state mutant using the same three-step PCR method. Transcriptional and translational fusions were inserted in single copy into the bacterial chromosome of wild-type strain EM1055 (described in [69]) at the l att site as described previously [67]. Stable lysogens were screened for single insertion of recombinant l by PCR as described [70].
In vitro RNA synthesis RNA was transcribed by the T7 RNA polymerase (Roche, Germany) using a PCR product as a template. Transcription reactions were performed in T7 transcription buffer (40 mM Tris-HCl at pH 8.0, 6 mM MgCl 2 , 10 mM dithiothreitol, 2 mM spermidine), with 400 mM NTPs (A, C, U and G), 20 units RNA guard, 20 units T7 RNA polymerase and 0.5 mg DNA template. After 4 h of incubation at 37uC, the mixture was treated with 2 units of Turbo DNase (Ambion) and extracted once with phenolchloroform. RNA transcripts were purified on denaturing acrylamide gel. The primers used for generating DNA templates for in vitro RNA synthesis were EM821-EM886 (wild-type add) and EM890-E886 (add OFF state and add ON state mutants). To generate the transcription templates, the genomic DNA of EM1005 strains harboring either ladd'-'lacZ, laddOFF'-'lacZ, or laddON'-'lacZ fusions were used as template for PCR reactions. The aptamer sequences used in this study are based on the genomic sequence to which a GCG sequence was added to the 59 side to allow high transcription yield and to minimize 59 heterogeneity [71].
Fluorescence spectroscopy
Fluorescence was performed on a Quanta Master fluorometer. Data were collected at 10uC in 10 mM MgCl 2 , 50 mM Tris-HCl (pH 8.0) and 100 mM KCl. Spectra were corrected for background and intensities were determined by integrating data collected over the range 365-475 nm. 2AP was excited at 300 nm to obtain a good separation between the Raman and the fluorescence peak.
The fraction of quenched 2AP fluorescence was determined by monitoring the fluorescence data using a fixed concentration of 2AP (50 nM). Titrations were performed using an increasing concentration of a given aptamer or riboswitch molecule. Because the total RNA concentration is in large excess relative to 2AP, the binding can be described by the equation where dF and F are the change in fluorescence intensity and the maximum fluorescence intensity in the absence of RNA, respectively. K Dapp is the apparent dissociation constant and the parameter a is a dimensionless constant proportional to the ratio of quantum yields of 2AP in the complex and in free solution [22]. The value of a is obtained with K Dapp by nonlinear least-squares fitting following the Levenberg-Marquardt algorithm and typically corresponds to a value of 0.05. The reported errors are the standard uncertainties of the data from the best-fit theoretical curves. The standard uncertainty of the measurement is thus assumed to be approximated by the standard deviation of the points from the fitted curve [22,72,73]. For each experiment, at least three measurements have been performed and all exhibited very similar uncertainties from best-fit curves.
Partial RNase T1 cleavage
Radioactively 59-labeled RNA was incubated in 50 mM Tris-HCl (pH 8.0) and 100 mM KCl in the presence of MgCl 2 and/or adenine at the indicated concentrations for 5 min at 37uC. RNase T1 (1 U) was allowed to react for 2 min and reactions were quenched by adding an equal volume of a solution of 97% formamide and 10 mM EDTA. Products were separated on a denaturing polyacrylamide gel, which was subsequently dried and exposed to PhosphorImager screens.
SHAPE assays
The SHAPE reaction was prepared using 1 pmol of purified RNA that was resuspended in two volumes of 0.56 TE buffer, in which was added one volume of 3.36 folding buffer containing 333 mM K-HEPES (pH 8.0), 333 mM NaCl, 10 mM MgCl 2 and the indicated concentration of adenine. The samples were heated to 65uC and allowed to cool down to 30uC before being preincubated 10 min at 37uC. N-methylisatoic anhydride (NMIA, Invitrogen) dissolved in dimethyl sulfoxide (DMSO) was then added and allowed to react for 80 min at 37uC. Modified RNA were precipitated, washed with 70% ethanol and resuspended in 0.56 TE buffer. Reverse transcription reactions were performed as previously described [38] and products were separated on a 5% denaturing polyacrylamide gel. Gels were dried and exposed to PhosphorImager screens. The RNA molecule used for SHAPE assays corresponds to nucleotides 1 to 165 of the transcribed mRNA. The region used for the primer corresponds to nucleotides 146 to 165.
Transcriptional and translational b-galactosidase assays
Kinetic assays for b-galactosidase activities were performed as described previously [74] using a SpectraMax 250 microtitre plate reader (Molecular Devices). Briefly, overnight bacterial cultures were incubated in LB media at 37uC and diluted 1000-fold into 50 ml of fresh LB media at 37uC. Cultures were grown with agitation to an OD 600 of 0.3 before adding adenine at the indicated concentrations. Specific b-galactosidase activity was calculated using the formula V max /OD 600 . The reported results represent data of at least three experimental trials. In vitro translation assays Translation reactions were performed in E. coli S30 extracts (Promega, L1030). Coupled and uncoupled reactions were performed using either 25 nM of DNA or 250 nM RNA as template, respectively. Template RNAs were transcribed as described above with T7 RNA polymerase. Reactions contained the following (in a total volume of 15 ml): 0.4 ml [ 35 S] methionine (588 Ci/mmol), 4.5 ml of S30 extract, 6 ml of S30 premix, 0.1 mM of amino acid mix without methionine, and the template. When indicated, adenine was added at a final concentration of 500 mM. Reaction mixtures were incubated at 37uC for 30 min and samples were analyzed on 15% SDS-PAGE, exposed on phosphor screen, and revealed on a Storm 860 (Molecular Dynamics).
Single-round transcription
E. coli RNAP was purchased from Epicentre Biotechnologies. B. subtilis RNAP and B. subtilis NusA were purified as previously described [28,75]. The B. subtilis sigmaA transcription factor was purified as previously described [76]. DNA templates were produced by recursive PCR using oligonucleotides containing the xpt promoter sequence for transcriptions using the E. coli polymerase [77]. The GlyQS promoter was used for transcriptions using the B. subtilis polymerase [78]. A transcription start site was generated 51 nt upstream of the aptamer domain. The construct was engineered to allow transcription initiation using an ApC dinucleotide and a halt at position +1 by omission of CTP, and a readthrough transcript product terminating at 40 nt after the AUG start codon. P1 stem mutants were made by altering the required number of base pair to achieved the indicated P1 stem length. Thus, elongated P1 stem constructs were performed by mutating the P1 stem 59 sequence to complement the corresponding 39 sequence to generate a P1 stem of the indicated length. Shortened P1 stem constructs were made by changing the P1 59 sequence to its Watson-Crick complement so that base pair formation with the P1 39 sequence is inhibited. Single-round transcriptions were performed as previously described [77], using 300 fmol DNA templates and 0.6 mg of either E. coli or B. subtilis RNA polymerase together with an equivalent amount of the sigmaA factor. Transcriptions were initiated in a tube containing 16 transcription buffer including 150 mM ApC, 0.75 mM UTP, 0.25 mM [a-32 P], 2.5 mM of ATP and GTP [28] by incubating at 37uC for 15 min, which was subsequently incubated on ice for 10 min. Transcription elongation was initiated by adding rNTP to a final concentration of 65 mM in presence or absence of ligand at 37uC for 15 min. Heparin (20 mg/mL) was added to prevent transcription re-initiation. The rNTP concentration was decreased to 20 mM when performing time course experiments. Sequencing transcription reactions were performed by including 120 mM of 39-O-methyl-NTP. Reactions were stopped by adding one volume of 95% formamide. Transcription products were resolved using denaturing gel electrophoresis. Gels were dried and exposed to PhosphorImager screens. Experiments have been performed at least three times and all exhibited very similar uncertainties (,10%). The uncertainties on calculated T 50 values were obtained as previously reported [46]. Thermal denaturation 300 nM RNA was incubated in 10 mM MOPS (pH 8.0), 25 mM NaCl and 2 mM MgCl 2 in presence or absence of 10 mM DAP, and was degassed 5 min. Denaturation profiles were obtained at 258 nm using a Shimadzu UV2501 spectrophotometer equipped with a temperature controller. Samples were heated at a rate of ,0.5uC/min over a range of 20uC to 80uC and an average of 3 s was used for each reading. Absorbance data was normalized by subtracting the pre-and post-transition to obtain the proportion of the unfolded state. Data was smoothed over a 3uC range and melting temperature values were determined by evaluating the temperature required to obtain half of the transition of the resulting profiles.
Primer extension
Transcriptional +1 of add and pbuE riboswitches were determined as previously described [74]. Briefly, 40 mg of total RNA were incubated in presence of 2 pmol of radioactively 59labeled DNA oligonucleotides and the reverse transcription reaction was allowed according to the Superscript II protocol (Invitrogen, Burlington, ON). Reactions were precipitated and migrated on denaturing polyacrylamide gels. PCR reactions were used as sequencing markers. Gels were dried and exposed to PhosphorImager screens. Figure S1 Determination of the transcription start sites of add and pbuE mRNA by primer extension. (A) Total RNA extracted from E. coli and V. vulnificus were used in primer extension reactions using the oligo EM794. PCR reactions were used as sequencing markers for determination of the +1 start site of the add mRNA (lanes C, T, A and G). The arrow represents the +1 start site that is identical in both E. coli or V. vulnificus. (B) Total RNA extracted from B. subtilis was used in a primer extension reaction using the oligo X5short. PCR reactions were used as sequencing markers (lanes C, T, A and G). Found at: doi:10.1371/journal.pgen.1001278.s001 (2.66 MB TIF) Figure S2 SHAPE modification of the add riboswitch done in absence (2) or in presence (+) of 10 mM adenine. Left, sequencing reactions are indicated for each nucleotide and positions where NMIA reaction was modified upon adenine-induced folding are indicated on the right. N represents a primer extension performed on unreacted RNA and U, A, C and G represent sequencing reactions. Right, secondary structure summarizing SHAPE data. Protected regions in presence of adenine are highlighted by black circles. Because the resolution of the gel does not allow nucleotide resolution for the region J1/2, protections in this region were not attributed to nucleotide positions. Enhanced reactions in presence of adenine are identified by a star. Overall, our results are consistent with the Shine-Dalgarno (GAA) and AUG start codon sequences being more exposed to the solvent in presence of ligand. Found at: doi:10.1371/journal.pgen.1001278.s002 (4.40 MB TIF) Figure S3 RNase T1 cleavage assay of the add riboswitch showing the structural change of the expression platform in presence of adenine, and in the context of ON and OFF state mutants. Left, lanes N and L represent samples that were not reacted and that were subjected to partial alkaline digestion, respectively. Nuclease digestions were performed as a function of 10 mM magnesium ions and 10 mM adenine. Substantial cleavage sites are indicated on the right. Right, secondary structure summarizing RNase T1 data. Protected regions in presence of adenine are highlighted by black circles. Enhanced reactions in presence of ligand are identified by a star. respectively. The correlation between the T 50 value and the rNTP concentration is consistent with a mechanism in which a higher transcription rate gives less time for the formation of the riboswitch-ligand complex to form, which results in an increased T 50 value. | 2014-10-01T00:00:00.000Z | 2011-01-01T00:00:00.000 | {
"year": 2011,
"sha1": "a415e593be8ac82be1bf7287a010d0d2605ff3a8",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1001278&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a415e593be8ac82be1bf7287a010d0d2605ff3a8",
"s2fieldsofstudy": [
"Biology",
"Engineering"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
14806653 | pes2o/s2orc | v3-fos-license | The Role of GC-Biased Gene Conversion in Shaping the Fastest Evolving Regions of the Human Genome
GC-biased gene conversion (gBGC) is a recombination-associated evolutionary process that accelerates the fixation of guanine or cytosine alleles, regardless of their effects on fitness. gBGC can increase the overall rate of substitutions, a hallmark of positive selection. Many fast-evolving genes and noncoding sequences in the human genome have GC-biased substitution patterns, suggesting that gBGC—in contrast to adaptive processes—may have driven the human changes in these sequences. To investigate this hypothesis, we developed a substitution model for DNA sequence evolution that quantifies the nonlinear interacting effects of selection and gBGC on substitution rates and patterns. Based on this model, we used a series of lineage-specific likelihood ratio tests to evaluate sequence alignments for evidence of changes in mode of selection, action of gBGC, or both. With a false positive rate of less than 5% for individual tests, we found that the majority (76%) of previously identified human accelerated regions are best explained without gBGC, whereas a substantial minority (19%) are best explained by the action of gBGC alone. Further, more than half (55%) have substitution rates that significantly exceed local estimates of the neutral rate, suggesting that these regions may have been shaped by positive selection rather than by relaxation of constraint. By distinguishing the effects of gBGC, relaxation of constraint, and positive selection we provide an integrated analysis of the evolutionary forces that shaped the fastest evolving regions of the human genome, which facilitates the design of targeted functional studies of adaptation in humans.
Introduction
Many recent studies have compared the human genome with those of other mammals, with the goal of identifying signatures of adaptive evolution and thereby gaining insight into the genetic basis of human-specific biology (Clark et al. 2003;Pollard, Salama, Lambert, et al. 2006;Prabhakar et al. 2006;Bird et al. 2007;Kim and Pritchard 2007;Kosiol et al. 2008). In protein-coding sequences, a high rate of amino acid-changing substitutions (compared with the rate of synonymous substitutions) is considered to be a hallmark of positive selection. Noncoding regions do not have such a natural partition of substitutions into functional and neutral classes. Instead, researchers have focused on the overall rate of substitutions on a lineage of interest in relation to the expected rate, given the level of conservation between multiple species at the same locus. This method is particularly effective for identifying highly conserved noncoding elements (CNEs) that have experienced a burst of substitutions on a particular lineage. It has been applied to fruit flies (Holloway et al. 2008), specific mammalian lineages (Kim and Pritchard 2007), and humans (Pollard, Salama, Lambert, et al. 2006;Prabhakar et al. 2006;Bird et al. 2007), where the fast-evolving sequences are called human accelerated regions (HARs) or human accelerated conserved noncoding sequences. Experimental investigations have established that some HARs function as RNA genes (Pollard, Salama, Lambert, et al. 2006) and tissue-specific enhancers (Prabhakar et al. 2008). These and other studies currently underway aim to elucidate the impact of human-specific substitutions on the function of HARs.
Focusing on lineage-specific evolution in CNEs has several advantages over analyzing the entire noncoding genome. First, the power to detect substitution rate acceleration is much higher against a background of conservation than when sequences have been evolving close to the neutral rate. Second, power and computational costs are improved by analyzing a small portion (typically 5-10%) of the genome. Finally, the unusually slow rate of evolution (in other species) suggests that CNEs play a functional role across the phylogeny under study. Thus, tests for lineagespecific acceleration in CNEs enable researchers to focus on those parts of the noncoding genome where statistically significant changes in substitution rates can be detected and where these substitutions are most likely to have a functional impact.
This approach, however, detects lineage-specific acceleration but not positive selection per se. This is because substitution rates, while significantly faster than expected given the extreme conservation in other species, might not exceed neutral rates and could therefore reflect the relaxation c The Author(s) 2011. Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0), which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Open Access
MBE of purifying selection and not adaptive evolution. Hence, additional post hoc statistical tests (e.g., comparing rates of substitutions to local neutral rates or tests based on polymorphism data) and functional studies are needed before an adaptive interpretation of an HAR or another accelerated element is appropriate.
Acceleration of substitution rates can also result from processes that do not involve changes in the mode of selection, such as GC-biased gene conversion (gBGC) . gBGC is a nonadaptive recombinationdriven process. It is believed to result from a biochemical bias towards guanine or cytosine (GC) alleles in the mismatch repair of heteroduplex DNA during meiotic recombination. An effect of gBGC is to increase the rate of weak (A or T) to strong (G or C) substitutions and to decrease the rate of strong-to-weak substitutions, leading to higher GC-content in affected regions (Duret and Arndt 2008;Duret and Galtier 2009a;Romiguier et al. 2010). Once this process reaches equilibrium, gBGC typically decreases evolutionary rates. However, the initiation of a high rate of gBGC (e.g., because of the origin of a new recombination hot spot) can increase the overall rate of substitutions Duret and Arndt 2008;Duret and Galtier 2009a) and thereby mimic positive selection. Investigations of the relationship between gBGC and accelerated substitution rates have largely focused on proteins, where gBGC can create spurious signals of positive selection nearby recombination hot spots (Berglund et al. 2009;Galtier et al. 2009). gBGC is estimated to have affected as many as ∼20% of genes exhibiting elevated nonsynonymous substitution rates on short branches of the primate phylogeny (Ratnakumar et al. 2010), as well as many of the fastest evolving exons in the human genome (Berglund et al. 2009). GC-biased substitution patterns in some HARs suggest that CNEs may also be the targets of gBGC (Pollard, Salama, King, et al. 2006;Duret and Galtier 2009b;Prabhakar et al. 2009). These findings underscore the need to take gBGC into account in tests for lineage-specific acceleration.
Previous approaches to studying the impact of gBGC on tests for positive selection have generally used one method to identify genomic regions with high substitution rates, followed by a separate method to determine if the observed substitution patterns in these regions are due to gBGC (Dreszer et al. 2007;Berglund et al. 2009;Galtier et al. 2009;Ratnakumar et al. 2010). Duret and Arndt (2008) used a more integrated model to contrast gBGC with neutral evolution but did not investigate the interplay of gBGC and selection. To prioritize HARs for follow-up experiments that assess the functional consequences of human-specific substitutions and to determine the role of gBGC in shaping fastevolving regions of the human genome, it is desirable to explicitly disentangle the effects of selection and gBGC on CNEs.
Motivated by this challenge, we developed a model for the evolution of nucleotide sequences that simultaneously accounts for both gBGC and selection. This approach enables us to capture the effects of gBGC on substitution rates and GC-content under a range of scenarios, from strong negative selection to strong positive selection. We do not attempt to model the complex process of gBGC in detail but instead focus on capturing its main effects on nucleotide substitution rates and patterns in the framework of statistical phylogenetics. We make use of this integrated model to develop a classification framework based on a series of likelihood ratio tests (LRTs). This enables us to annotate CNEs based on evidence of lineage-specific gBGC, relaxation of constraint, positive selection, and combinations of these forces. We demonstrate the performance of the method on simulated data and then apply it to annotate 202 HARs (Pollard, Salama, King, et al. 2006) with respect to their evolutionary histories. The resulting analyses suggest that substitutions in the majority of HARs cannot be explained by gBGC alone.
Modeling DNA Sequence Evolution under the Joint Action of Selection and gBGC
To investigate the interplay between selection and gBGC, we developed a molecular evolutionary model for lineagespecific changes in the rate and pattern of substitutions. Our approach builds upon the body of literature describing the use of continuous-time Markov chains in phylogenetic models of DNA sequence evolution, as reviewed by Liò and Goldman (1998). In those models, extant and extinct species are related via a binary tree (the species tree), and the likelihood of substitutions along the edges of the tree is governed by a 4 × 4 rate matrix Q, which describes the instantaneous rate at which each nucleotide is substituted by others. In contrast, gBGC is typically modeled as an evolutionary force affecting the way certain alleles fix within a population (Gutz and Leslie 1976;Nagylaki 1983a). To combine the two approaches, we use the weak-mutation model (Golding and Felsenstein 1990) and multiply a neutral rate matrix μ (describing how mutations arise within a population) with eventual fixation probabilities f to obtain Q. Following Nagylaki (1983a, 1983b, fixation probabilities under the joint action of selection and gBGC can be expressed in terms of two parameters, a selection coefficient S ∈ (−∞, ∞) and a gene conversion disparity B ∈ [0, ∞), both scaled by population size: . (1) The 4 × 4 matrix I (defined below) determines which fixation probabilities are affected by gene conversion, and N is the number of breeding individuals. By identifying N with the effective population size N e , we can expect the fixation probabilities to hold under more general assumptions than the ones used to derive equation (1) (Nagylaki 1983a). We note that equation (1) implies an exponential decrease in the probability of fixation for strong-to-weak mutations under gBGC but only a linear increase for weak-to-strong mutations. Hence, evidence of B > 0 will ultimately be based at least as much on the absence of strong-to-weak GC-Biased Gene Conversion in Human Accelerated Regions • doi:10.1093/molbev/msr279 MBE substitutions in an alignment as on the presence of weakto-strong substitutions (see below). Because gBGC favors strong (C or G) over weak (A or T) alleles, equally disfavors weak compared with strong alleles, and does not distinguish between A and T or between C and G, we have: 1 for i weak and j strong −1 for i strong and j weak 0 otherwise, such that S < 0 represents purifying selection, S > 0 represents positive selection, and B > 0 represents a conversion bias towards strong versus weak alleles (gBGC). We note that the parameters S and B could in general be time dependent (see supplementary text 1, Supplementary Material online). In our approach, they are treated as constants, reflecting average effects over time. We estimate these parameters by combining data across multiple sites (i.e., alignment columns), and the same fixation model applies to all new alleles. Next, we integrate the fixation probabilities with a continuous-time Markov model for sequence evolution by taking the instantaneous transition rates Q to be the (element-wise) product of mutation rates μ and f(S, B) from equation (1) (Golding and Felsenstein 1990;McVean and Vieira 2001;Nielsen and Yang 2003;Kryazhimskiy and Plotkin 2008): Here, i = j, and the diagonal entries of Q are determined by the constraint that rows of the rate matrix sum to zero. The factor 2N in equation (2) scales the mutation rate μ to the population level and assures that Q = μ in the absence of gBGC and selection (see below). Equation (2) is similar to the model of Nielsen and Yang (2003), which is concerned with the effects of positive selection in protein-coding DNA sequence. It is also close to the approach of Duret and Arndt (2008), except there, the authors restrict themselves to S = 0 (no selection); Harrison and Charlesworth (2011) recently used a similar composite modeling approach to study the effect of biased gene conversion on patterns of codon usage in yeast. For neutral evolution (B → 0 and S → 0), we note that 2Nf ij → 1 for all i = j and Q → μ. For selection alone (S = 0 and B = 0), equation (2) reduces to Q = μ ρ(S), with ρ(S) = S/(1 − exp[−S]). In the case of purifying selection, S < 0 and 0 ρ < 1, resulting in decreased substitution rates. For positive selection, S > 0 and ρ > 1, which implies an increase in substitution rates, as described in Pollard et al. (2010). Because the gBGC parameter B not only affects the rate of substitutions but also their pattern (see eq. 2), the complete model can be used to disentangle effects of selection and gBGC.
Inferring Substitution Rate Acceleration in the Presence of gBGC
We utilize the model in equation (2) to infer from alignments of DNA sequences of multiple species whether a change in the mode of selection, gBGC, or both has acted along a certain lineage in the species tree. Our goal was to assign alignments to one of four classes: gBGC (C b , biased class), change in the mode of selection (C a , accelerated class), both (C ab ), or neither (C 0 , null class).
For each alignment, we assume a neutral model, M N , corresponding to Q = μ on each branch of the species tree. For example, when analyzing the HARs, we estimate M N from multiple sequence alignments of untranscribed flanking sequence (see supplementary text 2, Supplementary Material online). Another possibility would be to utilize 4-fold degenerate sites or ancestral repeats if a sufficient number of such sites are available near the locus of interest. Building on M N , we model various lineage-specific evolutionary histories by defining a seminested collection of models, taking into account the background rate of substitutions at the locus (via a species tree-wide selection coefficient S G ). Each model is subject to different constraints on two parameters (the lineage-specific selection coefficient S and the lineagespecific conversion disparity B), and the entire analysis is performed with respect to a single lineage (branch) of interest in the species tree. The model parameters are used to calculate Q (S G tree wide and S and B on the lineage of interest) according to equation (2) -Acceleration Class (C a ). Lineage-specific increase in substitution rate, without GC bias. In addition to tree-wide MBE rescaling (via S G ), unbiased acceleration acts on the lineage of interest. This model covers the scenarios of acceleration due to relaxation of constraint (S 0) or positive selection (S > 0), and we later disentangle these two cases (see below). Constraints: S S G and B = 0, Model: M a . -Acceleration and gBGC Class (C ab ). GC-biased substitution rate increase with additional unbiased substitutions.
In addition to tree-wide rescaling (via S G ), unbiased acceleration and gBGC act together on the lineage of interest. Constraints: S S G and B 0, Model: M ab .
We note that model M 0 is nested within models M a and M b and that both M a and M b are nested within model M ab . All four models reduce to M N when S G = 0, S = 0, and B = 0. S G < 0 with S = S G corresponds to purifying selection in all aligned species, as expected for CNEs. On the other hand, S > S G and/or B > 0 correspond to lineage-specific changes in substitution rate and/or patterns that cannot be accounted for by globally rescaling the species tree.
A Likelihood Ratio-Based Alignment Classification Procedure
To annotate an alignment D to a certain class, we perform a series of LRTs between the models introduced above. We use the phyloFit routine in RPHAST (Hubisz et al. 2011) to obtain likelihoods L 0 , L a , L b , L ab for models M 0 , M a , M b , M ab , respectively, by maximizing over the parameters S G , S, and B within the constraints of each model: These likelihoods depend on the initial neutral model M N , but this dependence is not indicated in the notation for simplicity. To perform an LRT between model M i and model M j , we compare L i with L j , and we reject We classify each alignment by a rigorous procedure that uniquely maps a series of LRTs to an annotation class, in a manner that is conservative with respect to annotating selection (supplementary text 3, Supplementary Material online). Briefly, we first compare each of the three lineagespecific models with the null model. In order to further disentangle gBGC from selection, we compare the lineagespecific models with each other. We then use the following three rules to classify an alignment: First, for an alignment to get annotated to a certain class, the associated model has to reject M 0 . Second, if the LRTs imply clear preference for one model compared with all others, the corresponding class is chosen. Otherwise (if ties arise), we split them by preferring C b over C a and both C a and C b over C ab . This approach uniquely maps a series of LRTs to annotation classes (supplementary fig. S3, Supplementary Material online), and supplementary text 3, Supplementary Material online, describes the mapping in more detail.
Preferring C b over C a makes our approach conservative with respect to annotating selection because all selectionannotated alignments show a significant preference towards selection as compared with gBGC alone. On the other hand, C b can contain alignments with equal evidence for M a and M b . To identify alignments with strong evidence of gBGC, we split C b into two subclasses: C b+ contains the elements of C b where the LRT rejects M a in favor of M b and C b− contains the other cases.
Critical Regions of the LRTs
In order to apply the LRTs, a critical region needs to be specified for each test. Our classification procedure considers at most seven LRTs, corresponding to all possible comparisons between M 0 , M b , M a , and M ab where the alternative is not nested within the null hypothesis. Instead of relying on asymptotic results for nested model LRTs, we determine the parameters d i|j via simulation in order to directly account for the sequence properties (e.g., GC-content, gap patterns) and the relatively short lengths of alignments we analyze (see below). The critical regions are defined via d a|0 : sup where we take α = 0.05. We approximate the suprema with maxima over a finite grid of parameter values, with the constraints given in equation (3) in effect (see below).
Distinguishing Positive Selection from Relaxation of Purifying Selection
For the acceleration class C a , we know that the rate of substitutions on the lineage of interest exceeds the rate in M 0 (i.e., the most likely rate in the absence of lineage-specific effects). However, S > S G does not distinguish the action of positive selection (S > 0) from relaxation of constraint (S < 0). Note that S = 0 corresponds to the branch of interest in M a having the same length as in the neutral model M N . To annotate an alignment with respect to the specific type of change in mode of selection, we divide the class C a MBE into two subclasses C a− and C a+ . These classes correspond to two models M a− and M a+ , nested within M a , that differ from M a via the parameter constraints S G < S 0 for M a− (relaxation of constraint) and 0 < S for M a+ (positive selection). Note that M a− is only defined for alignments evolving more slowly than the local neutral rate (S G < 0), which is the case in CNEs. We perform an LRT between M a+ and M a− and annotate an alignment to C a+ if we reject M a− in favor of M a+ . Again, we take α = 0.05 and determine the critical region via simulation. C a+ then contains C a alignments with faster than neutral substitution rates, with a false positive rate of α. These are candidates for having experienced positive selection on the lineage of interest.
Classification of HARs
We use our approach to classify 202 HARs (Pollard, Salama, King, et al. 2006 Having derived a local neutral model M N for each HAR, we fit the models M 0 , M a , M b , and M ab to each HAR alignment and compute L 0 , L a , L b , and L ab (see above). Next, we derive the critical regions for the LRTs underlying our annotation procedure. To do so, we use simulations to approximate the suprema in equation (4). First, we calculate the empirical distribution of "gap patterns" G for each HAR alignment. A gap pattern relates to an alignment column and is a binary annotation of which species have gaps in that column. Applying a gap pattern ensures that the likelihoods of parametrically simulated alignments more accurately reflect those of real multiple sequence alignments. We then use G and M N to generate data across an evenly spaced grid of parameter values: S G =Ŝ G , S =Ŝ G , . . . , S max , and B = 0, . . . , B max , whereŜ G is the estimate obtained by fitting M 0 to the HAR alignment (supplementary text 4, Supplementary Material online). For each grid point (S, B), we generate 1,000 alignments as follows. We obtain a model with human-specific acceleration or gBGC by transforming M N using equation (2) and a parameter combination (Ŝ G , S, B). Using this transformed model, we parametrically generate 1,000 ungapped alignments of the same length as the HAR. For each ungapped alignment, we independently sample a gap pattern from G for each alignment column and use these patterns to mask the simulated alignment. We then maximize the likelihood for each of the models M 0 , M a , M b , and M ab corresponding to the null, acceleration, gBGC, and acceleration plus gBGC classes. Aggregating over alignments, this yields estimatesd b|0 ,d a|0 , d ab|0 ,d b|a (S),d a|b (B),d ab|b (B), andd ab|a (S) for each (S, B)parameter combination. Finally, we determine the classification boundaries by taking maxima (corresponding to the suprema in eq. 4) over the finite grid of estimates, which empirically controls the false positive rate of each test to be not more than 5%. We perform similar calculations to refine class C a into C a+ and C a− and class C b into C b+ and C b− . We note that under this approach, each HAR has its own set of critical regions, reflecting its unique properties in terms of alignment length, substitution rate, and gap patterns.
Jointly Modeling gBGC and Selection
We have developed a molecular evolutionary model and classification procedure to investigate the effects of gBGC and selection in a lineage of interest and applied it to make inferences about the recent evolution of HARs, some of the fastest evolving regions of the human genome (Pollard, Salama, King, et al. 2006). As described in the Materials and Methods, our approach uses the weak-mutation model (Liò and Goldman 1998) and multiplies a neutral rate matrix μ by fixation probabilities f (eqs. 1 and 2) to obtain a rate matrix Q that accounts for both unbiased acceleration (via a selection coefficient S) and gBGC (via a gene conversion disparity B). This enables us to study the interplay between the two forces. We find that we can accurately recover selection parameters and gBGC disparities from sequence alignments (supplementary text 5, Supplementary Material online). We have made a software implementation of our approach publicly available as part of the R-package RPHAST, an R programming language interface to the opensource comparative, and evolutionary genomics software package PHAST (Hubisz et al. 2011).
We used this model of substitution processes in the presence of selection and/or gBGC to delineate four biologically relevant scenarios for the lineage-specific evolution of DNA sequence (see Materials and Methods). To identify changes in the rate or pattern of substitutions in the human lineage relative to other mammals, we define four classes of DNA sequence alignments: 1) human-specific acceleration (class C a ), 2) human-specific gBGC (class C b ), 3) both (class C ab ), or 4) neither (class C 0 ) (supplementary fig. S1, Supplementary Material online). Since HARs are CNEs in nonhuman species, we further refined the acceleration class into a subclass corresponding to relaxation of purifying selection (C a− ) and a subclass with faster than neutral substitution rates, suggestive of positive selection (C a+ ). We then designed a classification procedure for assigning alignments to classes, based on a series of LRTs. This leads to a natural refinement of the gBGC class C b into two subclasses C b+ and C b− . Alignments are only assigned to the high-confidence gBGC subclass C b+ if there is strong evidence of GC-biased substitutions (false positive rate 5%, see Materials and Methods). This classification procedure is designed to aid investigators in prioritizing HARs for experimental and bioinformatic follow-up studies. For example, HARs in the positive selection class C a+ are good candidates for functional studies of human-specific adaptation. Those in the relaxation of constraint class C a− may be of interest as examples of loss of function (nonadaptive or adaptive) but would suggest different experiments than putative cases of functional acquisition. HARs in the gBGC class C b+ are interesting because they are instances of CNEs that have experienced excess and potentially deleterious substitutions in the human lineage. The acceleration and gBGC class C ab contains HARs with many human-specific substitutions (not all of which are weak-to-strong), so that both evolutionary forces are needed to explain the differences between the human and chimpanzee versions of their sequences. These HARs may shed light on the interplay between selection and gBGC in the human genome.
Selection and gBGC Interact
To investigate the interaction between selection and gBGC, we used our model to explore the expected number of substitutions and the change in GC-content along a lineage of interest under a range of values for the selection coefficient S and the gene conversion disparity B. We utilized alignment data from 4-fold degenerate sites in the ENCODE regions (ENCODE Project Consortium 2007) to obtain estimates for the neutral mutation rates μ in equation (2) 15. This parameter range yielded expected increases in GC-content up to about 5%, a value that is observed in some of the HARs (table 1 and supplementary table S2, Supplementary Material online). The scaled gBGC disparity for the average recombination hot spot has been reported to be 8.7 (Ratnakumar et al. 2010), which is within the range we investigated. We explored selection coefficients of −4 S 10, corresponding to substitution rates as high as ten times the neutral rate, as observed in substitution hot spots (Duret and Arndt 2008). Negative values of S enabled us to explore the interplay between gBGC and purifying selection.
We found that the quantitative influence of gBGC on GCcontent and substitution rate depends on the presence and level of selection. As expected, higher values of the gBGC disparity B lead to increased GC-content for all values of S ( fig. 2). But this effect is not constant; the interplay between gBGC and selection results in a greater impact of gBGC on GC-content with increasing S. Similarly, the effect of gBGC on the rate of substitutions also depends on the value of S. While in general the number of expected substitutions is higher for larger values of B, the effect of gBGC on the substitution rate is most pronounced for small S and decreases as S increases. This trend includes cases where the selection coefficient S = 0 (no selection) or even S < 0 (negative selection), where we found that gBGC can still lead to substantially increased substitution rates at plausible values for the disparity parameter B. These findings suggest that it is critical to model gBGC and selection together to avoid spurious conclusions about positive selection due to gBGC-induced elevations in substitution rates.
Detecting Substitution Rate Acceleration in the Presence of gBGC
We investigated the accuracy of our classification method for inferring the presence of substitution rate acceleration, gBGC, or a combination of both from alignment data. We aimed to determine if the two processes can be disentangled. If so, we also wanted to know whether our method has sufficient power to be applied to short genomic loci, such as HARs. To address these questions, we simulated alignments with different known combinations of selection coefficients and gBGC disparities on the human branch. The underlying neutral model was the same as in the previous section (supplementary text 2, Supplementary Material online), and we focused on the species tree for human, chimp, and macaque. For this simulation study, we chose S G = 0 because we wanted to study the effects of both purifying and positive selection. We applied our classification method to assign each alignment to one of the four classes based on the pattern of substitutions on the human branch.
We were able to annotate 1,000-bp alignments to the correct class in most of the parameter space ( fig. 3, Panel A). As expected, power is reduced for shorter alignments (more white area in Panels B and C of fig. 3), so that most nearly null 100-bp alignments are classified as coming from the null class C 0 , rather than the correct acceleration or gBGC class. Nevertheless, we could still detect pronounced in-stances of gBGC, rate acceleration, and combinations of the two. Importantly, alignments generated with gBGC only (S = S G ) are almost never falsely annotated as belonging to the selection-only class C a (low number of false positives), which implies that our assignment to annotation classes is conservative with respect to annotating selection. Also, most alignments generated with weak selection (S < 2) and strong gBGC are conservatively classified as C 0 or C b . Both these features of the classification method are consequences of our choice of mapping the results of individual LRTs to annotation classes (see Materials and Methods). By adjusting this mapping (or the critical regions of some or all the LRTs), the same method could be tuned to detect weaker selection coefficients at the cost of more false positives. Overall, we conclude that our model captures the hallmarks of gBGC, and that our classification method is capable of delineating pronounced effects of selection and gBGC in short genomic elements.
gBGC Accounts for the Substitutions in Only a Minority of HARs To explore the relative contributions of positive selection, relaxation of constraint, and gBGC to the fastest evolving regions of the human genome, we applied our classification procedure to the 202 HARs (Pollard, Salama, King, et al. 2006). We were curious to see whether the abundance of weak-to-strong substitutions in the HARs (see table 2) is indicative of gBGC being a dominating force. To that end, we carefully established the critical region for each LRT to be specific for each HAR, taking into account differences in local neutral rate, alignment length, gap pattern, and GCcontent (see Materials and Methods). We find that 154 of the 202 HARs are assigned to the acceleration class C a , 38 to the gBGC class C b (28 to C b+ and 10 to C b− ), and 10 to the acceleration and gBGC class C ab (table 1).
In other words, even when strictly controlling false positives, the majority of HARs (76%) can best be explained by a model with a change in the selection coefficient but no gBGC. Of the 154 HARs in C a , 112 are assigned to the positive selection class C a+ because they have substitution rates that significantly exceed the local neutral rate. The remaining 42 are assigned to the relaxation of constraint class C a− . We compared our method for class assignment with model selection via the Bayesian information criterion (BIC) and via the Akaike information criterion (AIC). All three selection criteria agree rather well (table 3). BIC tends to favor simpler models compared with AIC, as is expected (Hastie et al. 2009), whereas our LRT-based method results in a classification that is intermediate in complexity.
We analyzed the 647 individual substitutions in the 202 HARs to compare substitution rates and GC-content across the six classes. To quantify differences in substitution rates between classes, we calculated odds ratios (ORs) and performed Fisher's exact tests on the individual alignment columns (supplementary text 7, Supplementary Material online). Substitution rates are similar for C a+ and C b+ and they exceed those of the C a− and C b− , respectively (table 1). Since more HARs are assigned to the acceleration classes MBE FIG. 3. Inferring acceleration in the presence of gBGC. We used simulations to determine the frequency with which alignments generated from models with a wide range of levels of selection (S) and gBGC (B) are assigned to each class by our methodology. Increasing brightness corresponds to a decreasing fraction of the null class (C0). For the other three classes, the color representation corresponds to a point on the probability 2-simplex (red = gBGC, green = acceleration, blue = both). Because our classification procedure is conservative with respect to annotating selection, the red area is larger than the green area in each plot. Panel A: 1,000-bp alignments. Power is high and relatively few nonnull alignments are assigned to C0 (white/light grid points). Panel B: 500-bp alignments. Power is slightly reduced. Panel C: 100-bp alignments. Power is significantly lower (more white/light grid points), but we are still able to correctly annotate most of the extreme instances of substitution rate acceleration in the presence of gBGC.
(and these classes have a high average number of substitutions), 74% (476 of 647) of all human-specific substitutions in HARs belong to C a and 57% (394 of 647) belong to C a+ , which is a significant association (OR = 1.26, P = 0.004). As expected, substitutions in class C b also significantly increase GC-content on average (OR = 45.66, P < 1 × 10 −15 ), whereas GC-increasing substitutions are depleted in class C a (OR = 0.07, P < 1 × 10 −15 ). Pollard, Salama, King, et al. (2006) found a positive correlation between acceleration level and the proportion of in- Table 2. Substitutions of Each Type across All HARs.
Type
Number of Substitutions (%) Weak-to-strong 369 (57) Strong-to-weak 187 (29) Weak-to-weak 46 (7) Strong-to-strong 45 (7) ferred human substitutions that were weak-to-strong across the 202 HARs, with the most striking evidence of gBGC in extremely accelerated HARs (HAR1-HAR5). Consistent with those results, we find that the proportion of HARs in class C b+ increases with acceleration level and is equal to the proportion of HARs in class C a+ for HAR1-HAR5 (supplementary table S3, Supplementary Material online). We assign HAR1 and HAR3 to C b+ , reflecting the fact that their weak-to-strong-biased substitution patterns are best explained by the gBGC model. This finding suggests the pursuit of experimental studies that explore possible MBE human-specific losses of function for these two elements. In contrast, HAR4 and HAR5 have unbiased substitution patterns and are assigned to class C a+ . Interestingly, our method assigns HAR2-about which there has been a lively debate regarding evolutionary history (Prabhakar et al. 2008;Prabhakar et al. 2009;Duret and Galtier 2009b)-to the mixed selection and gBGC class C ab because it harbors mostly weak-to-strong substitutions but also one strongto-strong substitution (which is unlikely under M b , because HAR2 is under strong purifying selection with S G 0). Evidence of positive selection is strongest among the highly, but not extremely, accelerated HARs (66% of HAR6-HAR49 are assigned to class C a+ ), suggesting prioritization of these HARs for studies of functional adaptation in humans. Supplementary table S2, Supplementary Material online, gives further details about evolutionary rates and patterns in each HAR.
HAR Classification Is Not Driven by CpG Effects
To account for the effect of high substitution rates in cytosine-phosphate-guanine (CpG) dinucleotides, we repeated our analyses after conservatively dropping all substitution columns that might correspond to human-and chimp-ancestral CpG sites (i.e., masking all substitutions except class 1 sites, as defined by Meunier and Duret 2004). Across all HARs, 264 of the estimated 647 human substitutions are masked. Of the masked substitutions, 173 were inferred to be weak-to-strong and 63 were inferred to be strong-to-weak in our original analysis. Because CpG masking decreases the number of strong-to-weak substitutions and the total number of substitutions, it reduces our power to distinguish selection from gBGC and neutral evolution, substantially affecting the classification of HARs. Fourteen HARs have no substitutions left after masking and cannot be accurately classified. As expected, the proportion of the remaining 188 HARs annotated to the neutral class C 0 is higher after CpG masking (34 versus 0 without masking). There is also an increase in the number of HARs in the gBGC class C b (50 versus 38 without masking). Nonetheless, the majority of HARs (62%; 116 of 188 with substitutions) are still assigned to acceleration classes, 42 to the positive selection class C a+ , and 74 to the relaxation of constraint class C a− . Relative rates of substitutions and changes in GC-content between classes are qualitatively similar to those in the unmasked analysis. Thus, our primary findings cannot be explained by CpG effects (table 1).
Recombination Rates Are Elevated near HARs in the gBGC Class
Our classification method does not explicitly demonstrate the action of selection or gBGC. In particular, HARs in classes C b and C ab may have been shaped by GC-biased fixation pressures other than gBGC, such as selection for higher GC-content. To address this distinction, Duret and Arndt (2008) investigated the association between equilibrium GC-content (GC * ) and recombination rates. They reported a strong correlation between GC * and male recombination across the human genome and concluded that gBGC is a likely explanation for this phenomenon. Therefore, we analyzed male and female population-averaged recombination rates (Kong et al. 2002(Kong et al. , 2010 in the regions around each HAR and compared these rates between different classes of HARs. For each class, we tested for an association with recombination rate by a bootstrap procedure that accounts for size differences between the classes (supplementary text 8, Supplementary Material online). Using the sex-specific recombination maps from Kong et al. (2002), we find that HARs in class C b+ tend to be located in regions with higher recombination rates than other HARs (table 1). Further, this association is more pronounced for male than female recombination (supplementary table S2, Supplementary Material online), consistent with earlier reports (Duret and Arndt 2008). We observe similar patterns with the Kong et al. (2010) recombination map, although the magnitudes of these effects and their sex bias do differ between data sets (supplementary table S1, Supplementary Material online). Thus, we find an association between recombination rate and GC-biased substitution patterns, which is consistent with gBGC playing a role in shaping the HARs assigned to the class C b+ by our methods.
Discussion
This study describes a new approach to disentangling the forces that have shaped the fastest evolving regions of the human genome. To address the question of how many human-specific accelerated CNEs can be explained by gBGC versus by selection, we developed a nucleotide substitution model that jointly accounts for selection and gBGC in an integrated framework. Using this model and criteria based on likelihood ratio statistics, we classified 202 HARs (Pollard, Salama, King, et al. 2006) according to evidence of changes in the mode of selection and/or gBGC. We find that substitution patterns in 76% of HARs are best explained by acceleration alone (class C a ). Further refining our annotation with respect to rate acceleration, we find that 55% of HARs have evolved too rapidly for relaxation of purifying selection to be a likely explanation (class C a+ ) and 21% are consistent with relaxation of constraint (class C a− ). Nonetheless, a substantial minority of HARs are classified as having evolved under gBGC alone (class C b : 19%, class C b+ : 14%). Our classification provides candidates for HARs with particular evolutionary histories, but further functional evidence and/or analyses of polymorphism data are needed before drawing any definitive conclusions about the action of selection or gBGC in a particular HAR.
One line of evidence supporting the hypothesis that gBGC shaped the substitution patterns in HARs in the C b+ class is our finding that male recombination rates are significantly higher than expected by chance near these HARs. Thus, we conclude that a sizable minority of the fastest evolving regions in the human genome may have been MBE shaped by gBGC. However, to directly infer a causal role of gBGC, additional complementary data would be needed to rule out other explanations, such as mutagenic effects of recombination itself or effects related to DNA melting temperature (Duret and Arndt 2008).
In our analysis, we focus on a preannotated set of short genomic regions, as opposed to conducting a genome-wide screen on the megabase scale (Dreszer et al. 2007;Duret and Arndt 2008). All our findings pertain to the 202 HARs (Pollard, Salama, King, et al. 2006) and should not be assumed to necessarily generalize to the whole genome or to other sets of CNEs. In particular, our estimates of the proportion of HARs influenced by selection or gBGC may not represent the genome-wide prevalences of these forces. HARs constitute a highly biased set of CNEs, selected on the basis of unusually high substitution rates on the human branch, and one could expect different results in a more balanced sample of CNEs. For instance, by applying our classification method to a random subset of 1,000 of the candidate chimp-, mouse-, and rat-conserved sequences from which the HARs were identified, we find less evidence of selection, as expected (supplementary text 9, Supplementary Material online). We also annotate a smaller percentage of candidate regions to the gBGC class C b , although the ratio of regions in C b compared with C a is somewhat higher than in the HARs, suggesting that the prevalence of gBGC among CNEs could be higher genome wide than the 19% we estimate from HARs. Although the analyses presented here were not designed to estimate the prevalence of gBGC across the human genome, our approach could potentially be used in a future genome-wide study to address this question. Nonetheless, the fact that some HARs can be explained by gBGC alone does provide new evidence that nonadaptive forces could have important effects on average substitution patterns across the genome and may affect a subset of highly conserved sequences. Our study also does not provide information about the genome-wide association between recombination rates and the evolution of GC-content. But our results are consistent with previous work showing correlations between GC-biased substitutions and recombination rates in the human genome (Dreszer et al. 2007;Duret and Arndt 2008) and the genomes of other Metazoans (Capra and Pollard 2011).
Since HARs are short alignments (on the order of 100 bp) and the human branch is relatively short (approximately 0.5 substitutions per 100 bp under neutrality), phylogenomic inference is inherently limited by the small number of informative sites (i.e., sites with substitutions) present in the alignment data. We therefore consciously used a simple model with a constant gBGC disparity, as in Duret and Arndt (2008), which does not directly model mutagenic effects of gBGC and does not account for variation in substitution rates or patterns across alignment positions. Additionally, we make the common assumption of independence between alignment columns and do not account for dinucleotide effects. Also, our model does not allow for increased substitution rates due to relaxation of purifying selection for specific types of alleles or due to mutational biases other than weak-to-strong (Eyre-Walker 1992; Takano-Shimizu 1999; Lawrie et al. 2011). But we expect that our selection-associated models (M a and M ab ) are not overly sensitive to such effects because they do not contain parameters that can directly account for the resulting biased substitution patterns. We did explore the possibility of relaxing the assumption of constant gBGC disparity by means of a mixture model and found no major impact on our classification results (supplementary text 1, Supplementary Material online). Although more complex models would certainly be more realistic, their use would probably lead to overfitting and parameter estimates with inflated variance. One compelling reason to model dinucleotides is CpG hypermutability, and we addressed this possible confounder by masking potential ancestral CpG sites. All our qualitative findings were robust to CpG masking.
This study provides a statistically motivated classification method based on a series of LRTs. This method enabled us to estimate the proportion of HARs affected by changes to the mode of selection and it is straightforward to generalize. Our classification also lends an intuitive quantitative interpretation of the contributions of selection and/or gBGC to the evolution of individual HARs by enabling us to assign each HAR to an evolutionary class (with or without each force) and to estimate the size of the HAR's selection coefficient and/or gBGC disparity (supplementary table S2, Supplementary Material online). These features of the methodology are especially desirable when prioritizing follow-up analyses and experiments. We note that inferences about selection and gBGC inevitably depend on the assumed neutral model, which we estimated (in the form of M N , see Materials and Methods) from 4fold degenerate sites and untranscribed sequence flanking each HAR. Although estimating true mutation rates is difficult, we believe this approach yields reasonable estimates, accounts for major known biases, and does not confound our findings. Also, our inferences about selection are based on substitution rates between reference sequences of multiple species and are therefore more indirect than inferences based on polymorphism data, which can be used to detect recent positive selection near HARs (Katzman et al. 2010).
We have presented a general approach for modeling substitution patterns that can be combined with many other models in statistical phylogenetics. The models we have discussed are implemented in RPHAST (Hubisz et al. 2011), an open-source software package for comparative and evolutionary genomics, which is publicly available at http://compgen.bscb.cornell.edu/rphast/. One extension of our current method is to integrate the joint effects of selection and gBGC into a codon model. Such a model could be used to develop tests for positive selection in proteincoding regions that account for gBGC or other substitution biases. Work is in progress to support such models in RPHAST. | 2017-04-03T20:24:00.888Z | 2011-11-10T00:00:00.000 | {
"year": 2011,
"sha1": "3e5d6db1f597b16d60c9ef55fc08e3cc209060ab",
"oa_license": "CCBYNC",
"oa_url": "https://academic.oup.com/mbe/article-pdf/29/3/1047/13647395/msr279.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5dd6b4b3bdbf65696630690965ca3659172ce6ce",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
215774835 | pes2o/s2orc | v3-fos-license | CYP1A2 genotype and acute effects of caffeine on resistance exercise, jumping, and sprinting performance
Background It has been suggested that polymorphisms within CYP1A2 impact inter-individual variation in the response to caffeine. The purpose of this study was to explore the acute effects of caffeine on resistance exercise, jumping, and sprinting performance in a sample of resistance-trained men, and to examine the influence of genetic variation of CYP1A2 (rs762551) on the individual variation in responses to caffeine ingestion. Methods Twenty-two men were included as participants (AA homozygotes n = 13; C-allele carriers n = 9) and were tested after the ingestion of caffeine (3 mg/kg of body mass) and a placebo. Exercise performance was assessed with the following outcomes: (a) movement velocity and power output in the bench press exercise with loads of 25, 50, 75, and 90% of one-repetition maximum (1RM); (b) quality and quantity of performed repetitions in the bench press exercise performed to muscular failure with 85% 1RM; (c) vertical jump height in a countermovement jump test; and (d) power output in a Wingate test. Results Compared to placebo, caffeine ingestion enhanced: (a) movement velocity and power output across all loads (effect size [ES]: 0.20–0.61; p < 0.05 for all); (b) the quality and quantity of performed repetitions with 85% of 1RM (ES: 0.27–0.85; p < 0.001 for all); (c) vertical jump height (ES: 0.15; p = 0.017); and (d) power output in the Wingate test (ES: 0.33–0.44; p < 0.05 for all). We did not find a significant genotype × caffeine interaction effect (p-values ranged from 0.094 to 0.994) in any of the analyzed performance outcomes. Conclusions Resistance-trained men may experience acute improvements in resistance exercise, jumping, and sprinting performance following the ingestion of caffeine. The comparisons of the effects of caffeine on exercise performance between individuals with the AA genotype and AC/CC genotypes found no significant differences. Trial registration Australian New Zealand Clinical Trials Registry. ID: ACTRN12619000885190.
data suggest that some individuals experience an increase in performance following caffeine ingestion, whereas others do not [4][5][6]. In order to develop more effective guidelines for caffeine supplementation in sport and exercise settings, the scientific focus has recently been placed on examining and understanding the reasons for the between-individual variation in responses [4,7].
One potential driver of this individual response is inter-individual genetic variation [4]. The gene CYP1A2 encodes cytochrome P450 1A2, an enzyme responsible for up to 95% of caffeine metabolism [8]. The speed of caffeine metabolism is affected by a single nucleotide polymorphism, rs762551, within this gene [8]. Individuals with the AA genotype at rs762551 are commonly classified as "fast caffeine metabolizers", while C allele carriers (AC/CC genotypes) tend to have a slower clearance of caffeine and are, therefore, commonly classified as "slow caffeine metabolizers" [9]. Significantly greater ergogenic effects of caffeine on aerobic endurance have been reported for individuals with the AA genotype, compared with C allele carriers [6,10]. However, for high-intensity exercise tasks of a shorter duration, the evidence is less clear.
In a recent study of 19 basketball players, acute ingestion of 3 mg/kg of caffeine produced similar effects on vertical jump performance in individuals with the AA genotype and AC/CC genotypes [11]. These results are in accord with a study that utilized a 30-s Wingate sprint test, while improvement in peak and mean power output was noted following caffeine ingestion, the researchers did not find differences in responses between genotypes [12]. Based on the results of these two studies, it seems variations in the CYP1A2 genotype may not affect the ergogenic effects of caffeine ingestion on high-intensity exercise performance. However, a recent study reported that caffeine ingestion enhances the number of performed repetitions in a resistance exercise session in individuals with the AA genotype but not AC/CC genotypes [13].
Given the conflicting evidence on this topic, the aim of this randomized, double-blind crossover study was to explore the acute effects of caffeine on resistance exercise, jumping, and cycle ergometer sprint performance in a sample of resistance-trained men and the influence of genetic variation of CYP1A2 (rs762551) on the individual variation in responses. We hypothesized that caffeine ingestion would be ergogenic across all exercise tasks and that individuals with the AA genotype would experience greater improvements in exercise performance following caffeine ingestion than those with AC/CC genotypes.
Experimental design
This study employed a double-blind, randomized, crossover design. All participants attended four laboratory sessions. All trials were performed in the morning hours (between 7 am and noon), and at the same time of the day across the sessions for each participant, to ensure that the results were not affected by circadian variation [14]. The trials took place 4 to 7 days apart. The first and second session included familiarization with the exercise protocol (explained in detail in the "Exercise protocol" section). The two main sessions (i.e., caffeine and placebo sessions) were conducted in a randomized and counterbalanced order. The participants were randomly assigned to the two conditions; half of the participants ingested caffeine in the first session and a placebo in the second session, while the other half ingested a placebo in the first session and caffeine in the second session. Participants were asked not to perform any strenuous exercise for at least 24 hours before the main trials. The participants were also asked to keep a food diary for 24 h using "MyFitnessPal" software, and to match their dietary intakes on the days before the two main sessions as much as possible. The participants were required to refrain from caffeine intake after 6 pm on the day prior to the testing [1]. In order to assist with caffeine restriction, we provided the participants with a list of the most common foods and drinks that contain caffeine. The participants arrived at the laboratory following overnight fasting. Caffeine was administered in capsule form, with a dose of 3 mg/kg of body mass (equivalent to the caffeine dose contained in approximately two cups of coffee). The placebo capsule was identical in appearance to the caffeine capsule, but, instead of caffeine, it contained 3 mg/kg of dextrose. The capsules were ingested 60 min before the start of the exercise session [1]. Genotype was determined using a buccal swab. A validated Food Frequency Questionnaire was used to estimate habitual caffeine intake [15]. Prior to the study, the trial was registered in the Australian New Zealand Clinical Trials Registry ID: ACTRN12619000885190.
Participants
The study involved resistance-trained men as participants. Being resistance-trained was defined in this study as having a minimum of 6 months of resistance training experience with a minimum weekly training frequency of two times on most weeks. All participants were nonsmokers. Based on an a priori power analysis done using G*Power software (version 3.1; Germany, Dusseldorf) for repeated-measures Analysis of Variance (ANOVA) (within-between interaction, i.e., in the context of this study genotype × caffeine interaction), with an assumed true effect size f of 0.25, the alpha error level of 0.05, and the expected correlation between repeated measures of 0.75, the required sample size to achieve the statistical power of 80% for this study was 18 participants. To factor in possible dropouts, we recruited 22 participants. The exclusion criteria were: (i) prior use of anabolic steroids; and (ii) the existence of any health limitations. Ethical approval for this study was granted by the Victoria University Human Research Ethics Committee (HRE19-019). The remaining data of the project are published elsewhere [16]. Before enrolling in the study, every participant signed an informed consent and filled out a Physical Activity Readiness Questionnaire (PAR-Q). Only participants who responded with 'No' to all PAR-Q items were included in the study. In line with previous research [6,[11][12][13], we combined participants with the AC and CC genotypes into one group (AC/CC group) for the analysis.
Exercise protocol
One repetition maximum testing The first two sessions included familiarization with the exercise protocol. These sessions were the same as the main sessions (i.e., placebo and caffeine sessions), with the exception that the first one included one-repetition maximum (1RM) testing in the bench press exercise. For the 1RM test, the participants performed sets of one repetition with progressive increases in load until they reached their estimated 1RM. The load was initially set to 20 kg and subsequently increased by 10 kg increments if the mean concentric velocity of the repetition was 0.4 m/s or higher (as determined by a linear position transducer attached to the barbell). If the mean velocity was lower than 0.4 m/s, the load for the next attempt was adjusted using smaller increases (e.g., 5 kg or 2.5 kg, determined based on consultation with the participants). The participants performed 1RM attempts with progressively increasing loads until the mean velocity was ≤0.2 m/s [17]. When the mean velocity of a successful 1RM attempt reached these values, the load was considered as a valid estimate of the 1RM [17]. Three minutes were allowed between 1RM attempts.
Movement velocity and power in the bench press exercise In the first session, upon determining the 1RM, the participants performed the bench press exercise with loads of 25, 50, 75, and 90% of 1RM [18]. The second, third, and fourth sessions started with the assessment of movement velocity in the bench press exercise with different loads, as the 1RM test was only performed in the first session. The external load was first set at 25% of 1RM and was progressively increased to 90% of 1RM. With each load, the participants performed two sets of one repetition and were instructed to lift the load as fast as possible. The better repetition (in the context of higher movement velocity and power output) was used for the analysis. Each repetition was followed by a 3-min rest interval. During each repetition, a GymAware linear position transducer (GymAware Power Tool, Kinetic Performance Technologies, Canberra, Australia) was attached to the barbell and used to measure mean concentric velocity (m/s), mean power (W), peak concentric velocity (m/s), and peak power (W). Previous research has established that this device has good test-retest reliability for power and velocity outcomes in the bench press [19].
Muscle endurance After the final repetition with 90% of 1RM, participants were provided with 5 min of passive rest. After the rest interval, muscle endurance was assessed with a test that involved performing repetitions to momentary muscle failure with a load corresponding to 85% of 1RM in the bench press exercise, as in the study by Rahimi [13]. Besides the total number of repetitions, we also measured velocity and power output for each repetition using the linear position transducer attached to the barbell. For the purpose of statistical analyses, we compared the total number of repetitions in the placebo and caffeine conditions. We also explored movement velocity and power output of all repetitions by matching the number of repetitions between the placebo and caffeine conditions. For example, if a participant performed eight repetitions following the ingestion of placebo and nine following the ingestion of caffeine, for this part of the analysis, we only considered movement velocity and power output in the first eight repetitions. This approach allowed us to objectively quantify the average quality of the repetitions during the test and examine if caffeine ingestion had an effect on movement velocity and power output when the total number of repetitions was matched.
Countermovement jump After the muscle endurance test, participants rested passively for 3 minutes and then performed 1 minute of light running, followed by 10 bodyweight squats, in order to warm-up for the countermovement jump (CMJ). The participants performed a CMJ on a force platform (400S Isotronic Fitness Technology, Skye, South Australia, Australia). The CMJ was performed without an arm swing. The participants started CMJ testing from an upright standing position on the force platform. The participants positioned themselves in the starting position and then received commands from the software displayed on a computer screen that was in front of the platform. The software counted down, "3, 2, 1" and provided "Set" and "Go" commands. After the "Go" command, the participants had 5 seconds to complete the jump. From the starting position, the participants performed a downward countermovement (i.e., a fast knee flexion) where their lowest position was a semi-squat position (knee~90°and trunk/hips in a flexed position) [20]. Immediately after reaching this point, the participants performed an "explosive" extension of the legs [20]. The participants were given instructions to jump as quickly and "explosively" as possible to achieve maximal vertical jump height [20]. The participants had one warm-up jump and three official attempts. Each attempt was followed by 1 minute of rest. For the analysis, the best jump from three official attempts was used. The outcome in the CMJ test was vertical jump height, determined by an algorithm based on the flight time.
Wingate test After the CMJ test, the participants were provided another 3 minutes of passive rest before starting the Wingate test. The Wingate test was performed using a Lode Excalibur Sport Cycle Ergometer (The Netherlands, Groningen). Individual setup of the cycle ergometer; namely, saddle and handlebar height and length, was determined in the first session and was maintained throughout all subsequent trials. The Wingate test started with a 5-min warm-up (100 W at 60-80 rpm) [21]. After the warm-up, participants performed a 30-s "all-out" sprint while the resistance placed on the flywheel remained constant at 0.75 Nm/kg. The participants remained seated during the 30-s sprint. During the test, peak power, mean power, and minimum power were recorded using the Lode Ergometry Manager 10 software. Peak power was defined as the greatest power value recorded during the 30-s; mean power was the arithmetic mean of power during the test, and minimum power was the lowest power recorded during the sprint.
Side effects
Side effects of caffeine and placebo supplementation were evaluated at two time points: (1) immediately after the completion of the testing sessions; and (2) in the following mornings, upon waking. The participants responded to an 8-item survey regarding the incidence of side effects ("yes/no" response scale). This survey was also used to examine side effects in previous research that explored effects of caffeine on exercise performance [20,22,23].
Assessment of blinding
Both in the caffeine and the placebo trials, before and after the exercise session, participants responded to the following question: "Which supplement do you think you have ingested?" [24]. The question had three possible responses: (a) "caffeine", (b) "placebo" and (c) "I do not know" [24]. In case participants respond with "a" or "b", they were required to state the reason for choosing their response.
Genetic testing
The participants underwent genetic testing using a commercially available testing kit from DNAfit Life Sciences (London, UK), as in other studies [25]. Samples were collected using buccal swab devices, with OCR-100 kits by DNAGenotek (Ottawa, Canada). The participants were required to avoiding eating or drinking for at least 60 min prior to the sample collection. All samples were collected according to the manufacturer guidelines. The samples were sent to IDna Genetics Laboratory (Norwich, UK), where the analysis was performed. DNA was extracted and purified using the Isohelix Buccalyse DNA extraction kit BEK-50 (Cell Projects Ltd., Kent, UK), and amplified through polymerase chain reaction (PCR) on an ABI 7900 real-time thermocycler (Applied Biosystem, Waltham, USA). The samples were analyzed for the CYP1A2 rs762551 single-nucleotide polymorphism. This analysis was performed after the exercise performance data collection; thus, the researchers and participants were blinded to genotype variations of the cohort until the data collection process was finalized.
Statistical analysis
One-way ANOVA was used to test the differences between genotype groups in age, body mass, height, 1RM, and habitual caffeine intake. We used a two-way, repeated-measures ANOVA to test genotype (AA genotype vs. AC/CC genotypes) × caffeine (placebo vs. caffeine) interaction effect on performance data, separately for each performance variable. In the absence of significant genotype × caffeine interaction effects, we conducted no stratified analyses of the effects of caffeine by genotype groups. Relative effect sizes (ES) were calculated as Hedge's g for repeated measures and presented together with their respective 95% confidence intervals (95% CIs). ESs of < 0.20, 0.20 to 0.49, 0.50 to 0.79, and ≥ 0.80 were considered to represent trivial, small, moderate, and large effects, respectively. McNemar's test was used in the comparison of the incidence of side effects between the placebo and caffeine conditions. The blinding data were summarized using the Bang's Blinding Index [26]. The values in this index range from − 1.0 (denoting opposite guessing) to 1.0 (denoting complete unblinding) [26]. For this study, we reported the data from this index as a percentage of individuals who identified the correct treatment condition beyond chance [19,26]. All analyses were performed using the Statistica software (version 13.4.0.14; TIBCO Software Inc., Palo Alto, CA, USA). The significance level was set at p < 0.05.
Study participants
All participants completed all testing procedures and were included in the final analysis. Of the whole sample, 13, 7, and 2 participants were categorized as having the AA, AC, or CC genotype, respectively. The participants' characteristics are presented in Table 1. There were no significant differences between the genotype groups for age, body mass, height, 1RM, or habitual caffeine intake.
Movement velocity and power output in the bench press exercise
We did not find a significant main effect for genotype (p > 0.05 for all) or a genotype × caffeine interaction effect for any of the 16 analyzed variables for movement velocity and power output in the bench press exercise (mean power, mean velocity, peak power, and peak velocity at 25, 50, 75, and 90% 1RM; Table 2). For all variables, except peak power output at 50% 1RM, there was a significant main effect favoring caffeine (p < 0.05). The ESs, favoring caffeine conditions in all outcomes, ranged from 0.20 to 0.29 for all outcomes recorded at 25% 1RM, from 0.21 to 0.23 for all outcomes at 50% 1RM, from 0.31 to 0.50 for all outcomes at 75% 1RM, and from 0.57 to 0.61 for outcomes at 90% 1RM.
Muscle endurance
For the maximum number of repetitions in the bench press exercise with 85% 1RM, we did not find a significant main effect for genotype (p = 0.397) or a genotype × caffeine interaction effect (p = 0.454), while there was a significant main effect favoring caffeine (p < 0.001; ES = 0.53). For peak velocity, mean power output, and peak power output (matched for repetitions between placebo and caffeine conditions), we did not find a significant main effect for genotype (p > 0.05 for all) or a genotype × caffeine interaction effect (p > 0.05 for all), while there was a significant main effect favoring caffeine in all three variables (p < 0.001 for all). The ESs ranged from 0.27 to 0.53. For mean velocity, there was a significant main effect for genotype (p = 0.034), with the AC/CC genotypes producing greater movement velocity than the AA genotype, and a significant main effect favoring caffeine (p < 0.001; ES = 0.85), while we found no significant genotype × caffeine interaction effect (p = 0.094).
Countermovement jump
For vertical jump height in the CMJ test, we did not find a significant main effect for genotype (p = 0.447) or a genotype × caffeine interaction effect (p = 0.752), while there was a significant main effect favoring caffeine (p = 0.017; ES = 0.15).
Wingate test
For peak power in the Wingate test, we did not find a significant main effect for genotype (p = 0.998) or a genotype × caffeine interaction effect (p = 0.542), while there was a significant main effect favoring caffeine (p < 0.001; ES = 0.33). For mean power in the Wingate test, we did not find a significant main effect for genotype (p = 0.517) or a genotype × caffeine interaction effect (p = 0.583), while there was a significant main effect favoring caffeine (p < 0.001; ES = 0.35). For minimum power in the Wingate test, we did not find a significant main effect for genotype (p = 0.505) or a genotype × caffeine interaction effect (p = 0.396), while there was a significant effect favoring caffeine (p = 0.011; ES = 0.44).
Side effects
In the responses recorded immediately post-exercise, we found a significant difference between the placebo and caffeine conditions only in items "Increased vigor/activeness" and "Perception of improved performance" in the AC/CC genotypes (Table 3). In the responses 24-h after capsule ingestion, we did not find any significant differences in the incidence of side effects between the placebo and caffeine conditions.
Assessment of blinding -AA genotype
Before starting the exercise session, in the placebo and caffeine conditions, respectively, 62% and 54% of the participants with the AA genotype correctly guessed the treatment identity beyond chance. After exercise, in the placebo and caffeine conditions, respectively, 85% and 69% of the participants with the AA genotype correctly guessed the treatment identity beyond chance.
Assessment of blinding -AC/CC genotypes
Before starting the exercise session, in both the placebo and caffeine conditions, 55% of the participants with the AC/CC genotypes correctly guessed the treatment identity beyond chance. After exercise, in the placebo and caffeine conditions, respectively, 44% and 78% of the participants with the AC/CC genotypes correctly guessed the treatment identity beyond chance, respectively.
Discussion
The results of the present study demonstrate that the acute ingestion of a moderate dose of caffeine (3 mg/kg) may produce significant improvements in: (a) movement velocity and power output in the bench press using loads ranging from 25 to 90% of 1RM; (b) maximum number of repetitions performed to momentary muscle failure in the bench press exercise, as well as the average quality (i.e., higher movement velocity and power output) of the performed repetitions; (c) vertical jump height; and (d) peak, mean, and minimum power in the 30-s Wingate test. No significant differences in the effects of caffeine were found between the individuals with the AA genotype and the individuals with the AC/CC genotypes in any of the performance tests used in the present study.
Effects of caffeine on exercise performance
In the bench press exercise, caffeine ingestion enhanced peak and mean velocity and consequently, mean and peak power, when exercising with low, moderate, and high loads. These results are generally in line with previous findings [18,20,22]. One of the early studies [18] conducted on this topic reported that high doses of caffeine (9 mg/kg) are required for acute increases in movement velocity when exercising with very high loads (90% 1RM). However, our results suggest that a dose of 3 mg/ kg is effective for enhancing velocity across a wide range of external loads, suggesting that very high doses might not be needed. This is especially relevant to highlight as the ESs in our study are very similar to those reported for the bench press exercise by Pallarés et al. [18]. A recent meta-analysis found that caffeine ingestion enhances mean and peak movement velocity in resistance exercise [27]. The researchers also noted that the effects of caffeine on mean velocity (ES = 0.80) were higher than those for peak velocity (ES = 0.41) [27]. However, the studies included in that meta-analysis assessed either mean or peak velocity; that is, no studies included in the meta-analysis measured both outcomes in the same group of participants [27]. In the present study, we found that the ESs were very similar for both mean and peak velocity, and this was a constant finding across all the employed loads (i.e., 25 to 90% of 1RM).
The muscle endurance test used in this study further confirmed that caffeine ingestion is ergogenic for this fitness component in resistance-trained men. This study adds to the body of evidence showing improvements in muscle endurance following caffeine ingestion [28][29][30][31][32]. However, a more novel finding is that caffeine is ergogenic for power and velocity outputs when the number of repetitions between the caffeine and placebo conditions is matched. Specifically, when matching the number of repetitions between conditions, we found that the effects of caffeine, as compared to placebo, amounted to 0.27 for peak power, 0.51 for peak velocity, 0.53 for mean power, and 0.85 for mean velocity. Several studies that explored the effects of caffeine on muscle endurance did not find a difference in the number of performed repetitions between the caffeine and placebo conditions [13,33,34]. However, as we demonstrated in the present study, even with an equal number of repetitions between conditions, caffeine might have still produced considerable improvements in the quality of the performed repetitions, that is, greater movement velocity and consequently, greater power output (which was not tested in the aforementioned studies). As compared to placebo, caffeine ingestion most commonly produced moderate improvements in the number of performed repetitions (generally one to three additional repetitions) [28,31]. We propose that in some contexts, improvements in the overall quality of the performed repetitions may be more important for training adaptations than simply performing a greater number of repetitions. This hypothesis is in line with recent findings that training at a velocity loss of 20% produced greater improvement in CMJ performance than training at a 40% velocity loss [35]. Improvements in squat strength were similar for both training conditions, even though the group that trained with a velocity loss of 20% performed 40% fewer repetitions.
Caffeine ingestion resulted in increased vertical jump height in the CMJ. The ES magnitude of 0.15 observed in this study is very similar to the pooled ES of 0.17 reported in a recent meta-analysis of 10 studies [36]. This result, therefore, confirms that caffeine ingestion may have a relatively small performance-enhancing effect on vertical jump height [36][37][38]. The acute improvement in vertical jump height following caffeine ingestion is comparable to the improvement in jump height found as a result of 4 weeks of plyometric training [39,40]. Even though the improvement in performance was relatively small (approximately 1 cm), it might still be practically meaningful in sports where jump height directly impacts athletic outcomes.
In the Wingate test, we found a significant ergogenic effect of caffeine on peak, mean, and minimum power. These results are in line with the findings of a recent meta-analysis that reported ergogenic effects of caffeine on mean and peak power in the ES magnitude of 0.18 and 0.27, respectively [41]. Of the 16 studies included in the meta-analysis [41], 12 studies used caffeine doses of 5 or 6 mg/kg. Therefore, it could be argued that the findings of the meta-analysis should primarily be generalized to these doses of caffeine. In the present study, we found that even a lower dose of caffeine (namely, 3 mg/ kg), increases performance in this test and that the ES is very similar to that reported by studies using higher caffeine doses [41].
The influence of the CYP1A2 genotype
We did not find significant genotype × caffeine interaction effects in any of the analyzed performance variables. It might be that the effects of caffeine ingestion are similar between different CYP1A2 genotypes, at least for the performance tests used in the present study. The results reported herein are generally in line with the current body of evidence. Two studies [11,12] that explored the effects of caffeine on jumping and Wingate test performance reported similar improvements in these outcomes following the ingestion of 3 mg/kg of caffeine in groups of participants with the AA and AC/CC genotypes. However, a recent study [13] that used a resistance exercise protocol, found that caffeine is ergogenic only for individuals with the AA genotype. On average, individuals with the AA genotype were able to complete one more repetition with the consumption of caffeine, as compared to placebo, whereas the number of repetitions was the same in the placebo and caffeine conditions among those with the AC/CC genotypes. The main methodological difference between the current studies exploring this topic was the dose of caffeine administered to the participants. Specifically, we and two other studies that reported similar results utilized 3 mg/kg of caffeine. We opted to utilize a lower dose of caffeine as higher doses of caffeine do not seem to produce greater increases in performance [28]. In the study by Rahimi [13], the dose was considerably higher (i.e., 6 mg/kg). It might be that the differences in responses between genotypes become apparent only at higher doses of caffeine. Future dose-response studies might consider exploring this hypothesis further. The effectiveness of the blinding was not explored by Rahimi [13] thus limiting the comparison of the results in this aspect of the study design.
Even though Rahimi [13] reported that caffeine ingestion is ergogenic for AA but not AC/CC genotypes in resistance exercise, the main outcome of that study was the number of performed repetitions in 4 different resistance exercises with 85% 1RM, which can be considered as a somewhat crude test of performance. As mentioned previously, we demonstrated that even when matched for the number of repetitions, caffeine, as compared to placebo, increases the average movement velocity and power output of the performed repetitions (ES range = 0.27 to 0.85). Therefore, even though Rahimi [13] reported that in the AC/CC genotypes the total number of repetitions was the same following the ingestion of caffeine and placebo, caffeine might have still enhanced the average velocity and power of these repetitions. We would suggest that future research in this area explores both the quality and quantity of the performed repetitions, to provide a more comprehensive assessment of possible effects of caffeine.
Strengths and limitations
Some of the key strengths of this study are: (a) the standardization of testing conditions, including nutritional intake, physical activity, and the time of day at which the testing is conducted; (b) the inclusion of trained individuals as study participants; (c) a broad range of exercise performance variables that were assessed as outcomes; (d) assessment of performance across a wide-range of loads in the bench press exercise and both quantity and quality of repetitions, when examining muscle endurance as the outcome variable.
There are several potential limitations of this study that need to be acknowledged. First, due to the low number of individuals with the CC genotype, we combined the AC and CC genotypes into one group. This is fairly common in this line of research, as the number of individuals with the CC genotype in the population is suggested to be~10% [9]. To get around 10 to 12 participants with the CC genotype a study would need to screen from 100 to 120 potential study participants. However, despite the fact this is a common practice, it could have confounded findings, as the effects of caffeine might not be uniform between individuals with the AC vs. CC genotype [10,42]. In the current study, we could not test this further, because the number of individuals with the CC genotype was n = 2. Of note, the exclusion of these two participants from the analysis did not alter the study results.
The second limitation is related to the efficacy of blinding [24]. Previous research has established that correct supplement identification may impact the outcomes of a given exercise test and, therefore, bias the results. In the present study, around 50-60% of the participants were able to correctly identify the placebo and caffeine condition beyond random chance in the pre-exercise assessment. In the post-exercise assessment, this percentage generally stayed the same or slightly increased. We believe that the pre-exercise responses are of greater importance, given that the improvements during the testing session (or lack thereof) may influence the postexercise responses. Tallis and colleagues [43] tested their participants in four conditions: (1) "told caffeine, given caffeine"; (2) "told caffeine, given placebo"; (3) "told placebo, given placebo"; and (4) "told placebo, given caffeine". Equal improvements were found on both occasions when the participants indeed ingested caffeine (i.e., "told caffeine, given caffeine" and "told placebo, given caffeine" conditions), thus suggesting that this limitation of our study might not have greatly affected our findings.
Conclusions
This study found that caffeine is acutely ergogenic for movement velocity, power output, and muscle endurance in resistance exercise, vertical jump height, and peak, mean, and minimum power in a Wingate test. These performance-enhancing effects were observed following the ingestion of using a moderate dose of caffeine (3 mg/kg), which resulted in minimal side effects. The comparisons of the effects of caffeine on exercise performance between individuals with the AA genotype and AC/CC genotypes found no significant differences. | 2020-04-16T09:10:44.660Z | 2020-04-15T00:00:00.000 | {
"year": 2020,
"sha1": "8befcf33b89830dc967f5a2e4d176c8b17e5364c",
"oa_license": "CCBY",
"oa_url": "https://jissn.biomedcentral.com/track/pdf/10.1186/s12970-020-00349-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7f321f64ca5a8243d49e753a3fccc722fc9cdab8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219167848 | pes2o/s2orc | v3-fos-license | Optimal policy for composite sensing with crowdsourcing
The mobile crowdsourcing technology has been widely researched and applied with the wide popularity of smartphones in recent years. In the applications, the smartphone and its user act as a whole, which called as the composite node in this article. Since smartphone is usually under the operation of its user, the user’s participation cannot be excluded out the applications. But there are a few works noticed that humans and their smartphones depend on each other. In this article, we first present the relation between the smartphone and its user as the conditional decision and sensing. Under this relation, the composite node performs the sensing decision of the smartphone which based on its user’s decision. Then, this article studies the performance of the composite sensing process under the scenario which composes of an application server, some objects, and users. In the progress of the composite sensing, users report their sensing results to the server. Then, the server returns rewards to some users to maximize the overall reward. Under this scenario, this article maps the composite sensing process as the partially observable Markov decision process, and designs a composite sensing solution for the process to maximize the overall reward. The solution includes optimal and myopic policies. Besides, we provide necessary theoretical analysis, which ensures the optimality of the optimal algorithm. In the end, we conduct some experiments to evaluate the performance of our two policies in terms of the average quality, the sensing ratio, the success report ratio, and the approximate ratio. In addition, the delay and the progress proportion of optimal policy are analyzed. In all, the experiments show that both policies we provide are obviously superior to the random policy.
Introduction
With the proliferation of personal smart devices, such as smartphone, human is able to capture information/ event from the physical world with smartphones more easily than before. [1][2][3][4] Embedded with a rich set of sensors, the current smartphone can support increasing applications across a wide variety of domains, such as crowdsensing, 1,5-7 environmental monitoring, 8 and social networks. 9 These applications can be classified into two major classes: participatory sensing (user is directly involved) and the opportunistic sensing (user is not involved). 5,10,11 In the participatory sensing, user can act as the preliminary sensor and decision-maker before his or her smartphone implements a certain sensing task. For example, users make decisions whether to take part in an application, and then operate his or her smartphone to implement the application. [12][13][14] Most of the previous works on crowdsensing take the smartphone into consideration, only a small part works suggest that crowdsensing should also include user as the sensor instead of just sensor carrier and operator. [15][16][17] For example, Wang et al. 16 took human as sensor and studied their behavior's affecting the sensing data quality. But there are a few articles noticed that humans and their smartphones depend on each other. There are two questions should be focused on the relationship with humans and their smartphones. The first is how to describe the relation between the smartphone and its user during smartphone sensing. The second is how two improve the performance of the smartphone sensing by exploiting the relation. As we all know, human has more powerful ability of recognition than the smart device and plays a key role before the process of smartphone sensing. In this article, we propose a framework to clarify the relation, and then study the performance improvement of the crowdsensing under a scenario, where users are willing to have good experience to take part in the crowdsensing. Since smartphones are under control of its user, its sensing decision is made after its user's willingness. We design the framework as conditional sensing as shown in Figure 1, where each user takes the action ''sleeping'' if he or she is not willing to taking part in the smartphone sensing. The scenario studied in this article represents a class of common applications in the participatory sensing, where some users are asked to implement a certain task, such as to detect the interesting object/event around them. We further investigate the case where the users have limited cost to implement the task, and hope a certain success implementation probability, denoted by z.
Summary of key contributions
The key contributions of this article are listed as follows: 1. This article studies the relationship between human and smartphone during the smartphone sensing, and proposes the framework: composite sensing. 2. We study the scenario of the object detection, and formulate the composite sensing problem, that is, how to improve the user experience under the framework of composite sensing as the partially observable Markov decision process (POMDP). We also design a new scheme, called composite sensing policy, to solve the composite sensing problem and get the maximal overall sensing quality. 3. We provide the theoretical and experimental analysis for the composite sensing policy. The theoretical optimization of the policy is guaranteed while the experimental results certificate the performance of the optimal and myopic policies we proposed.
Road map
This article is organized as follows. The related works are reviewed in section ''Related work.'' Section ''Preliminaries'' presents the composite sensing and system models. We formulate the composite sensing problem and map it as the POMDP in section ''Composite sensing problem.'' The composite sensing policy for the problem is designed and the theoretical performance of the policy is presented in section ''Composite sensing policy.'' The performance of our solution is also evaluated by the experiment in section ''Experiment results.'' The work of the whole article is summarized in section ''Conclusion.''
Related work
Today's smartphone is embedded in a number of specialized sensors, including camera, global positioning system (GPS), digital compass, and so on. It can sense the environmental information, and share the information with the friend of the smartphone holder or report to a certain server. 13 It has become not only the core communication device in people's daily life but also a smart sensing device for environmental monitoring, smart transportation systems, social networks, and so on. 10 Its applications are thus widely exploited and are extended to many more areas than before. According to the awareness and involvement of the user in the architecture as sensing device custodians, the smartphone applications can be classified into two major classes: participatory sensing (user is directly involved) and the opportunistic sensing (user is not involved). 10 The participatory sensing includes both the smartphone and its holder into the significant decision stages in the sensing application. One type of relation between the smartphone and its holder is the composite sensing proposed in this article.
Participatory sensing
A wide range of environmental information, such as road traffic, can be sensed and disseminated by ordinary citizens with smartphones. It brings a new way for the development of many application areas, such as environmental monitoring and social networks. The interesting examples include road traffic monitoring, 18 SmartPhoto, 17 and Ear-phone. 14 Rana et al. 14 designed an end-to-end participatory urban noise mapping system called Ear-phone. The key idea of Ear-phone is to crowdsource the collection of urban noise to people, who carry smartphones equipped with sensors and location-providing GPS receivers. In the end-to-end system, the urban noise is sent to a central server. A noise map is reconstructed and then is provided to the end user. In VTrack, some participatory drivers with smartphone send its location estimated by WI-FI or GPS to a central server in real time, and the server provides the real-time routes with the minimal travel time to users. 19 Mohan et al. 18 have presented TrafficSense to monitor road and traffic conditions in a setting where there are much more complex varied road conditions (e.g. potholed roads), chaotic traffic (e.g. a lot of braking and honking), and a heterogeneous mix of vehicles (two wheelers, three wheelers, cars, buses, etc.). Wang et al. 17 proposed a framework, called SmartPhoto, to quantify the quality (utility) of crowdsourced photos based on the accessible geographical and geometrical information (called metadata), including the smartphone orientation, position, and all related parameters of the built-in camera. The sensed photos are sent to a server by the participators and different rewards are feedback to them because the smartphone orientation and position cause the different sensing qualities. There are increasingly new applications appearing, such as CrowdAtlas, for generating a high quality map by crowdsourcing. 20 For more details on smartphone sensing, we refer interested readers to the survey articles. 2,10 From the observation from the related works on smartphone applications, we can find the following features: (1) sensing result report: many smartphone applications require the participators to report their sensed information to central servers; and (2) human acts sensor: in the smartphone applications with the participatory sensing, human is a key part of the systems in these applications, and makes key stages of the decision to sense the environmental information. Not all users are willing to be participators and not all of their sensing results have equal value because the smartphone types and sensing conditions may be different. 13,16 Human as sensor Human's decision is the necessary part of the smartphone applications with the participatory sensing, and has great affection on the sensing result. For example, SmartPhoto needs humans to observe the Event of Interesting (EoI) and then take pictures. 17 Most of the current smartphone sensing applications are based on voluntary participation. 13,21 In these applications, 13 humans estimate the incentive reward at first, and then operate their smartphone to participate if satisfied or they observe the EoI at first, and then decide to collect and report the information about the EoI if it is observed and satisfies requirement. 14, 18 Zhao et al. have showed that mobile crowdsourced sensing (MCS) is a new paradigm that takes advantage of pervasive smartphones to efficiently collect data, enabling numerous novel applications. They proposed incentive mechanisms which are necessary to attract more user participation to achieve good service quality for an MCS application. 21 ND Lane et al. have surveyed some existing mobile phone sensing algorithms, applications, and systems. They also discussed the emerging sensing paradigms, and formulated an architectural framework for discussing a number of the open issues and challenges emerging in the new area of mobile phone sensing research. 2 The smartphones' decisions base on their users' observation and decision. It is an underlying phenomenon in the applications of smartphone sensing. Wang et al. 16 used humans as sensors, and studied their decisions affecting the sensing data quality. Although human makes a key decision in the smartphone applications with participatory sensing, most of the previous works make simply an assumption on human's decision or ignore the humans' decision. Furthermore, the participator's decision and its relationship with its smartphone are fairly considered and researched.
Preliminaries
Object, observing, and sensing model This article concerns a set V of composite nodes to sense a set of m objects. The object in this article can be a target, such as the famous building, 17 and the EoI, such as the cellular or the Wi-Fi signal. 22 As shown in Figure 2(a), each object is assumed to have an orientation, and K aspects. Let the parameter u, u 2 f1, . . . , Kg, denotes the aspect that facing one node. For example, u = 2 means that the second aspect of the object o j faces the node. When the node takes the action to sense one u of the object's aspects, the action results in a certain sensing quality q(u), 0 ł q(u) ł 1. In this article, the sensing quality is defined as the function of the aspect as given by the following equation Each user's observing range is modeled as a disk as shown in Figure 2(b) and the smartphone's sensing range is modeled as a fan-shaped sensing area in Figure 2(c). They have the same radius since the user would not notice the object out of the observing range. The smartphone can fix a direction to sense one of the objects in its sensing range as shown in Figure 2(c). Let the object ID denotes the direction that the node chooses. The example in Figure 3 shows that the node has the directions as many as the number of the objects.
Conditional sensing
In the crowdsourcing applications with the participatory sensing, the smartphone must be under the control of its user. Each user acts the preliminary sensor, and implements the composite operation with his or her smartphone as a whole. We call such a whole as a composite node (node in brief) as shown in Figure 1. In each node, the user can make observing decision a 2 f0(sleeping), 1(observing)g to observe the state of the objects in the composite node's sensing range, and then the smartphone can make sensing decision b = f0(non À sensing), 1(sensing)g. The node implements the composite sensing: conditional decision-making. The sensing decision is based on the observing decision as shown in Figure 4. By the observing decisions a = 0, the node sleeps. Otherwise, the user observes the objects' states, and obtains the observation outcome Y j, k (t): u j (t) = k, where t is the time slot in the period T . Given the observation outcome Y(t), the smartphone makes the sensing decision. If the sensing decision is b j, k (t) = 1, the smartphone chooses the direction o j object to sense. Otherwise, the node turns to sleep. The observing and sensing decisions compose the decision space A, that is, A = fa, bg.
In this following context, we present the composite sensing from the view of an arbitrary node. The objects refer to these in the sensing range of the node.
System model
This article studies the scenario where the nodes and objects are static and uniformly randomly deployed in the interested area. With an additional server, these nodes and objects compose the composite sensing system. In each time slot, each object o j is in either of two states: disappear and appear. The object state is clarified by the following two concepts: object state and system state.
Definition 1
Object state. The object state indicates the appearance of an object o j in each time slot t, and is denoted by z j (t), where z j (t) 2 f0(disappear), 1(appear)g.
The design of the optimal observing and sensing decision uses the definition of the object state. When an object is in the state: disappear, that is, z j (t) = 0, it cannot be observed by any node. When the object is in the state: appear, that is, z j (t) = 1, it can be observed and one of its K aspects faces one node. Assume that each object has the equal transition probability among the disappear state and the K aspects, that is, p(u 0 ju) = p(uju 0 ), and p(z = 0ju) = p(ujz = 0), and its state transition is independent of other objects. Suppose that there are m objects around the node. The definition of the system state is given as below.
Definition 2 System state. The system state is the collection of the states of the m objects, and is denoted by Given a sequence of time slots t 2 T, this article assumes that the system states s(t) form a Markov chain with the state space P = f0, 1g m . To achieve reward, each node observes and senses the objects around it, and then reports the sensing results to the server. Let g i j (u = k) denotes the report of the node v i for the object o j when o j 's kth aspect faces the node v i . The sensing quality of the report g i j (u = k) is thus q i j (u = k). If the report is accepted by the server, it returns the acknowledgment of the node with a certain reward. In this article, the server adopts the non-separable sensing quality rule in equation (2) as the rule to choose the reporting from the nodes. By the function, the server accepts the maximal sensing quality for the same object among the nodes' reporting for the same object where q i j is the sensing quality reported by the node v i for the object o j , and there may be more than one node sensing the same object o j simultaneously. By the sensing quality rule in equation (2), the report g i j (u j = k) can be successful if any other report g i 0 j (u j = k 0 ) for the same object o j has no aspect with higher quality, that is, k 0 ł k. Let g j (u j = k) 2 f0, 1g denotes the reported state of the object o j , which means that there is no report with the aspect higher than k if g j (u = k) = 1. Otherwise, g j (u = k) = 0. Most of symbols and their meaning are summarized in Table 1.
Composite sensing problem
This section presents the composite sensing process with the goal to maximize the overall sensing quality, and then maps it as a POMDP.
Compound sensing system
The structure of the composite sensing system, illustrated in Figure 5, implements the crowdsourcing task, which is implemented including four parts: task broadcast, composite sensing process, report, and reward.
Task broadcast. The application server broadcasts some advertisements to the users and to attract them to participate in the task: to sense the objects in their sensing ranges. After the node accepts the task, it implements the composite sensing process to maximize the reward returned from the server.
Composite sensing process. Each node implements the composite sensing process, which is composed of conditional decisions made in a series of time slots. In each time slot, the observing decision a(t) is first made according to the historical observation and decisions, stored in the historical information vector H(t). Based on its outcome, the sensing decision b(t) is then made.
Observing decision. At the beginning of each time slot t, the node makes the observing decision. If the observing decision is made to be sleeping, the smartphone has to choose the sleeping sensing decision either in this slot. Otherwise, the user chooses one direction, that is, one object o j , to observe. If the object's state is appearance, that is, z j (t) = 1, the node can observe its orientation as shown in Figure 2(b). After the observation, the node obtains the observation outcome: the object state z j (t) and its orientation u j (t). Given the system state s(t) = s and the observing decision a = 1, the conditional PMF (probability mass function) of observation outcome, u j (t) = k, for the object o j is given by where k = 0 indicates that no aspect cannot be observed when the object o j disappears, and p(u j (t) = kjz j = 1) is the conditional probability that the observing outcome is u j (t) = k when the object state is z j = 1.
Sensing decision. After the observing decision for the object o j , the node makes the sensing decision b j (t) in the slot. If the observing decision a(t) = 0, the sensing decision must be sleeping, that is, b j (t) = 0. Otherwise, the smartphone makes the sensing decision according to the observation outcome: u j (t) = k. The node makes the sensing decision to sense the object o j , that is, b j (t) = 1, with the following probability where the conditional probability p(b j (t) = 1ju j (t) = k) 2 ½0, 1.
Report. After the sensing decision is made to achieve the sensing result q(u j (t)), the result is reported to the server. The server chooses the report with maximal sensing quality for the same object by the rule given in equation (2), and the server feedbacks the reward to the reporting node. In this case, the node's report is called a successful report. Denoted the successful report for the object o j by c j (k) when the observation outcome is u j (t) = k and k.0. The node with the successful report can thus obtain some reward from the server, and counts its successful report probability, denoted by p r (g j (k)). Recall that the successful report can be obtained only after the observing decision, sensing decision, and report are taken. So the successful report probability p r (g j (k)) can be formulated as the following equation where the last equality is obtained by equations (3) and (4) Reward. The reward, denoted by r(t), for the successful report is defined to be a monotonically increasing function with the aspect. This article uses the sensing quality as the reward, which means that the successful report with higher sensing quality obtains higher reward according to equation (1). Recall the definition of the composite sensing process in section ''Conditional sensing,'' the reward can be obtained only after the observing decision a(t) = 1 and the sensing decision b j (t) = 1. Then, the immediate reward r(t) in slot t can be given by Notice that the node chooses only one object to sense each time if its sensing decision b.0. It is willing to choose the object that can result in the sensing quality and probability of the successful reporting as high as possible. The objects have their own states: appear or disappear, which compose of the state space P. They switch between the states from one time slot t to next time slot t + 1 with some probabilities p P .
Convert to POMDP
The composite sensing process can be mapped as the POMDP. In the process, the node observes only a part of the objects around it, and the report result cannot be directly known after it reports the sensing result to the server. The system states thus cannot be fully observable. In the following, this article formulates the composite sensing process as the POMDP by a tuple hP, Y, A, P, qi: P is a set of objects' states in the node's sensing range. Y is a finite set of sensing and report results, that is, u, g 2 Y.
A is the decision space, that is, A = fa, bg, 8a 2 f0, 1g, b = f0, 1, . . . , Kg. P is a set of the system state transition probabilities: P = fp(s 0 js)g, 8 s, s 0 2 P. q(u): A 3 P ! (0, 1 is the sensing quality function. Belief vector. In the composite sensing process, the node makes the decision according to the historical information H(t) at the beginning of each time slot. The historical information vector H(t) is updated in each time slot t. As time goes on, the size of H(t) grows quite big. Smallwood et al. 23 showed that the conditional probability, denoted by B(t), of the system states of the objects around the node based on its decision and observation history H(t) can be a sufficient statistic of these objects' historical states. B(t) is named as the belief vector of the node for the states of the objects around it at the end of each time slot t À 1, and is defined as called belief state, is the conditional probability (given the observing and sensing history) that the objects' state is s at the beginning of slot t + 1 prior to the state transition. B(t) can be updated based on B(t À 1) and the decisions and report results in the slot t. We introduce an updating function T to implement the updating of the belief vector, that is, B(t) = T (B(t À 1)jY(t), A(t)). This article adopts a reward-based updating function T : B(t + 1) = T (B(t), Y(t), A(t), C(t)). Based on the Bayes' rule, the update of B(t + 1) is calculated in two cases. When the observing decision makes the node to sleep, that is, a = 0, the belief vector is updated based solely on the underlying Markovian model of the object state, that is, B(t + 1) = T (B(t)ja = 0). The belief element is updated by the following equation When the user takes the observing decision a(t) = 1, it can observe the system state s(t) = z(t) with the probability as equation (3). The information state can be updated by the Bayes' rule: 24 when the node is in the state s 0 at slot t, the belief state is the probability that the state is in the state s at slot t + 1 where the denominator is a normalizing constant and is given by the sum of the numerator overall values of s 2 P as the following equation where p s (kjs) is given according to equation (4).
Objective. The composite sensing policy is a sequence of decision couples: ha(t), b(t)i, t 2 T . The optimal policy, denoted by ha(t), b(t)i à , t 2 T, is to maximize the expected overall sensing quality in T under the constraint of the successful reporting probability threshold z. It is equivalent to finding the optimal policy for the finite constrained POMDP. Recalling the immediate reward given in equation (7), the goal of the optimal policy is given by where B(0) is the initial belief vector for the object states, and z is the threshold for the success report probability.
Composite sensing policy
Some previous works, such as the one-pass algorithm, 23 can carry out the sequence of the optimal decision. The computation complexity required to obtain the optimal decision increases exponentially with the size of the state space, and can be very high for the general POMDP. 25 One of the alternative methods for addressing this problem is to design the myopic policy. 25 Myopic policy focuses on the immediate reward and ignores the impact of current policy on future rewards. Generally, the myopic policy is suboptimal. In this section, we explore some specific properties of the composite sensing system: monotonicity and the independence between the action and object states. With these properties, the computation for the optimal policy given in this section can be simplified.
Value function
The key step of making the composite decision is to measure how good the previous decision is. Value function can express the objective in equation (11) explicitly as functions of the belief vector B and the observing and sensing decision ha, bi. Let F(B(t), A) denotes the value function, which is the maximum expected total reward that can be accumulated starting from t given the belief state B(t). To make the decision ha, bi in each time slot t can accumulate the reward started from t with two parts: the immediate reward given in equation (7) and the maximum expected future reward F(B(t + 1), A). Considering all possible system states s 2 P and the successful report probability in equation (5), and then maximize over all possible decisions in A, we can arrive the value function in the following equation where the first term in the right of the equation denotes the expected immediate reward r(B, A), and the future reward F t (B(t + 1)) can be calculated by the future belief vector B(t + 1) with the Bayes' rule. 26,27 The immediate reward r(B, A) is achieved in current time slot by taking the sensing action, and is given as r(B, A) = c(gjs)q j .
Optimal composite sensing policy
This section analyzes the properties of the composite sensing process, which includes: (1) monotonicity of value function and (2) monotonicity of success report probability. With these properties, we can obtain an explicit optimal design for the composite sensing process and a deterministic optimal sensing policy in Lemma 2, and observing policy in Lemma 3.
Lemma 1
Monotonicity of success report probability. Given the sensing decision b = 1, the success report probability p r (g(k)) increases with the observing outcome u(t) = k, that is, p r (c j (k 0 )) ø p r (c j (k)) for k 0 ø k.
The proof of Lemma 1 is referred to Appendix 1.
Theorem 1
Monotonicity of value function. The value function F(B, u) is monotonically increasing with the aspect u, that is, F(B, u 0 ) ø F(B, u) for u 0 ø u. The proof of Theorem 1 is referred to Appendix 1.
Recall that the object of the composite sensing process is to maximize the overall reward under the constraint of the successful sensing probability as given in equation (11). If there is no constraint, the node would always make the composite sensing to wake up in each time slot so as to maximize the overall outcome. With the constraint given in equation (12), the composite sensing must be decided carefully. Since the successful report possibility increases monotonically with the aspect u as claimed in Lemma 1, there must be an aspect, denoted by u(t) = k, such that the following condition is satisfied given the observing outcome u(t).0 According to equation (6), the successful sensing probability is affected by both the observing and sensing decisions. By Lemma 1, the sensing decision b = 1 with higher the observing outcome u(t) = k can result in higher success report probability p(c j (k 0 )). According to Theorem 1, the value function monotonically increases with the success report probability p(c j (k 0 )). Therefore, we can make a threshold-structured optimal sensing decision, which is given by the below lemma.
Lemma 2
Optimal sensing decision. Given the observing outcome u(t) = k, the optimal sensing decision b is given as follows where the threshold aspect k is defined in equation (14). The next is to design the optimal observing decision, which chooses the best object to observe in each time slot since there are m objects. It is easy to find that there is definitively no chance to obtain the reward if the object is in the state of disappearance, that is, z = 0. Lemma 2 shows the optimal sensing decision, that is, the sensing decision must be taken only if the observing outcome is u(k) = 1, k ø k in order to satisfy the constraint in equation (12). For the constraint composite sensing process, the observing decision has to choose the object, whose state is z = 1 and the aspect u(k) = 1, k ø k. The threshold of the aspect u( k) = 1 divides the object states into two groups denoted by e z = 1 and e z = 0. In the first group e z = 0, the object states includes z = 0 or z = 1 and the aspect u(k) = 1, k\ k. In the second group e z = 1, the object states includes z = 1 and the aspect u(k) = 1, k ø k. For each object o j , we also define two transition probabilities: s j and f j , between the two group states, as follows The above two probabilities can be calculated and updated from the transition probability of the system states given in equation (9) or (10) Because one object's states are independent of others', the probability that the object o j is in the group state e z(t + 1) = 1 can be updated according to the observing outcome in previous time slot by the following equation We have the following lemma to determine the optimal observing decision.
Lemma 3
Optimal observing decision. Suppose that there are m objects. Given the observing outcome in the previous slot t À 1, the optimal observing decision is to observe the object with the ½minfs, fg, maxfs, fg. The optimal observing decision in time slot t is to choose the object o j to observe, where o j = arg o j max c j (k), 8k = 1, . . . , K.
Proof. According to the definition of the group state e z(t + 1), the observing decision lets the node active when the object state is z(t) = 1 and u(t) = k, k ø k. Thus, the constraint is satisfied by the observing decision. Thus, the object state, which results in the maximal value function, must be contained in the group state.
Next, we prove by induction that the value of the observing decision given in Lemma 3 is maximized. According to the system model in section ''System model,'' the object states have the equal transition probability among its states. The transition probability does not change with time. When the observing decision makes the node to sleep, that is, a(t) = 0, there is no chance to outcome any observing result by equation (3), that is, p o (kjs) = 0. Thus, p = 0 given in equation (6). So F(p, s) = 0) = 0 in this case. When the observing decision makes the node to observe the object, that is, a(t) = 1, the belief vector can be updated by equation (9). Thus, we have B(t + 1) = T (B(t)ja = 1). Since the observing decision in Lemma 3, the probability p(z j = 1js(t) = s) in equation (6) for the object state in the group statez = 1 can be maximized. With F(p, s) = 1. For each object, the transition probabilities between any two states are equal, that is, p(z i jz j ) = p(z j jz i ). Therefore, the observing decision given in Lemma 3 is optimal.
Optimality of myopic policy
A myopic policy does not consider the impact of the current action on the future or long-term reward, and focuses solely on maximizing the expected immediate reward. It is usually suboptimal for the general POMDP. The myopic policy need not estimate the future reward so that the computation complexity can be reduced. In this article, the myopic policy only cares the impact on the next time slot so we modify the value function as the following equation The description of the myopic policy is quite similar to the optimal one except that equation (13) in step 5 of Algorithm 1 is replaced by equation (19).
Experiment results
In this section, we conduct numerical and simulation to verify the performance of our optimal and myopic policy by comparing it with a randomized algorithm, which is just to select some objects in each round randomly. We numerically analyze the impact of various parameters such as the average quality, sensing ratio, Algorithm 1. Optimal policy.
Input: Initial belief vector B(0) and z. Output: Overall quality q(T). 1: List all possible information states (B, p r (g j (k))), v j 2 V, k = 1, Á Á Á , K, that each node may go through. Let B include all such states such that the constraint in inequality (12). 2: Let = 0 for all states (B, p r (g j (k))) with p r (g j (k)).z, v j 2 V, k = 1, Á Á Á , K; 3: while t\= T do 4: if B is nonempty then 5: Compute the value function for the state (B, p r (g j (k))) 2 B with equations (9) and (13); 6: Get the maximal quality of all object and remove its state from set B; 7: end if 8: t = t + 1 9: end while success report ratio, and the algorithm approximate ratio under proposed algorithms in terms of the number of iterations and different thresholds. Besides, we give the progress proportion and delay analysis of the optimal policy.
Evaluation setup
To better validate the performance of our proposed algorithms, we build a test bed and conduct field experiments. Our evaluation field is divided into three disks according to composite node v 1 , v 2 , v 3 with its observing range. Seven objects are uniformly and randomly deployed in the field. The possible states of the seven objects in each time slot are: appear or disappear. The state in different time slots has no effect on each other. If an object appears, the orientation is also randomly distributed, and the orientation in different time slots is also independent of each other. In the following Figures 6 and 7, we consider the average quality, sensing ratio, success report ratio, and the algorithm approximate ratio as metrics for evaluation under various parameters: the number of iterations and thresholds.
Performance comparison Average quality. Figure 6(a) shows the average quality obtained by the optimal, myopic, and randomized policies, respectively, under the different number of iterations and fixed threshold value z = 0:1. After almost 200 iterations, the optimal policy gets a stable average quality, about 1:19. Besides this, the average quality achieved by the myopic policy is about 0:88 after nearly 500 iterations. In contrast, the average quality of the random policy is about 0:73 after 500 iterations, which is much lower than other policies as shown in Figure 6(a).
As shown in Figure 7(a), we evaluate the average qualities obtained by the optimal and myopic policies compared with the random policy when we set various thresholds and keep the fixed 1500 iteration times. With the threshold increasing, the optimal strategy always maintains a good expectation value of the average quality about 1:2. In contrast, myopic and random policies show insufficient performance. When the threshold is between ½0, 0:3, the myopic policy gets the average quality about 0:92 and the random policy gets it about 0:79. When the threshold is greater than 0:3, their average qualities drop badly.
Sensing ratio. As mentioned in equation (12), when the success report probability is less than the threshold z, we will not take the sensing action in the optimal and myopic policies. Figure 6(b) counts the ratio of the number of sensing actions to the number of observing actions with the number of iterations increasing from 0 to 1700. It reflects the sensing probability obtained by the optimal, myopic, and randomized policies after observing objects. Again, we set the threshold with the fixed value z = 0:1. After almost 300 iterations, the optimal policy gets a stable sensing ratio, about 84%. Besides this, the sensing ratio obtained with the myopic policy is about 80% after nearly 20k iterations. In contrast, the sensing ratio of the random policy is about 68% after 300 iterations, which is much lower than other policies as shown in Figure 6(a).
As shown in Figure 7(b), we evaluate the sensing ratio obtained by the optimal and myopic policies compared to the random policy when we set various threshold values and keep the fixed number of iterations, that is, 1500. The optimal policy always maintains a good sensing ratio with about 80%. In contrast, the myopic policy and random policy show insufficient performance. When the threshold is between ½0, 0:4, the sensing ratio gets by the myopic policy is about 65% and the sensing ratio gets by the random policy is about 54%. When the threshold is greater than 0:4, their sensing ratio performance drops badly.
Success report ratio. As mentioned in equation (2), the server only accepts the maximal sensing quality for a same object among the nodes' reporting for it. Figure 6. Convergence of the optimal, myopic, and random policies: (a) average quality, (b) sensing ratio, (c) success report ratio, and (d) approximate ratio.
Therefore, the success report ratio is also one of the criteria to evaluate how good a strategy is. Figure 6(c) counts the ratio of the number of success reports to that of observing actions with the number of iterations increasing from 0 to 1700. We set the threshold value to be 0:1. It reflects the success report probability obtained by the optimal, myopic, and randomized policies after observing objects. After almost 300 iterations, the optimal policy gets a stable success report ratio about 82%. Besides this, the success report ratio by the myopic policy is about 79% after nearly 400 iterations. In contrast, the success report ratio of the random policy is about 67% after 300 iterations, which is obviously lower than other policies as shown in Figure 6(c). As shown in Figure 7(c), we evaluate the success report ratio obtained by the optimal and myopic policies compared to the random policy when we set various threshold values and keep the fixed 1500 iterations. The optimal strategy always maintains a good success report ratio about 84% and the myopic policy shows insufficient performance with 79% success report ratio. However, the success report ratio of the random policy is only about 68%.
Approximate ratio. The approximation ratio can measure the performance difference of our policies. It reflects the performance of the optimal, myopic, and randomized policies clearly. Again, we set the threshold value of z to be 0:1. In Figure 6(d), the blue curve shows the approximate ratio between the myopic and optimal policies with the number of iterations increasing from 0 to 1700. It is obvious that the performance of the optimal and myopic policies goes stable after nearly 200 iterations. The approximation ratio of the myopic and optimal policies is about 78% finally. The orange curve shows the approximation ratio between the random and optimal policies with the number of iterations increasing from 0 to 1700. It is obvious that the performance of the random and optimal policies gets stable after nearly 200 iterations. The approximation ratios of the myopic and optimal policies are about 73%. The green curve shows the approximation ratio between the random and myopic policies with the number of iterations increasing from 0 to 1700. After nearly 150 iterations, the performance of the random and myopic policies gets stable. The final approximation ratio of the myopic and optimal policies is about 77%.
As shown in Figure 7(d), we evaluate the approximation ratio among the optimal, myopic, and randomized policies when we set various thresholds and keep the fixed number of iterations, that is, 1500. In Figure 7(d), the blue curve shows the approximate ratio between the myopic and optimal policies with the threshold varies between ½0, 0:5. The approximation ratios of the myopic and optimal policies are about 78% and relatively stable in the interval ½0, 0:35. When the threshold is greater than 0:35, the approximation ratio suddenly drops to around 40%. The orange curve shows the approximation ratio between the random and optimal policies with the threshold varies between ½0, 0:5. The approximation ratio of the random and optimal policies is about 60% and relatively stable in the interval ½0, 0:4. When the threshold is greater than 0:4, the performance suddenly drops to around 20%. The green curve shows the approximation ratio between the random and myopic policies with the threshold varies between ½0, 0:5. The approximation ratios of the random and myopic policies are about 70% and relatively stable in the interval ½0, 0:45. When the threshold is greater than 0:45, the performance suddenly drops to around 30%.
Delay and progress proportion. To review the complex perceptual system in Figure 5, the server goes through five steps from the start of broadcasting to feedback rewards to the object. In this experiment, we use delay to represent the time from the beginning of the broadcast to the end of the feedback. As shown in Figure 8, we observe that the delay of the optimal policy increases significantly as the number of objects increases. In addition, after several hundred iterations, the delay of the optimal policy is basically stable. In this experiment, it is assumed that we need the optimal strategy to complete the calculation of 1500 iterations. The progress proportion represents the percentage of the number of completed iterations to the total 1500 iterations under a particular timestamp. As shown in Figure 9, we observe that with the increase in the number of objects, the time to complete the fixed 1500 iterations of the optimal policy is significantly extended. The main trends in the results are summarized as follows: The average quality, sensing ratio, success report ratio, and other indicators obtained by the optimal policy and myopic policy tend to be stable. Compared with the myopic policy, some indicators of the optimal policy reached the stability earlier.
The effect of threshold setting on myopic policy and random policy is much greater than that of optimal policy. With the increase in objects' number in the experimental scene, the delay increases significantly and the progress proportion slows down significantly.
Conclusion
This article observed the phenomenon of composite sensing with user as sensor in crowdsourcing. The phenomenon usually happens and has not been well studied. We thus proposed the framework: composite sensing, and then map it as a POMDP problem. The composite sensing policy is proposed and analyzed theoretically and experimentally. The theoretical optimization of the policy is guaranteed. In this article, we discuss the case where the smartphone can choose one direction to sense in each time slot. We take another case as a future work, where the smartphone may choose one or more directions to sense in each time slot. Compared with traditional methods, the use of this method in large-scale environmental data has yet to be verified and optimized.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article. | 2020-06-02T21:04:29.431Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "9823bd358b9195d815adfe1e42699a129cb5f371",
"oa_license": "CCBY",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1550147720927331",
"oa_status": "GOLD",
"pdf_src": "Sage",
"pdf_hash": "9823bd358b9195d815adfe1e42699a129cb5f371",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
1249361 | pes2o/s2orc | v3-fos-license | Nucleation and Collapse of the Superconducting Phase in Type-I Superconducting Films
The phase transition between the intermediate and normal states in type-I superconducting films is investigated using magneto-optical imaging. Magnetic hysteresis with different transition fields for collapse and nucleation of superconducting domains is found. This is accompanied by topological hysteresis characterized by the collapse of circular domains and the appearance of lamellar domains. Magnetic hysteresis is shown to arise from supercooled and superheated states. Domain-shape instability resulting from long-range magnetic interaction accounts well for topological hysteresis. Connection with similar effects in systems with long-range magnetic interactions is emphasized.
When submitted to a magnetic field, a type-I superconductor undergoes a first-order phase transition between the superconducting (SC) and the normal-state (NS) homogeneous phases (HP). In the case of films in a perpendicular field, the transition proceeds through the onset of a modulated phase (MP), the so-called intermediate state (IS), which consists of an intricate pattern of SC and NS domains [1,2,3,4]. Such a transition is encountered in a variety of quasi-two dimensional systems including ferromagnetic thin films with strong uniaxial anisotropy [5,6], Langmuir polarized monomolecular layers at airwater interface [7], magnetic fluids in Hele-Shaw cells [8]. MPs arise from the competition between short-range interactions associated with positive interface energy and long-range magnetic or dielectric interactions. Although these interactions have been recognized as a major ingredient in the description of the dynamics of pattern formation [4,9,10,11], they were not taken into account in former studies of interface motion [12,13]. An important issue concerning MP systems is the role of long-range interactions in the nucleation process of one phase into the other. In a closely related field, the ion-induced nucleation of the liquid phase in the gas phase of a polar fluid, a problem which dates back to the invention of Wilson's cloud chamber, this question is still under active debate [14].
In type-I SC films, it is well known that the SC-NS transition occurs at a magnetic field smaller than the bulk thermodynamical critical field H c . This is due to the SC-NS interface energy and the magnetic stray field energy of the NS domains. The transition field was estimated using an approximate expression of this magnetic energy [15]. A more accurate prediction of the transition field should be obtained in the framework of the recently developed current-loop (CL) models [4,10,16] that made possible the calculation of the magnetic en-ergy for various domain patterns. However, in SC films [1,17], as well as in magnetic systems [6], a hysteresis loop opens up very close to the MP-HP boundary. Two distinct transition fields are found for the appearance and collapse of domains. Surprinsingly, the origin of this magnetic hysteresis still remains an open question. Does it arise from the existence of supercooled (SCL) and superheated (SH) metastable states? What is precisely the role of pinning centers and defects? Metastable states were clearly identified in dispersions of micron size SC spheres where the small volume reduces the probability of heterogeneous nucleation at defects [18,19]. On the opposite, in large size systems like films, SCL and SH states are not expected to be observed [2]. In addition to magnetic hysteresis, domain-shape hysteresis is found: domain shape and pattern are different for rising and decreasing field. The interplay between this topological hysteresis and the magnetic hysteresis at the MP-HP boundary is not well understood. Valuable insights into this question are expected from the study of the stability range of different domain shapes, a task that recent models have made possible [4,10].
In this Letter, we discuss the physical origin of magnetic and topological hysteresis close to the MP-HP boundary. The two transition fields are shown to correspond to the rupture of metastable states. They are used to determine the critical radius for the nucleation and collapse of the SC phase. We show that topological hysteresis very likely originates from domain-shape instability arising from long-range interactions. The theoretical analysis of metastable states and domain-shape instabilities is carried out in the framework of the constrained current-loop (CCL) model which was successfully developed to take into account screening by superconducting currents in SC films [4].
The IS domain pattern in SC films was studied with the high-resolution Faraday microscopy imaging technique [20]. The samples consisted of indium films with thicknesses 0.6, 1.1, 1.5, 2.2, 10.0 ± 0.1 µm and 33 ± 3 µm. They were placed in an immersion-type cryostat in pumped liquid helium. The optical setup is similar to a reflection polarizing microscope [4]. The samples were zero-field cooled then subjected to a perpendicular magnetic field.
The SC-NS phase transition is found to be hysteretic. Distinct transition fields are found in rising and decreasing applied field. They are associated with different morphologies of the SC domains. Figure 1 shows typical IS patterns in the 10 µm thick film near the MP-HP boundary. In rising field lamellar and bubble-shape SC domains are observed ( Fig. 1(a)). The length of SC lamellae decreases until they are reduced to SC bubbles. The transition to the homogeneous NS phase results from the collapse of these bubbles whose diameter is 6-7 µm. In contrast, with decreasing field, the lamellar pattern appears abruptly in a very narrow range of field ( Fig. 1(b)). The associated magnetic hysteresis is displayed in Fig. 2. The area fraction of the NS phase ρ n , determined from magneto-optical images, is plotted as a function of the reduced applied field h = H/H c . The magnetic field H n in the NS domains is related to ρ n by flux conservation ρ n H n = H. There is a clear deviation from ρ n = H/H c shown by the dashed line, which means that H n < H c . The transition to the NS (ρ n =1) is thereby completed at a lower field than H c . However, the transition is not characterized by a unique transition field but by two fields h up and h down . h up (h down ) are the fields at which the SC domains collapse (appear) when h is increased (decreased).
This magnetic hysteresis was observed in all the studied samples. In Fig. 3, h up and h down are reported as a function of the magnetic Bond number Bm = d/2π∆(T ) [16]. d is the sample thickness and ∆(T ) is the interface wall parameter [21]. In order to determine whether h up and h down are related to superheating and supercooling, they were compared to the transition field deduced from the free energy of the system. In the framework of the CCL model, we calculated the free energy associated with the formation of an isolated SC cylindrical domain where σ SN = ∆H 2 c /8π is the interfacial tension between the SC and NS phases, p = 2R/d is the reduced bubble diameter. The first term in Eq. 1 is the interface energy. The second term contains the bulk magnetic energy and the condensation energy. It is negative since h < 1. The third term represents the interaction energy of the screening current circulating within the bubble wall. The bubble energy is plotted in Fig. 4 as a function of p. The set of curves obtained for different applied fields presents the typical behavior of a metastable system. The critical field h crit = −x 1/2 + (1 + x) 1/2 , x = 8/ 3π 2 Bm , is the field at which the free energies with and without a SC bubble are equal. An energy barrier impedes the nucleation or the collapse of a bubble at h = h crit . Starting from the NS phase (p = 0) and decreasing H, the system may stay in a metastable state. Nucleation of a SC bubble occurs if p > p nucl with h stands here for the nucleation field. The expansion of the SC phase is however limited by the long-range Biot-Savart interaction of the screening current (p 3 term in Eq. 1) leading to an equilibrium bubble diameter p 0 = y 1 + (1 − 8/z) 1/2 . In the following we assume that the first step of nucleation yields circular domains. Their evolution towards the laminar shape will be discussed later.
Starting from the IS state with SC bubbles and rising the field, the system may remain in a metastable state above h crit , up to the collapse field corresponding to the disappearance of the energy barrier: h coll = −w 1/2 + (1 + w) 1/2 with w = 2/(π 2 Bm). The corresponding collapse diameter is p coll = 1 + (1 + 1/w) 1/2 /(πBm).
h crit and h coll are compared to h down and h up in Fig. 3. h down and h up -values lie below and above h crit , respectively. This is consistent with the existence of barriers for nucleation and collapse. SCL and SH states are indeed observed. Since the h down fields remain much larger than the spinodal limit H c2 /H c ≈ 0.12 for indium, the onset of the SC phase proceeds through heterogeneous nucleation. Thermally activated nucleation can be ruled out since the barrier is larger than the thermal energy kT by many orders of magnitude. Defects likely act as potential wells that locally cancel the nucleation barrier when its amplitude is sufficiently lowered by the applied field. The nucleation radius R nucl = p nucl d/2 is obtained from the nucleation field h down and plotted in Fig. 5(a) in units of ∆. R nucl is of the order of 2 ∆ showing a variation by only a factor 1.5 while Bm is changed by a factor 30. The nucleation volume is thereby ≈ π∆ 2 d. This is quite reasonable since ∆ is of the order of the coherence length ξ, which is the typical dimension of perturbation of the order parameter when a SC domain nucleates. A more accurate description of nucleation should take into account the spatial variation of the order parameter at the SC-NS interface, but this is beyond the scope of the CCL model.
Considering the collapse of SC bubbles, let us note that the h up -values lie even above h coll . As the movement of SC bubbles is frozen close to h up , this shift of the SC-NS transition beyond the collapse field likely originates from the existence of pinning centers that form local potential wells. For all samples studied they are found to decrease the energy of the system by almost the same quantity as the potential wells that cancel the nucleation barrier. This suggests a common nature for nucleation and pinning centers. Let us examine now whether the CCL model, which describes well the magnetic hysteresis, also provides a good agreement for domain sizes. In Fig. 5(b) the average diameter of SC bubbles measured close to h up is compared to the calculated diameters p crit and p coll and to the diameter p up . p up corresponds to the p-value for which ∂ 2 F/∂ 2 p=0 at h = h up . The measured diameters are in quite good agreement with p coll and p up , thereby indicating that the CCL model accurately describes domain sizes.
Let us now address the question of domain-shape hysteresis: why does nucleation of the SC phase give rise to the lamellar pattern even though the ground state of the system close to the critical field is the bubble phase? It was suggested that the ramification of the SC phase propagating in the NS phase originates from dynamical instabilities driven by magnetic field diffusion [12,13]. However the role of long-range interactions in branching instabilities was later emphasized [9,10]. We consider here only the onset of domain formation. We show that shape instability arising from long-range magnetic interactions very likely accounts for topological hysteresis. From linear stability analysis [23] the critical diameter for the bubble elliptical instability is obtained as If the bubble diameter after nucleation p 0 is larger than p inst a bubble evolves into an elongated-shape domain. p 0 and p inst are plotted as a function of the field in Fig. 6(a) for the 10 µm indium sample (Bm=3.2). At the nucleation field shown by the vertical bar the nucleated bubble diameter is very close to the instability limit, being smaller by only 10 %.
Moreover our theoretical predictions also provide excellent agreement with experimental data obtained by muon spin rotation spectroscopy on white tin [17] as shown in Fig. 6(b). The magnetic Bond number Bm = 387 is much larger than for indium. The field of disappearance of the bubble phase (h up ) is slightly larger than the calculated collapse field h coll . The crossing between p 0 and p inst coincides with the field h trans at which a transition from the lamellar to the bubble phase is observed in rising field. In decreasing field, nucleation occurs at h down , below the crossing point. Nucleated bubbles are unstable and therefore the lamellar phase is observed.
We propose the following description of the hysteretic SC-NS transition. In rising field, the SC lamellar phase evolves towards the bubble phase which can remain in a metastable state above the critical field. The complete transition to the normal phase occurs with the collapse of finite-size bubbles at the collapse field or slightly above if SC domains are trapped in local potential wells. In decreasing field, the NS phase is a metastable state. Nucleation occurs below the critical field due to the existence of a nucleation barrier. Depending on the magnetic Bond number and the value of the nucleation field, nucleation may yield unstable bubbles with respect to elliptical deformation. They evolve into lamellae with subsequent growth of the lamellar pattern in order to reach the equilibrium state corresponding to the applied field.
In conclusion, the SC-NS phase transition in type-I SC films exhibits magnetic hysteresis and domain-shape hysteresis, which are shown to arise from different physical phenomena. Magnetic hysteresis, characterized by different values of the collapse and nucleation fields of SC domains, is found to be the signature of metastable states. Domain-shape hysteresis manifests itself as the collapse of bubble domains and nucleation of lamellar domains. Bubble-shape elliptical instability provides a very likely explanation for this topological hysteresis for a broad range of values of the magnetic Bond number. An analysis along the same lines would be of valuable interest for other systems with long-range interactions that exhibit similar domain patterns and hysteretic behavior. | 2017-09-14T04:15:58.260Z | 2005-11-03T00:00:00.000 | {
"year": 2005,
"sha1": "9f4ccf7574665c7a9fe30fe73b712f2f617dc7af",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0511083",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4d689d03bb17cd8f966ee717173d52d1045b084f",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
240565851 | pes2o/s2orc | v3-fos-license | Analytical Modeling of Sensitivity Parameters Influenced by Practically Feasible Arrangement of Bio-Molecules in Dielectric Modulated FET Biosensor
In this paper, analytical modeling of a Dielectric Modulated Double Gate Field Effect Transistor (DM-DGFET) for biosensing application is presented with extensive data analysis. Firstly, the size of the nanogaps and arrangements of biomolecules in those gaps are optimized with respect to the sensitivity of the above sensor. The optimized DM-DGFET is next analyzed on the basis of its modeling and simulation. This paper addresses novel issues arising from arrangements of biomolecules, especially from practical point of view. Effect of probe placement due to steric hindrance and random nature of biomolecules, are also considered. The capacitances associated with the nanogaps occupied by biomolecules, following various arrangements, are modeled. Expressions of the threshold voltage, drain current and its sensitivity in terms of variations are also derived using the capacitance model. A comparative study of the proposed and the existing architectures is made. The influence of process variation on the sensitivity of the sensor is also studied. The results from the proposed analytical model are validated with the simulated data obtained from TCAD device simulator. In conclusion, the proposed DM-DGFET based biosensor architecture will emerge as an optimal model, very useful for the study on this field in future.
Introduction
Biosensor research is one of the most developing fields in last few decades. Biosensor works on the principle of detection of biomolecules and gets significant importance for clinical diagnostic, defense, and in several environmental applications. Apart from the above mentioned fields, biosensor is also having a wide range of application areas, viz., in the field of optics [1], electrochemical sensors [2], nanomechanics [3] etc. However such biosensors suffer from some serious shortcomings, like requirements of costly equipment, high manufacturing cost, and low sensitivity. Semiconductor based biosensor has evolved in order to overcome these issues. Field effect transistor (FET) based biosensor has several advantages, such as, cost effectiveness, better sensing mechanism, greater precision along with higher sensitivity. Ion Sensitive Field Effect Transistor (ISFET) was invented in around 1970s. Initially, it was widely used for the detection of pH level in any type of solution [6][7][8]. The concentration of the ions in the analytes controls the performance of the device. From the deviation of the threshold voltage the concentration of ion or the pH value was detected [9][10][11]. In spite of its good performance for the detection of ionic solution or charged molecules, the sensitivity of ISFET degrades for the neutral biomolecules [12]. Therefore, researchers felt the need to design biosensor that can be equally sensitive for charged as well as neutral biomolecules. Dielectric Modulated FET (DMFET) appears as the perfect solution for this issue. In DMFET, the change in the gate capacitance occurs due to the change in dielectric constant of the materials to be tested, and (that) results in (the) deviation in the threshold voltage and drain current of (from) the device w.r.t. those with no material i.e. with air filled nanogaps [12]. On the basis of such deviations, the target material can be characterized. The basic 3D structure of DMFET was proposed by Lm et al. in [12]. In this type of biosensor devices, formation of a nanogap involves a thin film deposition and subsequent wet etching processes. This reduces the complications in lithographic process associated with the planar nanogap FETs [13][14][15][16][17]. As sensitivity is the prime concern of biosensors, search for highly sensitive biosensor is still in progress. Various unique structures have been proposed to improve the sensitivity. Narang et al. [18] proposed a DG-MOSFET based Dielectric modulated structure for better response. As this structure comprises of two nanogaps for trapping biomolecules, its sensing range is expected to be much wider.
As far as the modeling is concerned, significant developments took place in last few years. Narang et al. presented a tunnel FET based bio-sensor considering dielectric-modulation [19]. They also reported a comparative analysis between dielectric modulated FET and TFET in the context of biosensors [20]. Choi et al. modeled the nanogap-embedded FET for bio-sensor applications [21]. Bhattacharyya et al. assessed the performance of a dual-pocket vertical heterostructure tunnel FET-based biosensor [22].
In all the above articles, dielectric modulation has been considered. But, the various possible filling patterns generated by the bio-molecules trapped in nanogap are believed to influence the sensitivity of the biosensor. Unfortunately, this problem is not yet addressed.
In this work, some practical surface profiles of organic fluids appeared in biomolecule-filled nanogaps in DG MOSFET will be considered for analytical modeling, for the first time. The silanization process of surface functionalization will be considered in this work, in which the hybridization of bio-molecules with SiO 2 takes place [22]. The structure of the nanogap and its surrounding material will be optimized in order to improve the sensitivity. A detail analytical model of surface potential, threshold voltage, drain current and sensitivity will be presented considering various surface profiles of biomolecules filling the nanogaps created in both sides of the gates. Considering such four patterns (increasing/decreasing inward pattern, concave and convex pattern), the threshold voltage and ON-current sensitivity will be determined. The deviations in threshold voltage and ON-current sensitivity will also be studied. In addition, the effects of process variations on bio-sensing parameters will be analyzed in detail. The analytical modeling of the proposed device structure will be validated with the simulated results obtained from Silvaco TCAD.
Device Description
The device structure is based on traditional n-channel DGMOSFET with a dielectric modulated nanogap embedded gate insulator as shown in Fig. 1. SiO 2 layer underneath the gate has divided into hydroxyl (-OH) bond and acted as a biomolecule receptor. For a better operation and sensitivity, a thin layer of 1 nm SiO 2 on the top of the nanogap is also kept untouched during the etching process. T ox , T G and t si denote the thickness of the oxide layer, gate and Si-film respectively.
Material Optimization
In past few decades, researchers realised nanogaps by partly etching out various materials used in the dielectric layer. Amongst several materials, Silicon Di-oxide (SiO 2 ), Hafnium Di-oxide (HfO 2 ), and Poly-Si are used most frequently. In this literature, a comparative analysis of these three materials, considering the biomolecules of distinct permittivities, has been performed based on the threshold voltage and current sensitivity. Most popular biomolecular materials used nowadays are APTES ( k = 3.57 ), biotin ( k = 2.63 ), Keratin ( k = 5 − 10 ), amino acids ( k = 11 − 25 ), in which the permittivity ( k ) varies in the range of 2 − 25 . Here, the materials of three different permittivity values ( k = 2, 11, 22 ) are examined in detail. Figures 2 and 3 depict that the device with SiO 2 as the nanogap surrounding material yields the best performance judged on the basis of both type of sensitivities. The sensitivity increases 2 to 4 times with k = 11, 22 respectively for the change in surrounding materials of nanogap, i.e., SiO 2 as a replacement of Poly − Si . The value of V th in case of SiO 2 increases with the increment of permittivity of the biomolecules due to the increase in associated field lines. On the other hand, Poly-Si being a good conductor, does not cause any significant change in V th due to the change of biomolecules ; it has thus similar V th for all k(= 2, 11, 22) . Therefore, the simulation results firmly establish the fact that the device with SiO 2 as nanogap lining material is the best performer among the three materials for bio sensing applications. Therefore, the device with SiO 2 is chosen for further optimization.
Structural Optimization
Optimization of nanogap structure, based on its height and width, is performed in this literature for the first time. The maximum length of the nanogap in each side is chosen as one third of the gate length ( L = 75 nm ). in order to maintain proper functionality. Thus a minimum of 25 nm I-shaped layer is maintained. The width of the nanogap in each side is considered as 25 nm, 22.5 nm and 20 nm. Along with that, the height of the nanogap is also optimized with four trial values −25 nm, 30 nm, 35 nm and 40 nm. Larger the nanogap height, greater the gate-metal separation from the channel yields poorer gate control. The ON-current and the ON-current sensitivity ( I ON ) are found to increase with the increase in permittivity of biomolecules.
The present study particularly aimed at the detection of DNA structure ( k ∼ 11 ) [23]. Table 1 and Fig. 4 present the current and threshold voltage sensitivity of the biosensor with different combinations of width and height of nanogap structure fully occupied by the biomolecules. It is observed that with the increase in nanogap height, higher voltage is needed to switch ON the device and the threshold voltage sensitivity also increases ( V th ). Around 15% decrement in I ON is observed for the increase in the height of the nanogap from 25 nm to 40 nm. It is due to the narrowness of the gate dielectric and increment of the distance between channels and gate-metal, which in turn reduces the gate control over the channel. For the similar change in nanogap height, V th increases upto 2 times (width = 20nm ) and 2.5 times (width = 25nm ). Hence, it will be a healthy tradeoff, in which the device with 3 times higher V th is chosen, ignoring the negligible reduction in I ON . Thus, nanogap
Modeling of Surface Potential
The 2D potential profile ( (x, y) ) of the channel underneath the gate is determined from the Poisson's Equation, given by, where, q, N a and si represent charge of an electron, doping concentration of the channel and permittivity of Si-film respectively. The solution of Eq. 1 is a parabolic function of y, [24] and is expressed as follows, The expressions of P 1 (x) , P 2 (x) and P 3 (x) are derived using the boundary conditions, as follows, i) Putting y = 0 in Eq. 2 yields surface potential of the top surface i.e., ii) Electric field at the top channel is expressed as, iii) In the bottom channel, the electric field is calculated as, Due to the symmetrical nature of the device with respect to both the gates, top and bottom channels are having same potential while operating in shorted gate mode, i.e., Hence, from Eqs. 4, 5 and 6, P 3 (x) can be expressed as, Therefore, the generalized expression of potential function becomes, Putting y = 0 in Eq. 8 yields, Replacing the expressions of second order partial fractions from Eqs. 9a, 9b in Eq. 1 yields, . The solution of Eq. 10 is,
Capacitance Associated with Various Filling Patterns in Nano-Gaps
In DM-DGFET, various bio-fluids are characterized through variations in capacitance associated with the nano-gaps filled with those fluids. In real situation, nano-gaps are hardly completely filled rather bio-fluids containing long biomolecules may follow different surface profiles within the nanogaps,and therefore offer different capacitances. Here, we will consider four basic filling patterns that ma be generated by the fluids confined in nanogaps. These are schematically presented in Fig. 5.
In order to estimate the capacitance associated with the nanogaps, we first concentrate on the smallest column containing minimum numbers of bio-molecules, as depicted in Fig. 6(a). The part of the bio-molecules occupies nanogap contribute capacitance C BM , whereas the rest of the nanogaps contributes capacitance C VA . Thus, the equivalent capacitance of the smallest column ( C 1 = C VA ||C BM ) can be expressed as, Similarly, capacitance associated with other columns are calculated for various patterns of bio-molecules in the nano-gap. Few of them are analyzed in the following sub-sections.
Patterns Increasing and Decreasing Inward Patterns
Increasing and decreasing inward patterns are shown in Fig. 5(a). These patterns comprise of two symmetrical sections of dielectric modulated nano-gaps. Each section consists of five columns in which the height of the bio-molecule filled portion of the individual column increases with a step size of T ox ∕5 . C i , i ( i = 1 to 5 ) denote the capacitance and area associated with the i th column respectively. The smallest value of i corresponds to the column that consists of minimum number of bio-molecules. Therefore, the capacitances are calculated as follows, Two sections of dielectric modulated nano-gaps are separated by a column and enclosed by two thin layers (top and bottom) of dielectric. The capacitance associated with the column is mathematically expressed as, C col = ox1 col ∕T ox ; where, ox1 , col are the permittivity of the dielectric and area of the column respectively. Therefore, the total capacitance ( C z ) per unit area associated with two nano-gaps and column is expressed as, t is the total area associated with the nano-gaps and column. Capacitances per unit area associated with the top and bottom layers that enclose the nao-gaps, are denoted by C x and C y , respectively (as shown in Fig. 6); and expressed mathematically as, C x = ox1 ∕T oxpad1 , C y = ox1 ∕T oxpad2 ; where, T oxpad1 and T oxpad2 are the thicknesses of the two layers. Therefore, the total oxide capacitance per unit area ( C ox ) is expressed as,
Concave and Convex Patterns
In concave and convex pattern of dielectric modulated nanogap (as shown in Fig. 5(b)), each side consists of five columns assumed to be partly or fully filled with bio-molecules. The smallest height of bio-molecules filled column is assumed to be T ox ∕3 and the rest portion of height 2T ox ∕3 is vacant (filled by air). The height of bio-molecules increases/decreases by a step height of T ox ∕3 . Assuming, the capacitance and area (13)
Expressions of Coefficients
Expressions of constants such as, A 1 , A 2 can be derived from the continuity equations and built-in potential developed at the interface of two different regions. Built-in potentials developed at the source-channel and drain-channel interfaces are denoted by P 4 and P 5 respectively. The expressions of P 4 and P 5 are given by, The continuity equations of potential at the source/drainchannel interface are expressed as, From Eqs. 11, 19a and 19b, the expressions of A 1 and A 2 are derived as,
Threshold Voltage ( V th )
Threshold voltage is defined as the minimum gate-to-source voltage to make the surface potential equal to the twice of the Fermi potential ( f ,si ) of the channel. It is expressed mathematically as [25], where, x min is the location of the minimum surface potential from the source-channel interface [26], obtained from the following equation,
Sensitivity ( S bio )
Sensitivity(S) has been defined as the change in the threshold voltage with biomolecules trapped in nanogap cavities with respect to the empty cavity [18], given as, It is found that the S bio increases with the increase in the height of the cavity as its permit larger number of molecules to get into it. The literature [18] concludes that larger molecules offer better sensitivity. Streptavidin and other larger molecule of size around 6-7 nm is the best molecule to fit in this device. It is also reported that the device with completely full cavity offers better sensitivity. However, in the full cavity structure as the gate is floating, it should have fabrication issues. It requires body support during fabrication, that in turn nullify the effect of down-scaling.
Drain Current ( I DS )
Drain current model of the DG MOSFET based structure can be represented for its various operating regions, i.e., subthreshold, linear and saturation. Liang et al. in [27] had presented the subthreshold current model which can be used to obtain the I DS,sub . In linear and saturation regions of operation, the drain current model proposed in [28] is used in the present study as follows, where, a and c represent the effect of velocity overshoot [29] and channel length modulation factor [30] respectively. The saturation voltage ( V Dsat ) is calculated from the formula, where, v sat denotes the saturation velocity of electron [28]. Values of , are given in [31,32].
Model Validation
In this section, the proposed model is compared with the simulated data. The transfer characteristics of cocave convex and increasing decreasing patterns considering both the model and simulated data are shown in Fig. 7. Perfect (28) S bio = (V T(empty) − V T(bio) )∕V T(empty) (29) matching between the two is observed, which validates the proposed model.
Results and Discussions
The results obtained from the modeling and simulations (using SILVACO TCAD) of the DM DGFET with Fully Filled Nanogap considering the effects of different biomolecular patterns have been thoroughly analyzed. The threshold voltage and the ON-current sensitivity have been calculated for four different patterns of bio-molecules confined in the nano-gaps. The deviation of the above parameters from the fully filled structure has also been calculated.
In the mirror image pattern, the top and bottom nanogaps are assumed to be exactly same.
In Fig. 8, the transfer characteristics of four different patterns have been depicted. It is clearly observed from the figure that the concave pattern exhibits the highest ON-current leading to a deviation margin of 13%. It also produces the lowest OFF-current. The other three patterns generate 6-9 times larger OFF-current than the concave counterpart.
Although, the fully filled structure of DG-MOSFET based bio-sensor has been analyzed in various literatures, partially (half and quarterly) filled nano-gap may describe the real situation better, which needs special attention. In partially filled nano-gap structure, mirror-like pattern of top and bottom nanogaps has been studied in various reports. However, this pattern is totally dependent on the orientation of biosensor arising from its placing. It may vary depending on placement of the sensor in the bio-molecular solution. For this reason, along with conventional mirror pattern in the bottom nanogap, the same pattern both in the top and bottom nanogaps has also been tested. The mirror type pattern architecture is termed as Dielectric modulated DGMOSFET with Mirror Structure (DM DGMOSFET-MS), whereas the same type architecture is termed as Dielectric modulated DGMOSFET with Same Structure (DM DGMOSFET-SS); these are depicted in Fig. 9(a), (b), respectively. Both of these nano-gap architectures are analyzed in detail and the values of various parameters are given in Tables 2 and 3, as follows.
In Fig. 10, the transfer characteristic of the DM-DGMOS-FET-SS is presented. It also exhibits a similar trend as that of the DM-DGMOSFET-MS. The concave pattern yields the lowest OFF-current and highest ON-current in both the device architectures. ON-Current Sensitivity ( I ON ) for the same type of devices with similar patterns is depicted in Fig. 12. It is found that both the architecture offer almost the same sensitivity. However, in between four patterns, the concave pattern for both DM-DGMOSFET-MS and DM-DGMOSFET-SS shows almost 26-35% higher sensitivity compared to other three patterns.
Researches had mostly considered the ideal situation where the nano-gap is fully filled. However, in reality due to the steric hindrance, probe placement and other fabrication disorders, the nanogap may not be completely filled. The deviation of the threshold voltage ( V th ) in reference to the device with a full-filled cavity is termed as "error", which is The biomolecular pattern having the least error is supposed to be the most perfect i.e., closest to the ideal case. Depending upon the highest sensitivity and lowest error values, a trade-off should be done, which can be used in future. Figure 13 illustrates the V th,Error of DM-DGMOS-FET-MS and DM-DGMOSFET-SS devices for four different patterns. From the figure, it is quite clear that the DM-DGMOSFET-SS follows the same trend as that of the DM-DGMOSFET-MS, but the magnitude of error is almost 60-65% lower for the former device, which is highly appreciable. Among the four patterns, decreasing and concave patterns yield lower V th,Error values in both the cases. The concave pattern showcases the minimum value for DM-DGMOSFET-MS, whereas the decreasing pattern exhibits the minimum value for DM-DGMOSFET-SS. However, their difference is almost negligible. The concave pattern shows a maximum of 33% and 60% lower V th,Error in case of DM-DGMOSFET-MS and DM-DGMOSFET-SS, respectively. However, the decreasing pattern shows a negligible 4 mV lower error than the concave pattern when used in DM-DGMOSFET-SS configuration.
In Fig. 14, I ON Tables 2 and 3 respectively.
It has been found from the study that DM-DGMOSFET-SS depicts better performance than DM-DGMOSFET-MS in every aspect, and the concave pattern emerges as the preferable one in most of the cases. Therefore, DM-DGMOSFET-SS with concave pattern has been chosen for further studies.
Effect of Process Variation on Biosensing Parameters
So far, the study has been carried out assuming the symmetrical device structure. It is evident that the DM-DGMOSFET-SS with concave biomolecular pattern and optimised nanogap offers the best performance in terms of sensitivity and error analysis. In contemporary researches, mainly in H. Im et al. [12], it is stated that the nanogap used in the biosensing process, is an outcome of sensible wet etching procedure. Although the fabrication process are performed with extreme precision, the influence of process variation can not be made totally absent. In this section, a structure which may appear due to the variation in the wet etching process has been studied. Four types of structural variations due to the variation in etching process are considered below.
(a) Nanogaps wider in drain side under both top and bottom gates, (b) Nanogaps wider in source side under both top and bottom gates, (c) Nanogaps wider in drain side under only one gate, (d) Nanogaps wider in source side under only one gate.
The structures are schematically presented in Fig. 15. All these structures are considered to be Asymmetric as they deviate from the ideal symmetric structure. In case Firstly, the sensitivity parameters of the above structures are investigated with the optimized nanogaps, each filled with biomolecules following the concave pattern as described earlier. As both the gates are shorted, variation in gate bias will equally influence the nanogaps under top and bottom gates.
In Fig. 16, the threshold voltage sensitivity ( V th ) of the four devices is exhibited. The figure suggests that the device in Fig. 15(a) and (c), i.e. the devices with wider nanogap in drain side offer ∼ 100% and ∼ 40% higher V th for aligned structure and misaligned structures respectively than its source side counterpart. Therefore, more the vertical SiO 2 layers (shaded Blue in Fig. 15) shift towards source end, stronger gate control and in turn, a higher sensitivity for a biosensor are achieved. Figure 17 depicts the variation in I ON for all the above structures. However, the variations in this biosensing parameter are less than 10%, and thus not much significant. It may be concluded that I ON is quite insensitive to process variation related to formation of the nanogap cavities. Results related to Figs. 16 and 17 are presented in Table 4.
Conclusion
In this paper, an extensive study has been carried out on Dielectric Modulated DGFET to view its suitability as a biosensing device by modifying the structure based on the orientation of the biomolecules within the nanogaps. A novel approach has been made by optimizing the device with respect to the nanogap forming material along with the height and width of it. SiO 2 emerges as the best one for creating the nanogaps by obtaining 200-400% ΔV th than its nearest contender. The largest nanogap height (40 nm) and width (25 nm) taken for optimization, have been selected based on the sensitivity. The effect of different profiles arising from distribution of biomolecules in nanogap cavities has also been studied in this paper. A mirror type profile (DM-DGFET-MS) and the same type profile (DM-DGFET-SS) have been studied. Results show that for both DM-DGFET-MS and DM-DGFET-SS, concave surface profile of biomolecular solution in the nanogap is having respectively 26% and 35% better sensitivity than the other cases studied here. Its preformation deviation ( ΔV th,Error , ΔI ON,Error ) from ideal sensor with fully-filled nanogaps has been also calculated and concave profile yields 33% lower error value than its nearest counterpart for both DM-DGFET-MS and DM-DGFET-SS configurations. Therefore, the study suggests that this type of FET based biosensor can be a good option for the solution, which follows a concave profile in a confined region due to its liquid property like viscosity and surface tension. Influence of changing the degree of asymmetry on the biosensor performance and finding the solutions that can offer concave profiles has also been studied. The devices with wider drain side nanogap offers ∼ 100% and ∼ 40% higher ΔV th for aligned structure and misaligned structures respectively than its source side counterpart. Therefore, more the vertical SiO 2 layers shift towards source end, stronger gate control and in turn, a higher sensitivity for a biosensor. In case of presently found SARS COV-2 virus (Coronavirus) detection, the nanogap structure and the dielectric constant should be modified and optimized according to the size of the Coronavirus (60-140 nm) for better sensitivity and efficient detection with the same structure. si + fb − P 6 P 9 =1 − P 7 ; P 10 = 2 qNa si + fb P 11 =4P 7 P 9 − 1 P 12 =2P 10 + 4 f ,si − 4P 6 P 9 − 4P 7 P 8 P 13 =4P 6 P 8 − 4 2 f ,si − P 2 10 − 4 f ,si P 10 Conceptualization, Methodology, Supervision, Correction of manuscript, Reviewing and Editing.
Funding No funding is available for this work.
Availability of data and materials Authors are not willing to disclose the data and materials related to this research, as this work is a part of thesis.
Declarations
Competing interests The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Ethics Approval For this type of study formal consent is not required.
Consent to participate
Not applicable for this type of study.
Consent to publication
Not applicable for this type of study.
Research involving Human Participants and/or Animals
Not applicable for this type of study.
Informed consent Not applicable for this type of study. | 2021-10-19T16:42:20.865Z | 2021-09-20T00:00:00.000 | {
"year": 2022,
"sha1": "62cec814e1eec780a026c2494048fced774e935e",
"oa_license": null,
"oa_url": "https://link.springer.com/content/pdf/10.1007/s12633-021-01617-z.pdf",
"oa_status": "BRONZE",
"pdf_src": "PubMedCentral",
"pdf_hash": "55f964a88d32446902467d3b90db75f215b2be11",
"s2fieldsofstudy": [
"Engineering",
"Biology",
"Chemistry"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
88521664 | pes2o/s2orc | v3-fos-license | Spatial Regression and the Bayesian Filter
Regression for spatially dependent outcomes poses many challenges, for inference and for computation. Non-spatial models and traditional spatial mixed-effects models each have their advantages and disadvantages, making it difficult for practitioners to determine how to carry out a spatial regression analysis. We discuss the data-generating mechanisms implicitly assumed by various popular spatial regression models, and discuss the implications of these assumptions. We propose Bayesian spatial filtering as an approximate middle way between non-spatial models and traditional spatial mixed models. We show by simulation that our Bayesian spatial filtering model has several desirable properties and hence may be a useful addition to a spatial statistician's toolkit.
Introduction
Spatially referenced data arise in sundry fields of inquiry, e.g., radiology, neuroscience, epidemiology, marketing, ecology, agriculture, forestry, geography, and climatology. Because spatial data tend to exhibit spatial dependence (usually attractive but sometimes repulsive or even a combination of the two), a number of statistical models, collectively referred to as spatial models, have been developed for analyzing such data (Banerjee et al., 2014). Since dependence is customarily considered to be a second-moment phenomenon, nearly all spatial models are second-moment models. In fact, second-moment methods so dominate the field that allowing "second-moment" to be a defining characteristic of spatial models would not be unreasonable. Here we revisit this important assumption, and discuss what the assumption implies regarding the data-generating process. Our goals are to (i) provide an appreciation of the assumptions underpinning our models, and (ii) understand how these assumptions may impact the results of a spatial regression analysis.
Often, the aim of a spatial analysis is to do inference regarding the effects β = (β 1 , . . . , β p ) of a number of spatially structured covariates X = (x 1 · · · x p ). By accounting for spatial dependence in excess of that explained by Xβ, it is claimed, spatial regression models permit more reliable inference for β, and better prediction, than do non-spatial models. But whether a given spatial model yields improved regression inference and/or prediction depends on the posited datagenerating mechanism (i.e., the "true" model from which the data arose) as well as the properties of said spatial model.
The rest of this manuscript is organized as follows. In Section 2 we review the class of spatial models and discuss them as data-generating mechanisms. In Section 3 we discuss how our modeling assumptions impact spatial regression inference and prediction. In Section 4 we discuss computing for spatial regression. In Section 5 we apply six regression models to simulated outcomes in an effort to assess their performance in a challenging, but realistic, setting informed by the discussion in Sections 2 and 3. We develop Bayesian spatial filtering, a new approach to spatial regression, in Section 6. We then conclude in Section 7.
Spatial Data: Ontology versus Phenomenology
In this section we will examine spatial models as data-generating mechanisms. We begin by reviewing the most commonly applied spatial regression modelspartly to introduce useful notation, and partly to highlight the models' secondorder components. Then we will discuss what sort of generating mechanism we are assuming when we apply each of these models.
2.1. A Brief Review of Spatial Regression Models. Let Z = (Z 1 , . . . , Z n ) be the response vector, where Z i is observed at spatial location s i . If said locations are points residing in a continuous spatial domain (e.g., a Borel subset of R 2 or near the surface of a biaxial ellipsoid), the outcomes are said to be point-level or geostatistical. If s i instead refers to an area over which measurements have been aggregated (e.g., county, voxel, Census tract) to produce Z i , the outcomes are said to be areal.
Along with Z we have p covariates x 1 , . . . , x p , where x j = (x 1j , . . . , x nj ) and x ij , like Z i , was measured at spatial location s i . Presumably, each of x 1 , . . . , x p is spatially structured and so may be useful for explaining a significant portion of the spatial variation exhibited by Z.
It is often the case that Z exhibits additional spatial structure, i.e., spatial structure that cannot be explained by Xβ alone. The most common means of accounting for this additional structure is to augment the linear predictor Xβ with spatially dependent random effects. This leads to the spatial generalized linear mixed model (SGLMM), for which the transformed conditional mean vector is given by where g(µ) = g(µ 1 ), . . . , g(µ n ) , g is a link function, µ i = E(Z i | ψ i ), and ψ = (ψ 1 , . . . , ψ n ) are latent spatially dependent random effects. Conditional on ψ, the outcomes are assumed to be independent draws from a suitable distribution (common choices are binomial, Gaussian, and Poisson). Whether the spatial domain is continuous (Diggle et al., 1998) or discrete (Besag et al., 1991), the spatial random effects are nearly always assumed to be multinormal with mean 0 (Haran, 2011), and so variants of the SGLMM are distinguished by alternative specifications of ψ's covariance matrix Σ, which is usually structured to accommodate (or induce) spatial clustering.
For areal data, spatial proximity is defined in terms of an undirected n-graph G = (V, E), where V = {1, . . . , n} are the vertices and E ⊂ V × V are the edges. The vertices of G represent the areal units, and the edges of G represent adjacencies among the units (usually, a pair of vertices share an edge iff their corresponding areal units share a boundary). In this setting Σ is typically a function of G's adjacency matrix-A = (A uv = 1{(u, v) ∈ E})-and perhaps one or more dependence parameters. A famous possibility is the proper conditional autoregressive (CAR) model, in which Σ is equal to (τ Q) −1 , where τ > 0 is a smoothing parameter and Q = diag(A1) − ρA, with ρ ∈ [0, 1) behaving like a range parameter. This implies that ψ is a Gaussian Markov random field (GMRF) (Rue and Held, 2005), which implies that ψ u and ψ v are independent conditional on their neighbors iff areal units u and v are not adjacent. That G's adjacency structure corresponds to a conditional independency structure for ψ is widely considered to be an appealing characteristic of this and similar definitions of Σ. Unfortunately, the resulting marginal dependence structure for ψ may be counterintuitive or even pathological (Wall, 2004;Assunção and Krainski, 2009).
For point-level observations, the elements of Σ are given by a spatial covariance function: Σ uv = k(s u , s v ). A common choice for k is the Mátern covariance function, which is given by where s u − s v is the distance between s u and s v , σ 2 is the common variance, ν > 0 is a smoothness parameter, Γ denotes the gamma function, ρ > 0 is a range parameter (often referred to as the characteristic length scale), and K ν is the modified Bessel function of the second kind. This defines a Gaussian process (Rasmussen and Williams, 2006). Since k depends only on distances between locations, the process is stationary, i.e., translation invariant. If the norm is the Euclidean norm, the process is also isotropic, which is to say that the variability is the same in all directions.
A second approach to accommodating/inducing extra-Xβ spatial structure in areal outcomes is to augment Xβ with an autocovariate in place of the SGLMM's random effects, in which case the linear predictor is given by where µ i = E(Z i | {Z j : (i, j) ∈ E}) and dependence parameter κ captures the "reactivity" of the outcomes to their neighbors, conditional on the independence expectations E(Z | κ = 0) = g −1 (Xβ). (A positive value of κ implies spatial attraction while a negative value implies repulsion, and larger |κ| produces/indicates stronger dependence.) This defines the automodel (Besag, 1974), a type of Markov random field (MRF) model (Kindermann and Snell, 1980;Clifford, 1990). The proper CAR model described above is a special case. Another noteworthy example is the autologistic model (Caragea and Kaiser, 2009;Hughes et al., 2011) for binary data, for which (2) takes the form for i = 1, . . . , n.
A third, and newer, type of spatial regression model is the spatial copula regression model (SCRM) (Kazianka and Pilz, 2010;Hughes, 2015). Unlike the SGLMM and automodel, the SCRM is a marginal model, which is to say the regression coefficients have the same interpretation as in the classical GLM (McCullagh and Nelder, 1983). A common choice for the joint component of the spatial CRM is the spatial Gaussian copula where the u i are standard uniform, Φ 0,R denotes the cdf of the multinormal distribution with mean vector 0 and spatial correlation matrix R, and Φ −1 is the standard normal quantile function. See Joe (2014) for an extensive treatment of copula models, and Kolev and Paiva (2009) for a review of copula-based regression models.
The copula can be applied to the outcomes directly, or be employed in a hierarchical fashion. The gamma-Poisson model provides an intuitive example of the latter: where P denotes the Poisson distribution, G denotes the gamma distribution, µ i = g −1 (x i β), and F i is the G(νµ i , ν) cdf. In this formulation the copula is applied to the λ i (which are marginally gamma and exhibit Gaussian dependence), and so the outcomes are dependent because the λ i are dependent.
Two additional spatial regression models are the simultaneous autoregressive model (Cressie, 1993) and the clipped random field (De Oliveira, 2000). Although interesting, these models are not applied as often as the models described above, and so, in the interest of brevity, we will not consider them further in this work.
2.2.
Interpreting Spatial Regression Models. What do the above mentioned models-the SGLMM, the automodel, and the SCRM-mean if we attempt to grant ontological status to their second-order components? This is clearly not an issue for X since we are in possession of it and believe it to be more fundamental than the outcomes (in the sense that much of the spatial variation exhibited by the response can be attributed to X). What we seek are equally fundamental interpretations of the models' dependence components.
Let us first consider the SGLMM, which induces extra-Xβ spatial variation by augmenting the classical linear predictor with spatially dependent random effects ψ. To what aspect of reality does ψ refer? A prima facie interpretation of ψ would lead us to conclude that ψ is an unobservable realization of some spatial process (just as each column of X is an observable realization of some spatial process) and that said process acts on the outcomes on link scale and in an additive fashion. But this interpretation of ψ does not explain the extra-Xβ spatial variation in the outcomes. This interpretation merely accommodates, i.e., reveals the pattern of, that additional variation but cannot describe its origin. That is, this apparently ontological interpretation of ψ is, in fact, phenomenological-is, in fact, no more fundamental than the outcomes themselves.
It is perhaps just as difficult to tie the automodel's autocovariate term κA{Z − g −1 (Xβ)} to (non-mathematical) reality. Since the autocovariate, unlike ψ, involves Xβ, one might argue that the autocovariate is more fundamental than ψ. But the autocovariate is also self-referential, i.e., it contains the response we aim to explain. And so it is not clear how one might arrive at a sensible realist interpretation of the autocovariate term. The term does admit an intuitive phenomenological interpretation, however: for the automodel, extra-Xβ spatial variation is defined, quite explicitly, as localized departures from the independence expectations g −1 (Xβ). We might attach this same interpretation to the SGLMM, although there the mechanism of departure from the independence expectations is less explicit and is not self-referential.
The copula-based model, whether the copula is applied directly or hierarchically, is a rather different sort of model since it does not induce/accommodate extra-Xβ spatial variation on the scale of the link function. Instead, the copula acts by way of quantile transformations. To see this, consider the stochastic form of the copula model, where we apply the copula to the outcomes (in contrast to the hierarchical formulation given above): Here, extra-Xβ spatial variation originates in the ψ i , carries over to the U i (which are marginally standard uniform and exhibit Gaussian dependence), and finally influences the outcomes through the quantile transformations F −1 i (which also incorporate Xβ). That is, the copula does not induce extra-Xβ variation by additively perturbing Xβ (or perturbing the λ i in any fashion) but instead pushes the Z i away from the λ i by inducing a spatial pattern among the U i .
Does the copula represent some real-world mechanism? The answer must be no since ψ in the copula model serves precisely the same role, conceptually, as does ψ in the SGLMM. Both models can be viewed as latent Gaussian models, and what distinguishes them is merely the way in which the latent Gaussian random variable ψ obscures g −1 (Xβ).
And so it appears that the dependence components of commonly applied spatial regression models do not lend themselves to realist interpretations but are instead merely instrumental. The dependence components of these models may be capable of generating what we have termed extra-Xβ spatial variation, but the models are unable to explain spatial variation in the response in the same sense that Xβ can.
2.3. Extra-Xβ Spatial Variation as the Result of Model Underspecification. Model underspecification offers a plausible realist explanation for extra-Xβ spatial variation. Specifically, we might suppose that where the columns of X are unmeasured spatial predictors, γ their effects. This implies that extra-Xβ spatial variation is a first-moment phenomenon, i.e., the spatial dependence among the outcomes is due entirely to spatial structure among the predictors X and X . This view demystifies the spatial regression problem and allows us to analyze the problem using intuitive and well-understood ideas regarding ordinary regression modeling (i.e., regression modeling for independent outcomes).
Spatial Regression Models as Data-Analytic Tools
In the setting of ordinary regression, consider four possibilities for a given model: (A) the model is correct; (B) the model is underspecified, i.e., one or more important predictors is missing; (C) the model is overspecified, i.e., one or more predictors is redundant; or (D) the model contains extraneous predictors, i.e., one or more predictors is not related to the response or to any other predictor.
If the true model is linear with spherical Gaussian errors, say, (A) permits unbiased estimation of the regression coefficients and unbiased prediction, and yields accurate standard errors; (B) permits unbiased estimation of β only if X is not correlated with X, and leads to biased prediction and inflated standard errors; (C) permits unbiased estimation of the regression coefficients and unbiased prediction, but standard errors may be inflated dramatically due to collinearity; and (D) permits unbiased estimation of the regression coefficients and unbiased prediction, but standard errors may be inflated dramatically if the number of extraneous predictors is large.
We mentioned in Section 1 that employing a spatial regression model to account for extra-Xβ spatial variation in the response can allegedly permit more reliable inference for β than a non-spatial model can. Assuming (3), and in light of (B), this claim implies that some spatial model(s) can remedy the absence of X , resulting in (i) more accurate estimation of β, better (ii) coverage and (iii) type II error rates, and (iv) more accurate prediction. Can any spatial regression model accomplish all of these tasks? That is, if the data-generating mechanism is (3), can any spatial regression model, when employed not as data-generating mechanism but as dataanalytic tool, accomplish (i-iv)? Regarding (i), estimation of β will be biased, perhaps badly so, unless the unmeasured predictors X are not correlated with the measured predictors X. Some spatial models may be able to provide a surrogate for X γ, but that is not the same as revealing X , for it is the relationship between X and X , not the structure of X γ, that matters when estimating β. In other words, no spatial model can remedy unmeasured confounding.
The absence of X need not lead to poor prediction, however. Recall that the SGLMM and the automodel augment Xβ with, respectively, spatial random effects ψ or the autocovariate κA{Z − g −1 (Xβ)}. Presumably, each of these terms aids prediction by acting as a surrogate for X γ. The SCRM (in the form described above, at least) does not augment Xβ, and so we should expect that model to offer poorer predictive performance than the SGLMM and automodel.
Although the SGLMM offers better prediction than a non-spatial model or a copula-based model, the improvement is costly. To see this, it will prove useful to rewrite the SGLMM's linear predictor as where P x = X(X X) −1 X is the orthogonal projection onto C(X), and I denotes the n × n identity matrix. This form of the linear predictor allows us to see that the SGLMM is overspecified as well as underspecified: since C(P x ) = C(X), the model is perfectly collinear. This trait of the SGLMM-which inflates the variance ofβ, often dramatically, as per (C) above-is called spatial confounding (Clayton et al., 1993;Reich et al., 2006;Paciorek, 2010;Hodges and Reich, 2010).
The confounding evident in (4) can be eliminated by removing P x ψ, thereby constraining smoothing to the residual space C(X) ⊥ . This technique is called restricted spatial regression (RSR) (Hodges and Reich, 2010). RSR not only obviates spatial confounding but can also permit considerable dimension reduction and much more time-and space-efficient computation (Hughes and Haran, 2013;Hughes, 2014). Hanks et al. (2015) acknowledged the potential computational benefits of RSR but cautioned that RSR may lead to erroneous inference for β if (1) is the true model. According to Hanks et al. (2015), the RSR model, which has linear predictor implicitly assumes that all variation in the direction of X can be explained by Xβ, whereas the traditional SGLMM can accommodate additional variation in the direction of X.
To support the latter claim they rewrite (4) as Similarly, we can rewrite our posited data-generating model (3) as (3) can generate additional variation in the direction of X. Hence, (6) and (7) show that the RSR model- † and ‡-can, in fact must, accommodate extra variation in the direction of X. That is, when we fit an RSR model, we are estimating δ, not β, and this is true whether the "true" linear predictor is Xβ + ψ or Xβ + X γ (assuming X is correlated with X).
In any case, the crux of the matter is the absence of X . It is the absence of X that prevents accurate estimation of β (if X is correlated with X), and neither the traditional SGLMM nor the RSR model provides a remedy. What both models do provide is more accurate prediction (by furnishing a stand-in for X γ). The traditional SGLMM accomplishes this at the cost of spatial confounding and a large (with respect to both time and storage) computational burden. RSR successfully addresses these problems and, if applied properly, yields significantly better predictive performance than the traditional model (see Section 5 below).
Although unmeasured spatial confounding cannot be remedied (in general, or entirely, at least), Hanks et al. (2015) suggest another avenue by which inference for β might be improved. They note that the RSR model may suffer from a low coverage rate for β, and they recommend the larger credible region that results from posterior predictive inference (Gelman et al., 2013) according tõ where δ (k) is the kth sample from δ's posterior, and Σ (k) = Σ(ξ (k) ) is the value of Σ produced from the kth update of the covariance parameters ξ (k) . In Section 5 we study how this approach performs in practice.
The spatial confounding caused by adding ψ to Xβ may lead us to suspect that the automodel, which adds κA{Z − g −1 (Xβ)} to Xβ, is likewise confounded. This is, in fact, the case for the traditional automodel, which has linear predictor g(µ) = Xβ + κAZ. Caragea and Kaiser (2009) studied this problem in the context of the autologistic model and showed that centering the autocovariate alleviates spatial confounding for the automodel: Since the SCRM does not augment Xβ, the SCRM is not spatially confounded. But the SCRM has no way of fitting extra-Xβ spatial variation, and so we should expect the SCRM's predictive performance to be no better than that of the ordinary GLM.
Some Computational Aspects of Spatial Regression
Now we turn our attention to computational issues involved in spatial regression. This topic could easily fill a book, and so our goal is not to provide a thorough treatment. We aim to describe only the most important aspects of computing for spatial regression, and, in so doing, to set the stage for the simulation study that is the subject of Section 5. We will focus on models for binary areal data, for four reasons: (1) binary spatial data are common; (2) binary outcomes, being relatively uninformative, present the most challenging case; (3) the automodel is an areal model; and (4) although spatial counts are common, the auto-Poisson and autonegative binomial models permit only negative spatial dependence. (This limitation of the auto-Poisson and autonegative binomial models can be overcome through Winzorization (Kaiser and Cressie, 1997), but the resulting models are, perhaps surprisingly, not often applied.) 4.1. Computing for the Autologistic Model. Maximum likelihood and Bayesian inference for the autologistic model are complicated by an intractable normalizing function. To see this, assume the underlying graph has clique number 2, in which case the joint pmf of the centered model is where θ = (β , κ) and is the normalizing function (Hughes et al., 2011). The normalizing function is intractable for all but the smallest datasets because the sample space {0, 1} n contains 2 n points. There are many techniques for doing inference in the presence of intractable normalizing functions (see, e.g., Park and Haran, 2017). One way is to avoid the normalizing function altogether. For the autologistic model, this can be accomplished by considering the so called pseudolikelihood (PL), which is a composite likelihood (Lindsay, 1988) of the conditional type. Each of the n factors in the pseudolikelihood is the likelihood of a single observation, conditional on said observation's neighbors: where z i is the observed value of Z i , and a i is the ith row of A. Since the p i are free of the normalizing function, so is the log pseudolikelihood, which is given by Although (8) is not the true log likelihood unless κ = 0, Besag (1975) showed that the maximum pseudolikelihood estimator (MPLE) converges almost surely to the maximum likelihood estimator (MLE) as the lattice size goes to ∞ (under an infill, as opposed to increasing domain, regime). For small samples the MPLE is less precise than the MLE (and the Bayes estimator), but point estimation of β is generally so poor for small samples that precision is unimportant. When the sample size is large enough to permit accurate estimation of β, the MPLE is nearly as precise as the MLE (Hughes et al., 2011).
We find the MPLEθ by optimizing pl (θ). This is computationally efficient even for larger samples. To speed computation even further, we can use a quasi-Newton (Byrd et al., 1995) or conjugate-gradient algorithm and supply the score function where p = (p 1 , . . . , p n ) and D = diag{ζ i (1 − ζ i )}.
Confidence intervals can be obtained using a parametric bootstrap (Efron and Tibshirani, 1994) or sandwich estimation. For the former we generate b samples from π(Z |θ) and compute the MPLE for each sample, thus obtaining the bootstrap sampleθ (1) , . . . ,θ (b) . Appropriate quantiles of the bootstrap sample are then used to construct approximate confidence intervals for the elements of θ.
The second approach for computing confidence intervals is based on (Varin et al., 2011) √ where I −1 pl (θ)J pl (θ)I −1 pl (θ) is the Godambe information matrix (Godambe, 1960). The "bread" in this sandwich is the inverse of the information matrix I pl (θ) = −E∇ 2 pl (θ), and the "filling" is the variance of the score: J pl (θ) = E∇∇ pl (θ). We use the observed information (computed during optimization) in place of I pl and estimate J pl using a parametric bootstrap. For the bootstrap we simulate b samples Z (1) , . . . , Z (b) from π(Z |θ) and estimate J pl aŝ Because the bootstrap sample can be generated in parallel and little subsequent processing is required, these approaches to inference are very efficient computationally, even for large datasets. We note that sandwich estimation tends to be much faster than the full bootstrap. Moreover, asymptotic inference and bootstrap inference yield comparable results for practically all sample sizes because (9) is not, in fact, an asymptotic result. This is because the log pseudolikelihood is approximately quadratic with Hessian approximately invariant in law, which implies that the MPLE is approximately normally distributed irrespective of sample size (Geyer, 2013).
4.2.
Computing for the Traditional SGLMM. The traditional SGLMM is typically applied using MCMC for Bayesian inference, in which case the model for ψ might be considered a prior distribution. Whether the model is viewed from a Bayesian or a classical point of view, or is applied to areal data or point-level data, the computational bottleneck is the handling of ψ's precision matrix Σ −1 .
For point-level outcomes the customary approach to this problem is to avoid inversion of Σ in favor of Cholesky decomposition followed by a linear solve. Since Σ is typically dense, its Cholesky decomposition is in O(n 3 ), and so the time complexity of the overall fitting algorithm is in O(n 3 ). This considerable computational expense makes the analyses of large point-level datasets time consuming or infeasible. Consequently, efforts to reduce the computational burden have resulted in an extensive literature detailing many approaches, e.g., process convolution (Higdon, 2002), fixed-rank kriging (Cressie and Johannesson, 2008), Gaussian predictive process models (Banerjee et al., 2008), covariance tapering (Furrer et al., 2006), approximation by a Gaussian Markov random field (Rue and Tjelmeland, 2002;Lindgren et al., 2011), integrated nested Laplace approximations (Rue et al., 2009), and nearest-neighbor Gaussian process models (Datta et al., 2016).
Fitting the areal version of the model can also be burdensome even though the areal model is parameterized in terms of Σ −1 and Σ −1 is sparse. It is well known that a univariate Metropolis-Hastings algorithm for sampling from the posterior distribution of ψ leads to a slow mixing Markov chain because the components of ψ exhibit strong a posteriori dependence. This has led to a number of methods for updating the random effects in a block(s). Constructing proposals for these block updates is challenging, and the improved mixing comes at the cost of increased running time per iteration (see, for instance, Knorr-Held and Haran et al., 2003;Haran and Tierney, 2010).
The large dimension of ψ and the slowness of mixing together imply a large storage requirement too. If RAM capacity is insufficient the samples can be stored in a file-backed structure, but this solution is hardly ideal since accessing secondary storage is many orders of magnitude slower than accessing RAM. . This operator appears in a generalized form of Moran's I (a popular nonparametric measure of spatial dependence for areal data (Moran, 1950)), which is given by (This becomes Moran's I when P x is replaced with n −1 11 , i.e., when X = 1.) Boots and Tiefelsdorf (2000) showed that (1) the (standardized) spectrum of M x comprises the possible values for I x , and (2) the eigenvectors comprise all possible mutually distinct patterns of clustering residual to C(X) and accounting for G. The positive (negative) eigenvalues of M x correspond to varying degrees of positive (negative) spatial dependence, and the eigenvectors associated with a given eigenvalue (ω i , say) are the patterns of spatial clustering that data exhibit when the dependence among them is of degree ω i . In other words, the eigenvectors of M x form a multiresolutional spatial basis for C(X) ⊥ that exhausts all possible patterns that can arise on G. Three Moran basis vectors are shown in Figure 1. Since we do not expect to observe repulsion in the phenomena to which these models are usually applied, we can use the spectrum of the operator to discard all repulsive patterns, retaining only attractive patterns for our analysis (although it can be advantageous to accommodate repulsion (Griffith, 2006)). By retaining only eigenvectors that exhibit positive spatial dependence, we can usually reduce the model dimension by at least half a priori. And Hughes and Haran (2013) showed that a much greater reduction is possible in practice, with 50-100 eigenvectors being sufficient for most datasets. Moreover, a simple spherical Gaussian proposal distribution for η performs well because the elements of η are approximately a posteriori uncorrelated owing to the orthogonality of the Moran basis.
Although using a truncated Moran basis dramatically reduces the time required to draw samples from the posterior, and the space required to store those samples, this approach does incur the substantial up-front burden of computing and eigendecomposing M x . The efficiency of the former can be increased by storing A in a sparse format (Furrer and Sain, 2010) and parallelizing the matrix multiplications. And we can more efficiently obtain the desired basis vectors by computing only the first q eigenvectors of M x instead of doing the full eigendecomposition. This can be done using the Spectra library (Qiu, 2017), for example. We note that Guan and Haran (2016) recently developed an approach to RSR for point-level data. Their approach is based on random projections (Sarlos, 2006;Halko et al., 2011;Banerjee et al., 2013).
4.4.
Computing for the SCRM. The hierarchical copula model and the direct copula model pose rather different computing challenges. And that is not the only important difference between the two models. A sufficiently substantive discussion of this issue is beyond the scope of this article, but it is worth mentioning that the hierarchical SCRM may be more appealing from a modeling point of view (Musgrove et al., 2016) but suffers from certain limitations when employed as a data-analytic tool (Han and De Oliveira, 2016). For this reason we will focus on copCAR (Hughes, 2015), a form of the direct copula model, here and in Section 5. copCAR employs the CAR copula, a Gaussian copula (or other suitable copula) based on the proper CAR described above. Recall that the proper CAR has precision matrix τ Q, where Q = diag(A1) − ρA. Since a copula is scale free, we do not need τ , but omitting τ does not leave us with an inverse correlation matrix because the variances σ 2 = (σ 2 1 , . . . , σ 2 n ) = vecdiag(Q −1 ) are not equal to 1. We could rescale Q so that its inverse is a correlation matrix, i.e., we could construct a Gaussian copula using Λ 1/2 QΛ 1/2 , where Λ = diag(σ 2 ). In fact, rescaling is necessary in the general case lest the model be unidentifiable with respect to the variances. For copCAR, however, rescaling is unnecessary because the variances σ 2 are not free parameters; the variances are entirely determined by Q's only dependence parameter, ρ, which is not a scale parameter. Since using Q itself leads to an identifiable model, rescaling would merely slow computation. Thus copCAR employs the CAR correlation structure indirectly, by using Q along with the variances σ 2 . This leads to the CAR copula: where Φ σi denotes the distribution function of the normal distribution with mean 0 and variance σ 2 i . The model specification can be completed by pairing the CAR copula with a set of suitable marginal distributions for the outcomes. The copula and the marginals are linked by way of the probability integral transform. Specifically, if Z = (Z 1 , . . . , Z n ) are the observations, and F 1 , . . . , F n are the desired marginal distribution functions, we have Z i = F −1 i (U i ), where U = (U 1 , . . . , U n ) is a realization of the copula. We will assume Bernoulli marginal distributions with expectations Unless n is quite small, computation of the copCAR likelihood is infeasible (when the marginals are discrete) because the multinormal cdf is unstable in high dimensions and because the likelihood contains a sum of 2 n terms. For Bernoulli marginals, a composite marginal likelihood approach (Varin, 2008) performs well. The objective function is a product of pairwise likelihoods: where H ij denotes the bivariate Gaussian copula with covariance matrix This implies the log composite likelihood where Optimization of (11) yieldŝ θ cml .
Whileβ cml tends to be approximately normally distributed,ρ cml tends to be left skewed when ρ is close to 1. This implies that asymptotic inference for ρ tends to result in poor coverage rates. This can be avoided by using a parametric bootstrap, but a parametric bootstrap is rather burdensome computationally. Luckily, a simple reparameterization yields an approximately normally distributed estimator because the objective function for the reparameterized model is approximately quadratic with constant Hessian (Geyer, 2013). Specifically, for θ = (β , Φ −1 (ρ)) , we have √ n(θ cml − θ) ⇒ N {0, I −1 cml (θ)J cml (θ)I −1 cml (θ)}, where I cml is the Fisher information matrix and J cml is the variance of the score: Note that the asymptotic covariance matrix for the CML estimator is a Godambe information matrix (Godambe, 1960) because cml is misspecified. The matrix can be estimated in the same manner as we described above for the autologistic model.
The form of cml given in (11) requires four evaluations of the bivariate normal cdf for each of the n(n − 1)/2 pairs of observations. This computation is rather expensive even for fairly small samples.
In a spatial setting we can expect a pair of nearby observations to carry more information about dependence than a pair of more distant observations. Others have found, in a variety of contexts, that retaining the contributions to the CML made by more distant pairs of observations decreases not only the computational efficiency of the procedure but also the statistical efficiency of the estimator (Varin and Vidoni, 2009;Apanasovich et al., 2008). Hence, we allow only pairs of adjacent observations to contribute to the copCAR CML. This means replacing (11) with If thoughtfully implemented, optimization of (12) is efficient enough to permit analysis of larger areal datasets.
Application of Various Spatial Regression Models to Simulated Binary Data
Our simulation study focused on binary areal outcomes, for the reasons given above. We simulated those outcomes on the 30 × 30 square lattice. This data size kept the computational burden manageable while giving all of the approaches a fighting chance at performing well. Our mean surface was a function of the x and y coordinates of the lattice points, x = (x 1 , . . . , x n ) and y = (y 1 , . . . , y n ) , respectively, which we restricted to the unit square centered at the origin. While simulating data we used linear predictor β 0 + β 1 x 1 + β 2 x 2 , where x 1 = x and x 2 = x + y + 3s. Vector s exhibits a smaller-scale spatial pattern than do x and y; this lends more interesting spatial structure to the mean surface and ensures that x 1 and x 2 are substantially, but not strongly, correlated (cor(x 1 , x 2 ) = 0.45 rather than 0.71). We let β = (0.2, 1, 1) , which implies a mean vector equal to These means are shown in Figure 2. Predictor x 2 was our unmeasured confounder and source of extra-Xβ spatial variation. That is, we analyzed the data using X = (1 x 1 ), which implies that X = x 2 . More specifically, to each of 100 simulated datasets we applied six models: (1) the ordinary logistic regression model with linear predictor β 0 + β 1 x 1 ; (2) the centered autologistic model having regression component β 0 + β 1 x 1 ; Table 1. We see that the RSR approach of Hughes and Haran performed better than the other approaches. The RSR estimator of β 1 has the smallest bias and mean squared error, and strikes the best balance between coverage rate and type II error rate. The RSR model also offers the most accurate prediction. The traditional CAR model, along with the RSR approach of Hanks et al., resulted in very high coverage rates at the cost of very high type II error rates. The other three models performed poorly with respect to coverage rate and prediction. Predictions for a single dataset are shown in Figure 3. The autologistic model and the CAR model clearly undersmooth. The CAR model's undersmoothing is less dramatic, but it is perhaps surprising that the CAR model undersmoothes at all given that it has n spatial random effects. (Note that we could forceψ to be smoother by using Q k (k ≥ 2) in place of Q (Rue and Held, 2005).)
Bayesian Spatial Filtering
In this section we will develop, and assess the performance of, Bayesian spatial filtering, which possesses the computational advantages and good predictive performance of RSR while allowing for some advantages in regression inference. We begin by describing classical spatial filtering.
6.1. Classical Spatial Filtering. In developing the SAMM, Hughes and Haran (2013) drew inspiration from spatial filtering (Griffith, 2003), which uses a basis expansion to accommodate any extra-Xβ spatial pattern exhibited by the response vector, resulting in conventional residuals, i.e., residuals having at most trace spatial dependence. This implies that spatial filtering can reveal extra-Xβ structure while permitting the analyst to apply ordinary, well-understood diagnostic techniques to the residuals.
The basis used most often in spatial filtering are eigenvectors of the Moran operator for 1: M 1 = (I − n −1 11 )A(I − n −1 11 ). This yields vectors that reside in C(1) ⊥ . (Recall that the SAMM employs basis vectors from C(X) ⊥ , where X typically contains 1 along with one or more spatially structured predictors.) Considerable dimension reduction can be achieved by using only q n basis vectors. If we store said vectors as the columns of matrix F n×q , say, the filtering linear predictor can be written as where η is once again a q-vector of coefficients.
Since constructing F requires that M 1 be computed and eigendecomposed, it is clear that spatial filtering and the SAMM have much in common from a computational point of view. There are two key differences, however. First, to our knowledge, there are no Bayesian approaches for spatial filtering; practitioners estimate η by optimizing a likelihood or a composite likelihood. The choice of objective pp RSR (q = 100) function of course has a substantial impact on computational complexity. And second, the columns of F can be chosen using any of a number of methods (three will be discussed shortly). Those methods vary greatly in sophistication and computational complexity. It is not clear how they compare to one another with respect to quality of regression inference or quality of prediction, however. Chun et al. (2016) recommend that the first q 0 = n + 1 + exp[2.148 − {6.1808 (z mi + 0.6) 0.1742 }/n 0.1298 eigenvectors be included initially in a stepwise, ordinary GLM analysis with a significance level of 0.2. Here, n + is the number of positive eigenvalues of M 1 and z mi is the z score of Moran's I for the response. For a binary response, another possibility is to do a two-sample t test for each of the first few hundred eigenvectors, where the eigenvector of interest is treated as the response and Z is used as the grouping variable. Any eigenvector that yields a p-value smaller than 0.1, say, is then included in the analysis. In this scheme, the number of variables may or may not be further reduced using a stepwise procedure.
A third approach to spatial filtering is to include Moran eigenvectors in a spatial model and use that model's dependence component to decide which eigenvectors to retain. For example, one might use some procedure to choose the Moran eigenvectors that lead toκ ≈ 0 for an appropriate automodel, orρ ≈ 0 for a model that employs the proper CAR. It is this technique for which spatial filtering is named, since here an explicit aim is to remove (filter) spatial dependence from the response (Griffith, 2004).
6.2. A Bayesian Approach to Spatial Filtering. We can develop a Bayesian approach to spatial filtering by replacing M in the SAMM specification with a filtering design matrix F. Specifically, the Bayesian spatial filtering (BSF) model has the same transformed conditional mean as the classical spatial filtering model, namely, where F n×q contains the q principle eigenvectors of M 1 , and η is a q-vector of coefficients. Borrowing from the SAMM, the prior distribution for η is where Q is once again the Laplacian of G. The BSF model, like the SAMM, assigns β a spherical Gaussian prior with a large variance, and assigns the smoothing parameter τ a gamma prior with shape parameters 0.5 and 2,000. Note that the latter prior, having a large mean, discourages artifactual spatial structure in the posterior (Kelsall and Wakefield, 1999).
Since η are regression coefficients, one may be tempted to assign η a spherical Gaussian prior instead of the above mentioned prior. This would be a mistake, however, for (13) is not arbitrary (see Reich et al. (2006) and/or Hughes and Haran (2013) for derivations) but is, in fact, very well suited to the task at hand. Specifically, two characteristics of (13)-along with the above mentioned prior for τ -discourage overfitting even when q is too large for the dataset being analyzed. First, the prior variances are commensurate with the spatial scales of the predictors in F (Figure 4). This shrinks toward zero the coefficients corresponding to predictors that exhibit small-scale spatial variation. Additionally, the correlation structure of (13) effectively reduces the degrees of freedom in the smoothing component of the model.
If the response is non-Gaussian, β and η are updated using Metropolis-Hastings random walks with Gaussian proposals. The proposal covariance matrix for β is the estimated asymptotic covariance matrix from an ordinary GLM fit to the data, which generally yields an acceptance rate around 50%. The proposal for η is spherical Gaussian-with standard deviation σ η , say. A sensible default value for σ η is 0.1, but a smaller value may be required to achieve large enough acceptance rates for larger datasets. The update for τ is a Gibbs update irrespective of the response distribution. If the response is Gaussian distributed, all updates are Gibbs updates. Note that the BSF MCMC sampler is very easy to tune since σ η is the only tuning parameter (unless the outcomes are Gaussian, in which case no tuning is required).
Bayesian spatial filtering for point-level outcomes can be accomplished analogously by adapting Guan and Haran's (2016) random-projection framework. The resulting BSF model for point-level data is different from the areal BSF model in q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q qq q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q q . Prior variances (on the log scale and for τ = 1) for the elements of η ∼ N {0, (τ F QF) −1 } from the second simulation study. The variances decrease rapidly as the spatial scale decreases, which prevents overfitting. a potentially important way, however. Since Guan and Haran obtain their basis vectors by eigendecomposing Σ, they, quite naturally, assign a spherical Gaussian prior to η. Because a spherical Gaussian prior lacks the appealing attributes of (13), Guan and Haran recommend that q = rank(F) be chosen in a pre-processing step. It is not clear how well this approach performs compared to the use of a prior similar to (13). 6.3. Application of BSF to Simulated Binary Data. As a followup to the simulation study described in Section 5, we applied our BSF model to the simulated datasets. We used four different values for q = rank(F): 50, 100, 200, and 400. The results are given in Table 2.
For smaller values of q,β bsf has smaller bias and MSE than any of the other estimators considered here, and yields a higher coverage rate while keeping the type II rate very low. As q becomes large, the bias and MSE ofβ bsf grow. Eventually the coverage rate begins to decrease, the type II rate to increase. The BSF model also performs very well at prediction.
The BSF model accomplishes all of this through judicious use of multicollinearity. Recall that the basis vectors used in RSR are (at least nearly) uncorrelated with the columns of X. This is not the case for the BSF model since spatial filtering employs eigenvectors of M 1 . As we increase q, we introduce more and more multicollinearity between X and F. Up to a point, this alleviates unmeasured confounding to some extent (hence the reduced bias), and adjusts the variance upward by a modest amount (hence the increased coverage rate). As q gets large, the linear predictor becomes rather redundant, causing the BSF model to perform much like the CAR model (increased bias and type II error rate). Table 2. The performance of Bayesian spatial filtering.
Conclusion
When unmeasured confounding is the source of extra-Xβ spatial variation in a response variable, spatial regression models struggle to perform well. In Section 5 we saw that the autologistic model and the copCAR model (examples of the automodel and the spatial copula regression model, respectively) perform rather poorly, about as poorly as a non-spatial model. Spatial mixed-effects models perform better but still have weaknesses. The traditional SGLMM, for example, is badly spatially confounded and computationally burdensome, and often undersmoothes. Restricted spatial regression offers an appealing alternative, for RSR reduces bias and mean squared error, provides a more sensible balance between coverage rate and type II rate, smoothes very effectively, and permits efficient computation. Yet there is room for improvement.
In the latter part of this article we developed Bayesian spatial filtering, which performs as well as RSR with respect to prediction and computational complexity while besting RSR in terms of bias, mean squared error, and coverage rate. BSF does this by using an expansion in a well-chosen basis to introduce an appropriate amount of spatial confounding. This situates the BSF model on a continuum of spatial confounding, with the non-spatial model and the CAR model at either end: | 2017-07-31T21:33:45.000Z | 2017-06-14T00:00:00.000 | {
"year": 2017,
"sha1": "0d71a7724ec31aa45edcdf16e37ebfa358dd94f9",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "f532d10e42c2a30d52e16d8e5a011b458878debe",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
35340869 | pes2o/s2orc | v3-fos-license | Long time evolution of concentrated Euler flows with planar symmetry
We study the time evolution of an incompressible Euler fluid with planar symmetry when the vorticity is initially concentrated in small disks. We discuss how long this concentration persists, showing that in some cases this happens for quite long times. Moreover, we analyze a toy model that shows a similar behavior and gives some hints on the original problem.
Introduction
This paper focuses on the long time behavior of an incompressible inviscid fluid, with planar symmetry and constant density, whose time evolution is governed by the two-dimensional Euler equations. If the system is confined in a domain Γ ⊆ R 2 , the Euler equations expressed in term of the vorticity read, where ω := ∂ 1 u 2 − ∂ 2 u 1 is the vorticity and u = (u 1 , u 2 ) denotes the velocity field. By assuming that u vanishes at infinity and that its normal component vanishes on ∂Γ, the velocity is reconstructed from the vorticity as u(x, t) = dy K Γ (x, y) ω(y, t) , (1.4) with K Γ = ∇ ⊥ G Γ , ∇ ⊥ = (∂ 2 , −∂ 1 ) , (1.5) where G Γ is the fundamental solution of the Laplace operator in Γ vanishing on the boundary (and at infinity if Γ is unbounded). In particular, if Γ = R 2 , (1.6) Equation (1.1) means that the vorticity remains constant along the particle paths, which are the characteristics of the Euler equations. Otherwise stated, where x(x 0 , t) is the trajectory of the fluid particle initially in x 0 , i.e., d dt x(x 0 , t) = u(x(x 0 , t), t) , x(x 0 , 0) = x 0 . (1.8) It is possible to consider non smooth initial data, by assuming directly (1.4), (1.7), and (1.8) as a weak formulation of the Euler equations. Indeed, see, e.g., [31], given ω 0 ∈ L 1 (Γ) ∩ L ∞ (Γ) and T > 0, there exists a unique triple (x(·, ·), u, ω) solution to (1.4), (1.7), and (1.8) with ω ∈ L ∞ ([0, T ]; L 1 ∩ L ∞ ). Moreover, since u is divergence-free, the time evolution preserves the Lebesgue measure in Γ. In particular, given any smooth function f (x, t) with compact support in Γ × [0, T ], if then t → ω t [f ] belongs to C 1 ([0, T ]) and (1. 10) We remark that if ω 0 has compact support then (1.9) and (1.10) are valid for any smooth function f (x, t) (also with noncompact support).
In this paper we consider initial data in which the vorticity is supported in N blobs, i.e., initial data of the form, where ω i,ε (x, 0), i = 1, . . . , N , are functions with definite sign such that, denoting by Σ(z|r) the open disk of center z and radius r, with ε ∈ (0, 1) a small parameter and the points z i ∈ Γ such that the closure of Σ(z i |ε) does not intersect the boundary of Γ for any i = 1, . . . , N . In general, the signs of the functions ω i,ε (x, 0) can be different among each other.
As is well known in the literature, for such initial data the dynamics can be approximated by the following system of N differential equations in Γ, known as the point vortex model, where a i = Γ dx ω i,ε (x, 0) (1.14) is called the "intensity" of the vortex and it is assumed independent of ε, while γ Γ (x) = γ Γ (x, x) with γ Γ (x, y) := G Γ (x, y) − G(x, y), the "regular part" of the Green function G Γ . In particular, it has been proved [6,22,30,31] that the time evolution of these states has, for small ε, a similar form, (1.15) where ω i,ε (x, t), i = 1, . . . , N , are functions with definite sign such that Λ i,ε (t) := supp ω i,ε (·, t) ⊂ Σ(z i (t)|r t (ε)) , Σ(z i (t)|r t (ε)) ∩ Σ(z j (t)|r t (ε)) = ∅ ∀ i = j , (1. 16) with {z i (t)} N i=1 satisfying (1.13) and r t (ε) a nonnegative function such that the closure of Σ(z i |r t (ε)) does not intersect the boundary of Γ for any i = 1, . . . , N . The point vortex model (1.13) was introduced in the eighteenth century by Helmholtz [10], as a particular "solution" of the Euler equations, and investigated by several authors [13,14,34], see [31] and references quoted therein for a review on this subject. This model approximates reasonably a state with very large vorticity concentrations.
In general, the point vortex model admits a global solution, but in some cases collapses can happen [2]. However, it can be shown that the set of initial data and vortex intensities that produce a blow-up is exceptional; see, respectively, [8,29,31] for the proof in the case of the torus, the disk, and the plane. Moreover, there are initial data for which the vortices move away from each other indefinitely (we will return to this point later on).
In any case, for each time t chosen before a possible collapse, it can be proved that r t (ε) → 0 for ε → 0 and the fluid converges to the point vortex system [6,22,30,31]. For the connection between the Euler flow and the point vortices, see also [9, 18, 19, 23-25, 27, 28, 36].
Suppose now that the initial datum (1.11)-(1.12) is chosen in such a way that the corresponding Cauchy problem (1.13) admits a global solution. For what stated before, the fluid converges to the point vortex system for any fixed positive time. On the other hand, in a realistic situation the parameter ε is not zero, hence a natural question is to characterize the larger time intervals on which this approximation is valid for small but positive values of the parameter ε.
As time goes by, small filaments of fluid could move away. We fix β ∈ (0, 1/2) (we will see the technical reason for this limitation) and we denote by T ε,β the first time in which a filament reaches the boundary of i Σ(z i (t)|ε β ). Clearly, T ε,β gives a lower bound of the time horizon where the point vortex approximation is valid. In the general case, by adapting the strategies given [6,22,30,31], we can show that T ε,β ≥ (const.) | log ε| for small ε.
This bound is poor, and perhaps the result is too naive. Let us discuss this point. When Γ = R 2 and there is a vortex alone, the center of the vorticity blob remains fixed and the spread of vorticity grows in time slowly, see [12,16,20,21,35], where it is shown that there is c > 0 such that T ε,β ≥ ε −c for ε small, that is T ε,β admits a power-law lower bound (on this point see Section 3). The presence of the interaction with other blobs of vorticity and/or with the boundary of Γ produces a priori a larger spread. A possible bound is T ε,β ≥ (const.) | log ε|, but we conjecture that it could be improved. In some particular cases, a power-law upper bound for T ε,β can be obtained rigorously by a direct analysis of the problem. In more general cases, it could be obtained by making an average on time of the interaction with the other blobs. Indeed, when ε is small the fluid in a blob turns very quickly and so the effects of the other blobs depend on the time average of the interaction. This problem appears very challenging and it is not rigorously analyzed. To have some hints in this direction, we introduce a very schematic toy model and we investigate, by using a second order averaging method, the long time behavior of its solutions.
Another mechanism that should improve the convergence of the Euler flow to the point vortex model could be a very careful preparation of the initial data. Actually, in some textbooks in fluid mechanics (see for instance [3]), the spread of a blob of vorticity due to the interaction with other blobs is neglected for symmetry reasons, assuming the initial data with a radial symmetry. Of course, the time evolution destroys this symmetry, but we can hope that until some time this property remains almost valid. We will show (at the end of Section 4) that the analysis of the aforementioned toy model allows us to strengthen this conjecture.
We end this introduction observing that there are some sequences (in ε) that imply better estimates on T ε,β . A trivial example is given by a vortex alone in the plane with a vorticity with radial symmetry: of course, this is a stationary state of the Euler equations and so T ε,β = ∞. We could look for different and less trivial situations, with many blobs of vorticity that are stationary in time, but here we are interested in not so exceptional cases.
Actually, the relation between special dynamical systems (like the point vortex model) and the fluid mechanics is always related to some a priori assumptions. In the present case, we study situations with planar symmetry. In other cases, we assume other symmetries, often renouncing to study only compact blobs of vorticity. For instance, let us consider cylindrical symmetry without swirl: using cylindrical coordinates (z, r, θ), the motion does not depend on θ and it can be described in the (z, r) plane, where a point represents a ring in the whole space. An approximated ring converges as ε → 0 to a ring that performs a rigid translation in the z-direction [4] (ε is the size of the tube of vorticity around the ring). In this case, we must renormalize the total vorticity by a factor | log ε| −1 . The same problem has been studied for r ≈ | log ε|. With this choice, the tubes of vorticity are expected to converge to rings whose evolution is governed by a dynamical system similar but not equal to the point vortex system. This has been rigorously proved in the case of one ring alone in [26], while the case of many rings remains an open problem.
For other examples of dynamical systems related to fluid mechanics, see for instance [15,17] and the references quoted in the recent paper [7]. Their connection with the fluid physics, proved in some particular cases, is in general an open issue.
We conclude with a final remark. Here, we discuss the fluid mechanics with planar symmetry, i.e., when a point vortex in R 2 represents an infinite straight line in R 3 . We could also study the aforementioned case with cylindrical symmetry without swirl, in which the straight line becomes a circle of radius r, and consider the case of N blobs of vorticity in the plane (z, r) of size ε and centered around the points (z i , r i ). Let us make the change of variable z = x, r = r 0 + y. We increase r 0 as ε decreases choosing r 0 = ε −b , b > 0. It has been proved in [24] that in the limit ε → 0 the Euler flow converges to the point vortex system (1.13). Hence, we could apply also in this case our investigation.
The plan of the paper is the following. In the next section we discuss how to obtain, in general, the logarithmic lower bound on T ε,β . In Section 3, we give examples, in the whole plane and in a disk, where a power-law lower bound on T ε,β holds true. In Section 4, we introduce the toy model whose long-time behavior suggests similar features of the fluid dynamics.
Persistence of vortices on logarithmic time scales
In this section we consider the general case of initial data of the form (1.11), (1.12), with the only requirement that the associated Cauchy problem (1.13) of the point vortex dynamics admits a global solution such that (2.1) To each initial data (1.11), (1.12) satisfying the above assumption and β > 0 we associate the variable T ε,β defined in the Introduction, i.e., Our task is a lower bound on T ε,β . For simplicity, we analyze here the case Γ = R 2 , but the proof can be easily adapted to the case of a general domain.
We recall that for vortices with intensities of the same sign, (2.1) is a well known property of the dynamics. In the general case, the existence of a unique global solution to the Cauchy problem (1.13) is proved for any choice of initial data and intensities {z i , a i } N i=1 , outside a set of Lebesgue measure zero [31]. This fact does not implies (2.1), but it makes this assumption very reasonable.
Theorem 2.1. Let Γ = R 2 and assume that the initial data of the Euler equations verify the above assumptions. Suppose also that there are M, ν > 0 such that Then, for each β ∈ (0, 1/2) there exist ε 0 > 0 and ζ 0 > 0 such that We split the proof into two steps. First, in the next subsection, we prove an analogous result for a reduced system: the motion of a single blob of vorticity in an external time-dependent divergence-free vector field. The original problem is then solved by using the reduced system to simulate the force acting on a given blob of vorticity due to its interaction with the other blobs.
2.1. The reduced system. We consider a single blob of vorticity which evolves in R 2 in presence of an external time-dependent divergence-free vector field F (x, t). This means that (i) the initial configuration ω ε (x, 0) is a function of definite sign such that Λ ε (0) := supp ω ε (·, 0) ⊂ Σ(z * |ε) for some z * ∈ R 2 and (ii) the evolved configuration ω(x, t) = ω ε (x, t) satisfies (1.7), and with in this case x(x 0 , t) solution to d dt where u(x, t) = dy K(x − y) ω ε (y, t) with K as in (1.6). As a consequence, the weak formulation (1.10) is replaced by d dt Since the auxiliary field F (x, t) will be used to simulate the action of the other blobs of vorticity, we can assume that it is bounded and, with respect to the spatial variable, divergence-free and Lipschitz, The point vortex dynamics associated to the reduced system is defined by the planar motion B(t), solution to the following equation, (2.8) Without loss of generality, we also assume that initially, and hence at any time, the blob has intensity one, For this reduced system we prove the following result.
and define T * ε,β := sup{t > 0 : Then, for each β ∈ (0, 1/2) there exist ε 1 > 0 and ζ 1 > 0 such that The proof is similar to that in [6] and it is based on a bootstrap argument. For later purposes, it is useful to separate the principal estimates in different lemmas, giving the proof of the theorem at the end of the subsection.
We denote by B ε (t) the center of vorticity of the blob, defined by and by I ε (t) the moment of inertia with respect of B ε , i.e., (2.14) Lemma 2.3. For any t ≥ 0, the following estimates hold, where D t is the Lipschitz constant introduced in (2.7).
Proof. From (1.7), (2.5), and since u + F is divergence-free we have, Therefore, by (2.5) and using the identities the time derivatives of B ε (t) and I ε (t) are easily computed, By (2.7) and the obvious identity, we have, which can be integrated, giving which implies (2.15) because of the (not optimal) estimate I ε (0) ≤ 4ε 2 , following immediately from the fact that Λ ε (0) ⊂ Σ(z * |ε) and in view of (2.9).
Remark 2.4. In the proofs of Theorems 2.1 and 2.2, the estimates of Lemmas 2.3 and 2.5 will be used with D as in (2.7) instead of D t . Nevertheless, we keep the formulation involving the integral of D t because this will be used later in the proof of Theorem 3.1. We also remark that the identities (2.17) follow from the antisymmetry of K = K R 2 . We observe that these are no longer true for a general domain Γ. On the other hand, in this case K Γ = K +K withK a kernel which is regular away from the boundary, so that its contribution can be treated as an external (bounded, divergence-free, and Lipschitz) field, see also the Remark 2.7 at the end of this section. Now, we introduce a positive parameter α, to be chosen small enough, and study the system for times 0 ≤ t ≤ α| log ε|. Recalling the definition of D in (2.7), by (2.15) and (2.16) we have, The bound (2.20) implies that for small ε the main part of the vorticity remains concentrated around B ε (t), which, in turn, remains close to B(t) in view of (2.21). We now prove that not only the main part but all the filaments of vorticity remain close to B ε (t).
To this purpose, we study the growth in time of the distance from B ε (t) of a fluid particle. The key point is to show that this growth is very small for the particles sufficiently away from the center of vorticity. This is a consequence of the following two lemmas.
Lemma 2.5. Recall Λ ε (t) = supp ω ε (·, t) and define Then, at this time t, with M, ν as in (2.10) and the function m t (·) on R + defined by Proof. We observe that part of the proof is similar to that given in [12] (with B ε (t) = 0). Letting x = x(x 0 , t), by (2.5), (2.18), and (2.9) we have, (2.26) The first term in the second line, due the external field, is easily bounded by using (2.7), (2.9), and (2.22), For the second term, we split the integration region into two parts, the disk A 1 = Σ(B ε (t)|R t /2) and the annulus where and We first evaluate the contribution of the integration on A 1 . Recalling (1.6) and the notation , .
To bound H ′′ 1 we note that, in view of (2.22), the integration is restricted to |y ′ | ≤ R t , so that where in the last inequality we used the Chebyshev's inequality. By (2.32) and the previous estimates we conclude that (2.33) We now evaluate H 2 . Recalling the definition of K, The integrand is monotonically unbounded as y → x, and so the maximum of the integral is obtained when we rearrange the vorticity mass as close as possible to the singularity. In view of the assumption (2.10) and since, by (2.25), m t (R t /2) is equal to the total amount of vorticity in A 2 , this rearrangement gives, 1 where the radius r is such that M ε −ν πr 2 = m t (R t /2). A warning on the notation. Hereafter in the paper, we shall denote by C i , i an integer index, positive constants which are independent of the parameter ε and the time t. Lemma 2.6. Let m t be defined as in (2.25). For each β ∈ (0, 1/2) and ℓ > 0 there exists α > 0 such that , be a nonnegative smooth function, depending only on |x|, such that and we explicitly find the distribution of vorticity that realizes this maximum. By rearrangement, this is the piecewise constant function given by the maximum value M ε −ν on the disk Σ(0|r) and zero otherwise, with r such that the mass constraint is satisfied. Alternatively, we could use [12, Lemma 2.1], getting a little bit worst estimate for the integral in the left-hand side, giving rise to a constant greater than one in front of the third term in the right-hand side of (2.24). and, for some C 1 > 0, We define the quantity which is a mollified version of m t , satisfying In particular, it is enough to prove (2.35) with µ t instead of m t .
To this purpose, we study the time derivative of µ t (h). By applying (2.6) with where the second expression of H 3 is due to the antisymmetry of K.
The main difference in repeating the analysis of Subsection 2.1 is that here the external fields depend on ε, and are only close to the fields appearing in the right-hand side of the vortex model (1.13) in the case Γ = R 2 . This modifies the estimation of the nearness between the centers of vorticity B i,ε (t) := a −1 i dx x ω i,ε (x, t) and the corresponding vortices z i (t), so we discuss only this point. By (1.13) and (2.51), for any i = 1, . . . , N and t ∈ [0, T ε,β ], where we used that K(z i (t) − z j (t)) = K 1 (z i (t), z j (t)) for j = i and t ∈ [0, T ε,β ]. Integrating the above identity and arguing as in Lemma 2.3, we now obtain, for some constant C > 0 and any t ∈ [0, T ε,β ], t). This estimate, together with an a priori bound on the moments of inertia I j,ε (t), gives an estimate like (2.21) for ∆(t).
Remark 2.7. In the case of a generic domain, for ε small enough the system remains far from the boundary until the time T ε,β , so that also the effect of the boundary can be treated as a regular external (bounded, divergence-free, and Lipschitz) field. A few words on the meaning of the assumption (2.1) are due in this case. Unlike the case Γ = R 2 , explicit cases where (2.1) is true are not present in the literature, but it seems very reasonable that this assumption is "generic", i.e., it holds almost everywhere. Actually, in presence of boundaries, the global existence of solutions for any choice of initial data and intensities {z i , a i } N i=1 , outside a set of Lebesgue measure zero, has been proved only in the case of a circular domain [29], but this lack of results does not appear substantial (any regular boundary looks locally like a circle).
Examples of persistence of vortices on power-law time scales
In this section we provide examples in which the results of the previous section can be improved, proving that the maximal time for which the blobs of vorticity remain concentrated is not less than an inverse power of the initial size ε of the blobs.
3.1.
Examples of flow in R 2 . The simplest example in R 2 is given by a blob of vorticity with compact support and alone in the plane. The time goes by and the support could increase. Bounds on the growth are given in [12,16,20,21,35]. In this case, when the vorticity is concentrated around a point, we can obtain a power-law lower bound on the maximal time quoted above. An explicit proof follows by the analysis of the example that we discuss next.
The dynamical system (1.13) admits particular choices of intensities and initial data for which the system evolves in a self-similar configuration, i.e., the polygon with vertices formed by the point vortices rotates and changes its size but remains similar in shape. Denoting by N the number of point vortices, this property has been known for a long time for N ≤ 3, and more recently for N = 4, 5, and it has been conjectured for larger N , see [2,32,33]; some properties have been recently discussed in [11].
For concreteness, we study the case N = 3, but similar considerations can be made for every N . Consider three point vortices of intensities a i posed in z i (t). As well known, there exist intensities and initial data for which the bodies approach each other and collide, while for other initial conditions they move away from each other. More precisely, there are intensities and positions for which the three vortices, initially posed on the vertices on a triangle of sides of length L ij , in the future remain posed in the vertices of a triangle of side of length L ij (t), where that is, the triangle grows in the future (and shows a collapse for t = −g −1 ), but remains similar in form. For the time evolution of three point vortices see [1] and also [2,31]. 2 Theorem 3.1. Under the same hypothesis and notation of Theorem 2.1 with N = 3, we further assume that the three point vortices evolve according to (3.1). Then, for each β ∈ (0, 1/2) there exist ε 0 > 0 and ζ 0 > 0 such that The proof is achieved like that of Theorem 2.1, i.e., through the analysis of a reduced problem: a single blob of unitary vorticity moving in an external time dependent vector field F (x, t), that simulates the action of the other two blobs of vorticity. Therefore, it is a divergence-free field, with norm and Lipschitz constant decreasing in time, i.e., for suitable constants b, L > 0, The main point is the decreasing in time of the Lipschitz constant D t , which allows one to improve the content of Theorem 2.2 in the following way. 2 We recall the main conditions for a system of three point vortices to go to infinity [1,2]. We denote by (a 1 , a 2 , a 3 ) the intensities of the vortices and by L ij the distance between vortices i and j. There are conditions under which the triangle whose vertexes are given by the positions of the vortices changes size but remains similar in form. These conditions can be easily expressed in terms of the intensities and the reciprocal distances: a 1 a 2 + a 1 a 3 + a 2 a 3 = 0 and a 1 a 2 L 2 12 + a 1 a 3 L 2 13 +a 2 a 3 L 2 23 = 0. The dynamical system collapses in the past at a critical time and increases its size in the future as the square root of t. More precisely, the equations of motion imply where A is the area of the triangle determined by the positions of the three vortices with orientation, i.e., reckoned positive if (i, j, k) appear counterclockwise and negative if (i, j, k) appears clockwise. The previous equation implies (3.1).
We omit the proof of how to deduce Theorem 3.1 from Theorem 3.2: it can be easily obtained by adapting to the present context the proof of Theorem 2.1 discussed in Subsection 2.2.
3.2. An example of flow in a bounded region. Consider a single blob of vorticity in a bounded domain Γ, initially concentrated around a point z 0 ∈ Γ. In this case, denoting by m the total mass of the blob, the corresponding point vortex system (1.13) reads,ż where γ(x) = γ Γ (x, x), with γ Γ (x, y) = G Γ (x, y) + 1 2π log |x − y| and G Γ the fundamental solution of the Laplace operator in Γ vanishing on the boundary.
In what follows, we assume that Γ = Σ(0, 1), the disk of unitary radius centered in the origin. In this case, G Γ is the sum of the Green function in the whole plane plus a contribution given by a negative mirror charge posed inȳ = y/|y| 2 , and hence γ Γ (x, y) = 1 2π log |x −ȳ|. If initially the blob of vorticity is contained in Σ(0, ε), the time evolution is similar to that of a blob which moves in the whole plane and it is subjected to an external field which is vanishing as ε → 0. The origin is an equilibrium for the vortex dynamics (3.11), and the states with radial symmetric vorticity distribution are stationary for the Euler dynamics. Instead, a blob of vorticity without such symmetry is not stationary in general, and small filaments of vorticity can move away. Nonetheless, by adapting the techniques of previous sections, we can prove that the blob remains concentrated up to times of the order of an inverse power of ε.
Theorem 3.3. Let Γ = Σ(0, 1) and assume that the initial datum ω ε (x, 0) of the Euler equations is given by a single blob of vorticity with support Λ ε (0) ⊂ Σ(0|ε). Suppose also that there are M, ν > 0 such that Then, for each β ∈ (0, 1/2) there exist ε 0 > 0 and ζ 0 > 0 such that with T ε,β as defined in (2.2), which in this case reduces to Proof. Without lack of generality we assume that the total mass of vorticity equals one.
Each fluid particle moves in the velocity field produced by the vorticity via K Γ (x, y), withȳ = y/|y| 2 as previously defined. Therefore, as long as Λ ε (t) does not intersect the boundary of the disk Γ (in particular, up to time T ε,β ), the vorticity ω ε (x, t) evolves as in the reduced problem in R 2 defined by (2.5), (2.6), with the role of external field played by where (3.15) Sinceȳ = y/|y| 2 , a straightforward computation shows that there is κ > 0 such that, for any δ ∈ (0, 1/2), which is the key property of F 1 (x, y) that will be used in the subsequent analysis.
Analysis of a toy model
As discussed before, it is too difficult to improve rigorously the estimate (2.4) in general. The difficult task is to control the self-energy of a blob. Perhaps, some better results could be obtained by well-preparing the initial state: blobs of vorticity with a radial symmetry. In this case, the self energy is exactly zero and this trick is used in [3] to justify the point vortex model. Of course, the time evolution destroys this symmetry and the justification is weak. However, we can hope that these bad effects, as the initial concentration is large, become important only after a long time, but there is not rigorous proof of this.
To have some hint in the study of the problem, we introduce a very schematic toy model: a point x in the plane moving under the action of two velocity fields F (x, t) and g(x, B(t)), where B(t) is a solution of the equation, (4.1) The field F simulates the action of other vortices and it is assumed smooth and divergence-free, the field g simulates the action of the vorticity belonging to the blob itself, where, to simplify the notation, we assumed the intensity of the blob equals to 2π. We refer the reader to Remark 4.4 below for more details on the justification of the choice (4.2).
Remark 4.2. It is worthwhile to remark that the condition div F (x, t) = 0 plays a crucial role for the validity of the above result. Indeed, if the matrix A(t) is not traceless, the averaging effect does not cancel, in general, the lowest order of the expansion of̺, due to the possible presence of secular terms in the averaged system. Moreover, the not trivial fact is that such cancellation turns out to be valid up to the second order and not only to the first one as expected in general. As we shall see in the next remarks, this fact allow one to enhance the justification of the point vortex model via radially symmetry given in textbooks such as [3]. Remark 4.4. To better understand the connection between the toy model and the underlying Euler dynamics, we come back to the situation depicted at the beginning of the section. More precisely, we consider the reduced model of Subsection 2.1 with initial configuration, ω ε (x, 0) = ε −2 γ(|x − z * |/ε) , (4.9) where γ(r), r ≥ 0, is a nonnegative smooth function with support [0, 1] such that dx γ(|x|) = 2π 1 0 dr γ(r) = 2π. Consider the trajectory x(x 0 , t) of the fluid particle initially in x 0 , i.e., the solution to (2.5) with u(x, t) = dy K(x − y) ω ε (y, t). We next show that the toy model is consistent with (2.5) if the vorticity remains radially distributed around the moving center B(t).
This shows that the field u(x, t) produced by a symmetric blob is approximately equals to g(x, B(t)), see (4.2), provided x is far away from the center of vorticity.
In absence of the external field F , the symmetry assumption is rigorously true at any time, since the vorticity distribution is stationary, ω ε (x, t) = ε −2 γ(|x − z * |/ε), and each fluid particle performs a uniform circular motion around z * .
In presence of an external field F , each fluid particle departs, in general, from a uniform circular motion, thus destroying the initial symmetry of the vorticity distribution; this, in turn, produces a velocity u(x, t) with a non-zero radial component, that enhances this effect.
On the other hand, assuming the vorticity "frozen" in a symmetric distribution, Theorem 4.1 establishes that each fluid particle remains very close to a circular motion for a very long time, due to an averaging mechanism that reduces the effect of the external field. Of course, this is only a toy model, because in the real case the vorticity distribution does not remain radially symmetric, and the problem becomes much more difficult.
However, the analysis of the toy model suggests that this averaging mechanism could decrease the development of a non-symmetric component of the vorticity, thus preserving its concentration on long time scales. In Remark 4.5 below, we strengthen this conjecture by giving a more quantitative argument in the special case of the vortex patch dynamics.
Remark 4.5. The name vortex patch dynamics refers to the evolution of piecewiseconstant vorticity configurations. It is well known that such configurations preserve their structure in time, and the problem of their time evolution is reduced to the so-called contour dynamics, which governs the evolution of the boundaries of the regions with constant vorticity.
Let us evaluate the velocity u(x, t) of a fluid particle located at x ∈ Σ + ε (t)\Σ − ε (t). We decompose u(x, t) = u − (x, t) + u + (x, t) with The component u − (x, t) is directed along (x − B(t)) ⊥ by symmetry, and its intensity can be computed by arguing as in Remark 4.4, getting with g as in in (4.2). The component u + (x, t), due to the fluid particles in the annulus Σ + ε (t) \ Σ − ε (t), is the "dangerous part" of u(x, t), since it has nonzero component along x − B(t), but we claim that it is negligible as ε → 0. To this purpose, we decompose the annulus into the union of 2N + 1 disjoint annulus sectors B j , j = −N, . . . , N , of equal length 2πε/(2N + 1) ≃ ε 3−β and such that x ∈ B 0 . We then estimate, The first integral can be bounded by a rearrangement as done in (2.34), while the other integrals are easily controlled noticing that |y − x| ≥ (const.) ε 3−β |j| for y ∈ B j and |j| = 2, . . . , N . Therefore, denoting by |B j | the area of B j , having used that |B j | ≤ (const.) ε 6−2β and N ≤ (const.) ε −2+β .
In conclusion, we have shown that the difference δu(x, t) = u(x, t) − g(x, B(t)) is negligible in the limit ε → 0, validating the initial assumption of neglecting its effect with respect to that of the external force F . Clearly, this remains a heuristic argument, to make a rigorous proof we should get also a good control on the smoothness properties of δu(x, t). | 2018-01-21T17:28:04.000Z | 2016-11-15T00:00:00.000 | {
"year": 2016,
"sha1": "580e554668d8276ebb96f82cadf2f4c37879d9a5",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1611.04914",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "580e554668d8276ebb96f82cadf2f4c37879d9a5",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Computer Science",
"Mathematics"
]
} |
16180687 | pes2o/s2orc | v3-fos-license | Enzymatic production of defined chitosan oligomers with a specific pattern of acetylation using a combination of chitin oligosaccharide deacetylases
Chitin and chitosan oligomers have diverse biological activities with potentially valuable applications in fields like medicine, cosmetics, or agriculture. These properties may depend not only on the degrees of polymerization and acetylation, but also on a specific pattern of acetylation (PA) that cannot be controlled when the oligomers are produced by chemical hydrolysis. To determine the influence of the PA on the biological activities, defined chitosan oligomers in sufficient amounts are needed. Chitosan oligomers with specific PA can be produced by enzymatic deacetylation of chitin oligomers, but the diversity is limited by the low number of chitin deacetylases available. We have produced specific chitosan oligomers which are deacetylated at the first two units starting from the non-reducing end by the combined use of two different chitin deacetylases, namely NodB from Rhizobium sp. GRH2 that deacetylates the first unit and COD from Vibrio cholerae that deacetylates the second unit starting from the non-reducing end. Both chitin deacetylases accept the product of each other resulting in production of chitosan oligomers with a novel and defined PA. When extended to further chitin deacetylases, this approach has the potential to yield a large range of novel chitosan oligomers with a fully defined architecture.
heterogeneous mixtures of chitosan oligomers. Still, due to the cleavage specificities of the enzymes, the resulting mixture will be better defined than the chitosan oligomer mixtures obtained by chemical or physical depolymerisation 2,12 . A fully controlled method potentially leading to a broad range of fully defined products is chemical synthesis of chitosan oligomers from monomeric building blocks. However, this attempt is time and labour intensive and the yields are rather low 14,15 . Alternatively, chitin oligomers can be efficiently converted into defined chitosan oligomers by the help of specific chitin deacetylases (EC 3.5.1.41) and chitin oligosaccharide deacetylases (EC 3.5.1.-) 9,16 . A number of different chitin deacetylases from different sources have been described [17][18][19][20][21][22] . Among these, two highly interesting candidates for the production of fully defined chitosan oligomers are the highly specific chitin oligosaccharide deacetylases NodB from Rhizobium spp. and COD from Vibrio cholerae. NodB deacetylates exclusively the GlcNAc unit at the non-reducing end, whereas COD deacetylates the second unit from the non-reducing end 18,20,23 . The limitation of this technique lies in the rather low number of recombinant chitin deacetylases available today with a known and fully defined mode of action, leading to a very limited number of defined chitosan oligosaccharides that can be obtained in this way.
To at least partially overcome this limitation, we have expressed cod from V. cholerae as well as nodB from Rhizobium sp. GRH2 in Escherichia coli and purified the recombinant enzymes. Starting with chitin oligomers of defined DP (A n ), we used the single enzymes for the in vitro generation of two different, fully defined mono-deacetylated chitosan oligomers (DA (n-1) and ADA (n-2) ). As a general agreement the first letter of the oligomers mentioned here represents the residue at the non-reducing end, while the last one represents that of the reducing end. As a proof-of-principle study, we then combined the two deacetylases in a single reaction and generated the expected doubly deacetylated oligomers (DDA (n-2) ), showing that this approach opens a way to produce novel chitosan oligomers with hitherto unreported PA.
Results and Discussion
Isolation and analysis of Rhizobium sp. GRH2 nodB. We designed consensus-degenerate hybrid oligonucleotide primers (CODEHOP) 24 based on the sequences of multiple nodB genes from different Rhizobium strains, and used these to identify and subsequently clone the chitin deacetylase gene from Rhizobium sp. GRH2. The analysis of 16S rDNA sequences by BLASTn revealed that Rhizobium sp. GRH2 is most closely related to Rhizobium leguminosarum bv. trifolii (99% identity).
Production of enzymes in E. coli and characterization of purified enzymes. The GRH2 nodB gene was expressed in E. coli BL21 (DE3), whereas the cod gene was expressed in E. coli Rosetta 2 (DE3) [pLysSRARE2] essentially as previously reported 23,25 . Both genes contained a downstream located Strep-tag II encoding sequence for the purification by streptactin affinity chromatography. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and western blot analysis revealed single bands for both enzymes, with apparent molecular masses close to the expected ones of 24.4 and 45.5 kDa for NodB and COD, respectively (Fig. 1).
The chitin deacetylase activity of NodB was tested against GlcNAc 1-6 . The hydrolysis products were characterized by ultra high performance liquid chromatography -evaporative light scattering detection -electrospray ionization -mass spectrometry (UHPLC-ELSD-ESI-MS), revealing that NodB was not active towards GlcNAc 1 , but converted GlcNAc 2-6 completely into mono-deacetylated chitosan oligomers (Table 1). To our knowledge, this is the first report of obtaining enzymatically active, recombinant NodB without the need of refolding it from insoluble inclusion bodies 20 . COD from V. cholerae is active on short chitin oligosaccharides 18,25 , and we tested COD with GlcNAc 1-6 under the same conditions and analysed the hydrolysis products in the same way as we did for NodB. As expected, COD converted GlcNAc 2-6 completely into mono-deacetylated chitosan oligomers, but not GlcNAc 1 ( Table 1). The optimal pH for NodB was 9 and the optimal hydrolysis temperature was 37uC ( Supplementary Fig. S1 online) as tested with GlcNAc 5 . According to literature, the optimal pH of COD is 8 and the optimal hydrolysis temperature is 45uC 18 . For COD no unspecific hydrolysis products were detected after prelonged incubation, as opposed to NodB, where negligible amounts of double deacetylated chitosan oligomers were detected after prolonged incubation at high enzyme concentrations.
Enzymatic sequencing 23 in combination with UHPLC-ELSD-ESI-MS analysis showed that recombinant NodB deacetylated the first unit from the non-reducing end generating DA 1-5 chitosan-oligomers ( Figure 2), which is in agreement with previous reports on recombinant NodB, refolded from the pellet fraction 20 . Likewise, the same sequencing methodology showed that COD deacetylated the second unit from the non-reducing end generating ADA 1-3 chitosan-oligomers, in agreement with previous reports 18 .
Production and analysis of defined chitosan oligomers. To test whether the chitin oligosaccharide deacetylases NodB and COD are also active on partially deacetylated chitosan oligomers -and whether the enzymes can be used in combination to broaden the spectrum of defined chitosan oligomers with a specific PA -we deacetylated GlcNAc 5 with NodB, removed NodB, and used the mono-deacetylated chitosan pentamer as a substrate for COD. The same was done in reverse order, and in a combined reaction of NodB and COD together. In all cases, GlcNAc 5 was completely converted into doubly deacetylated chitosan pentamers ( Figure 3). The PA of the doubly deacetylated chitosan pentamer was determined using enzymatic/mass spectrometric sequencing which revealed that the first two units from the non-reducing end were deacetylated in the combined as well as in both sequential reactions of NodB and COD ( Figure 4). In addition of GlcNAc 5 , also GlcNAc 2-4 and GlcNAc 6 were converted into the respective doubly deacetylated chitosan oligomers in a combined reaction of NodB and COD (Table 2). In addition to the in vitro experiments, binding of partially deacetylated chitosan dimers and trimers to COD, was examined in silico. The recently solved three dimensional structures of COD in complex with GlcNAc 2 , and GlcNAc 3 25 were used as reference structures (PDB accession codes: 4NZ1 and 4OUI respectively). The dimeric and trimeric substrates where manually converted to all Figure 5 shows the crystal structure of COD in complex with the disaccharide substrate AA, and the modelled structure of the complex with the NodB product DA, which properly binds to form a competent complex. The closest amino acid residues to the 2-NAc or 2-NH 2 in the glucosyl unit on the non-reducing end are Arg304 and Asn119.
Removal of the acetyl group in the DA substrate does not alter significantly the interactions map where only one hydrogen-bond with Arg304 is lost. The binding affinity of all acetylated, partially deacetylated, and fully deacetylated compounds to COD was calculated by means of the VINA scoring function 26 (Table 3). Not surprisingly, the fully acetylated compounds are the ones with strongest binding affinity towards COD (29.4 kcal mol 21 for AA, 210.6 kcal mol 21 for AAA). On the other hand, the deacetylated compounds produced by COD (AD and ADA) loose affinity towards the enzyme (increase in energy of roughly 1 kcal mol 21 ). Interestingly, the deacetylated compounds produced by NodB (DA and DAA) are able to bind COD with a stronger affinity than the COD originating products, but the interaction energy to COD is still reduced by 0.8 to 0.4 kcal mol 21 relative to the natural fully acetylated substrates.
In other words, the contribution to the binding affinity of the Nacetyl substituent of the sugar ring at the non-reducing end is only 0.8 kcal mol 21 for chitobiose in COD, in agreement with the lost of one hydrogen bond with Arg304 in the deacetylated substrate ( Figure 5). This contribution is lower for chitotriose (just 0.5 kcal mol 21 ). It is expected that this contribution will be even lower for longer oligosaccharides. Thus, although COD is more active on AA and k cat decreases with increasing oligosaccharide length 18,25 , compounds such as chitotetraose or chitopentaose continue to be substrates of COD even if they are deacetylated at the non-reducing end because the loss of affinity due to the acetyl substituent of the sugar ring at the non-reducing end will presumably be negligible.
The fact that NodB and COD accept the products of each other offers new possibilities for the biotechnological production of defined chitosan oligomers. Assuming that also other chitin deacetylases can accept partially deacetylated products as substrate, a rather large number of different fully defined chitosan oligomers with different PA could be produced by combining a rather limited number of different chitin deacetylases.
In order to produce sufficient amounts of specifically mono as well as doubly deacetylated chitosan oligomers for further downstream bio-testing, we biotechnologically produced chitin pentamer NodB was removed and the obtained mono-deacetylated chitosan pentamer (A 4 D 1 ) was further deacetylated with COD (b). The same was done vice versa: GlcNAc 5 (A 5 ) was deacetylated with COD (c) in the first step, and the enzyme was then replaced by NodB (d). Furthermore, NodB and COD were combined in a single reaction leading to a double-deacetylated chitosan pentamer in one reaction (e). All reactions were carried out in ammonium hydrogen carbonate buffer (pH 8) at 37uC for 2 h. in E. coli, of which we then deacetylated 100 mg in a combined reaction using NodB and COD at mg-scale. The chitosan pentamers thus obtained were purified using size-exclusion chromatography (SEC), separating them from chitosan tetramers, which originated from a by-product of the recombinant production of chitin pentamers in E. coli. The purified products were analysed using UHPLC- To determine the PA, it was first hydrolysed with the GlcNase GlmA TK , which removes exclusively GlcN units from the non-reducing end. The reaction resulted in GlcN 1 (D 1 ) and GlcNAc 3 (A 3 ) (b). In the next step, GlmA TK was replaced by the GlcNAcase BsNagZ, which exclusively removes GlcNAc units from the non-reducing end, resulting in GlcN and GlcNAc monomers (c).The enzymatic sequencing of the simultaneous hydrolysis product of NodB and COD revealed that the first two units starting from the non-reducing end were deacetylated, giving a specific chitosan oligomer with the novel PA DDAAA. ELSD-ESI-MS and proved to be highly pure ( Figure 6). We obtained 27 mg of highly pure doubly deacetylated chitosan pentamer (DDAAA) and 4 mg of equally pure doubly deacetylated chitosan tetramer (DDAA). As this method can be scaled up, it is feasible to produce fully defined chitosan oligomers in sufficient amounts for trial applications in different fields such as cosmetics or bio-medicine. Evaluating the scientific and commercial potential of fully defined chitosan oligomers is not possible at the moment due to the lack of welldefined chitosan oligomers on the market. When the chitin oligomers used as a substrate are of biotechnological origin, as in this study, our method has the added advantage that the chitosan oligomers produced, can be guaranteed to be free of allergenic contaminants.
Methods
Bacterial strains, vectors and culture conditions. E. coli strains TOP10 (Invitrogen, Darmstadt, Germany) and DH5a 27 were used for general cloning, whereas Rosetta 2 (DE3) [pLysSRARE2] and BL21 (DE3) (Novagen, Darmstadt, Germany) were used for protein expression. The vector pCRII-TOPO (Invitrogen) was used for cloning of PCR fragments, whereas pET-22b(1) (Novagen, Darmstadt, Germany) including a StrepII tag sequence upstream of the multiple cloning site (MCS) 23 was used for cloning and expression. E. coli was grown on LB agar, in LB medium, or in autoinduction medium 28 . Media were supplemented with the appropriate antibiotics (34 mg ml 21 chloramphenicol and/or 100 mg ml 21 ampicillin).
Preparation of genomic DNA from Rhizobium sp. GRH2 and 16S rDNA analysis. Genomic DNA was extracted from Rhizobium sp. GRH2 as described by Rainey et al. 29 . The 16s rDNA was amplified with Phusion Hot Start II High Fidelity proof reading DNA Polymerase (Fisher Scientific, Schwerte, Germany) using three different primer pair combinations (27F/1525R, 27F/926R and 16S-F/16S-Rev). All primer sequences are listed in Supplementary Table S1. Only 27F/926R yielded a product, which was purified, sequenced and analysed using BLASTn.
Identification of unknown nodB gene from GRH2 and plasmid construction.
Consensus-degenerate hybrid oligonucleotide primers (CODEHOP) [24] were designed against known nodB genes from different Rhizobium strains as previously described. All primer sequences used during the study are listed in Supplementary Table S1. The genes were amplified using Mango-Taq DNA Polymerase (Bioline, Luckenwalde, Germany) and different combinations of CODEHOPs (B-for, D-rev, C-for, E-rev). Obtained PCR products were cloned in pCRII-TOPO for sequencing, followed by sequence analysis using BLASTx and alignment with CloneManager (Scientific & Educational Software, Cary, USA). Inverse PCR was used to isolate the upstream and downstream regions from genomic DNA. Total DNA was each digested with either SacI or HindIII and circularized with rapid T4 DNA ligase.
The circulized DNA was used as template for a PCR with different inverse primer combinations. Amplified PCR products were ligated into pCRII-TOPO vector for sequencing. Promising sequences were aligned with CloneManager to generate a contig. The full-length nodB gene was isolated by standard PCR using a further CODEHOP primer (BC-for) designed for known nodB genes from different Rhizobium strains and specific primers that bind within the partially identified regions of the nodC gene from Rhizobium sp. GRH2, which is located upstream of nodB. In the last step the full length nodB gene was amplified by using specific primers and Phusion Hot Start II High Fidelity proof reading DNA Polymerase (Fisher Scientific) to exclude possible mutations, which may had occurred during the different PCR amplification steps with the non-proof reading Mango-tag polymerase. by SDS-PAGE in a 12% (w/v) gel 30 . Separated proteins were visualized with zincon/ ethyl violet staining 31 or transferred to a nitrocellulose membrane (GE Healthcare Europe GmbH, Freiburg, Germany) using a semi-dry transfer procedure 32 . Strep-tag II fusion proteins were detected with Strep-Tactin-horseradish peroxidase (HRP) conjugate and chemiluminescent reaction was developed according to the instruction of the manufacturer (IBA GmbH, Göttingen, Germany).
Determination of pH and temperature optimum for NodB. The pH and temperature optima of NodB were determined indirectly, by measuring the amount of acetate released during the enzymatic reaction, using an acetic acid assay kit (Rbioharm AG, Darmstadt, Germany) adapted for microtiter plates. The pH optimum was determined over the pH range 4-12 at 37uC for 2 h, in buffer containing 100 mM NH 4 HCO 3 , 20 mM TEA, 20 mM KH 2 PO 4 and 20 mM Na 2 HPO 4 . Teorell and Stenhagen buffer (100 mM citric acid, 100 mM phosphoric and 100 mM boric acid 33 ) was used for high-pH conditions, overlapping at pH 10. The temperature optimum was determined at 4uC, 22uC, 37uC, 45uC and 55uC at pH 9 using the buffer and conditions described above. The reaction volume was set to 400 ml, comprising 50 mM buffer, 1 mM chitin pentamer (Megazyme, Bray, Ireland) and 0.5 mM purified NodB. The reactions were stopped and the amount of released acetic acid was measured directly. Determination of the PA of the generated chitosan oligomers. The PA of the different chitosan oligomers generated by NodB and COD, or by a combined action of both enzymes was determined by enzymatic sequencing 23 . Samples were analysed by UHPLC-ELSD-ESI-MS analysis instead of HPTLC. The composition of the shake flask and minimal batch fermenter medium was the same as described by Waegeman et al. 36 . Fed-batch medium consisted of 500 g l 21 glucose, 1 g l 21 MgSO 4 37H 2 O and 30 g l 21 NH 4 Cl. All media were supplied with 0.1 g l 21 ampicillin. The recombinant E. coli strain was pre-cultured in two shake flasks of 2 l total volume, filled with approximately 0.2 l shake flask medium and grown at 30uC while constantly shaking at 200 rpm. After 24 hours, 100 ml of broth was transferred into two shake flasks of 5 l total volume filled with 2 l shake flask medium at 30uC while constantly shaking at 200 rpm. After 24 hours the resulting fermentation broth (4 l in total) was inoculated into a 150 l fermenter (Sartorius, Göttingen, Germany). Fermentation conditions were; temperature: 30uC, stirrer speed: minimum 500 rpm and if pO 2 drops below 30% stepwise increased, aeration: 10 slpm, pH: 7, pressure: 500 mbar. Feeding was started when glucose was depleted in the batch phase and the feed rate applied was 4 l h 21 . The total fermentation time was 110 h. After fermentation, the broth was harvested and cells were separated from supernatant by ceramic tangential flow microfiltration (Tami Industries, Nyons, France, 0.45 mm). Both the cell and supernatant fraction contained the fully acetylated pentamer and were further purified. Cells were disrupted by a homogeniser (GEA Process Engineering, Mechelen, Belgium, 10 l h 21 ) and cell debris was removed by tangential flow microfiltration (Kleenpak, Pall, Zaventem, Belgium, 0.45 mm). In both fractions, salts were removed by ion exchange (Amberlite, Dow, Tessenderlo, Belgium) and the solutions were further concentrated by wiped film evaporation (Carl Canzler, Germany, 150 l h 21 , 60uC, 70 mbar) and finally spray-dried (Xedev, Zelzate, Belgium). After purification, 371 g of product was obtained, which consisted for 85% of fully acetalyated pentamer (GlcNAc 5 ). Purified chitin pentamer (100 mg) was incubated with 0.2 mM NodB and 0.6 mM COD in 100 ml 50 mM NH 4 HCO 3 buffer (pH 8) at 37uC overnight. The sample was freeze dried and solved in water before size exclusion chromatography (SEC). After SEC the sample was diluted with water and freeze dried. The obtained product was analysed using UHPLC-ELSD-ESI-MS.
UHPLC-ELSD-ESI
Size-exclusion chromatography (SEC). Deacetylated chitosan oligomers were purified on three HiLoad 26/600 Superdex 30 pg columns (GE Healthcare Europe GmbH, Freiburg, Germany) in a row with an overall dimension of 2.60 3 180 cm. Ammonium acetate buffer (0.15 M, pH 4.5) was used as mobile phase and the flow rate was set to 0.8 ml min 21 . The effluent was monitored with an online refractive index detector (1260 Infinity Refractive Index Detector, Agilent Technologies Deutschland GmbH, Böblingen, Germany) which was coupled to a datalogger 13,37 . Fractions containing oligomers were pooled and analysed by UHPLC-ELSD-ESI-MS.
Calculation of relative binding affinities of partially deacetylated chitobiose and chitotriose to COD structure. Three dimensional structures of COD in complex with chitobiose and chitotriose have been recently obtained 25 and deposited in the Protein Data Bank with accession codes 4NZ1 and 4OUI respectively. Polar hydrogens were added to the receptor protein structure with AutoDockTools 38 . AutoDock4.2 atom typing was used. Gaisteger partial charges were computed for each atom with AutoDockTools. Ligand structures (chitobiose, AA and chitotriose, AAA) were extracted from the corresponding PDB files. For each ligand, a series of partially and fully deacetylated compounds were generated by removing the corresponding N-acetyl group with AutoDockTools while keeping the overall geometry of the molecules. Thus, three dimensional structures AA, AD, DA, DD, AAA, ADA, AAD, ADD, DAD, DDD were obtained. Every ligand was parametrized in the same way as the receptor. Each ligand was docked onto COD structure by means of AutoDock VINA algorithm 26 . A grid-box of 30 3 26 3 30 A 3 centered at the active site was used as the search space for docking. Interaction energies between ligands and COD receptor were calculated with VINA scoring function 26 . Reported values in Table 3 are an estimation of the differences in free energy upon binding for those docking poses in which the ligand binds in a productive orientation at the active site. The uncertainty of these calculations is 0.6 kcal mol 21 . This was estimated as the average range of VINA score values obtained for a series of similar docking poses (structures within 1.5 A of the lowest score one). This work is licensed under a Creative Commons Attribution 4.0 International License. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in the credit line; if the material is not included under the Creative Commons license, users will need to obtain permission from the license holder in order to reproduce the material. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ www.nature.com/scientificreports SCIENTIFIC REPORTS | 5 : 8716 | DOI: 10.1038/srep08716 | 2018-04-03T03:52:46.186Z | 2015-03-03T00:00:00.000 | {
"year": 2015,
"sha1": "1fc0979efa3c27aedc87b68d32d4c37a141748ff",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/srep08716.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1fc0979efa3c27aedc87b68d32d4c37a141748ff",
"s2fieldsofstudy": [
"Chemistry",
"Materials Science",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
225294430 | pes2o/s2orc | v3-fos-license | A rare case of huge congenital parieto-occipital teratoma in a female infant: An unusual site of occurrence
Teratomas are considered the most common congenital tumors located on the dorsal midline and arise from cells derived from more than one germ layer (i.e. ectodermic, endodermis and mesodermic) at different regions of the body. Those in the head and neck regions are considered rare with an incidence of 1-3.5% of all cases. Imaging provides critical information that helps in formulating differential diagnosis even though, history and physical examination provides critical information in making the diagnosis. A successfully surgically treated case of an usually huge parietooccipital teratoma in a 40 days old girl was presented with emphasis on the importance of imaging in diagnosis and management.
Introduction
Teratomas are rare tumors with a prevalence of approximately 1/13,000, but are however, considered the most common congenital tumors located on the dorsal midline and arises from cells derived from more than one germ layer at different regions of the body due to change in location of the germ cells. 1 They contain ectodermic, endodermic, and mesodermic tissues.
Teratomas located in the head and neck region are very rare comprising 1-3.5% of all cases. 2 More common sites are the sacrococygeal region (40%) and the gonads (40%). 3 Males are reported to have slightly higher incidence (male: female of 2.5:1). 4 Although clinical history and physical examinations are very important in managing these patients, imaging provides critical information that helps in formulating differential diagnosis.
An infant who presented with a large rare teratoma of the scalp was presented and the importance of imaging in diagnosis and treatment of this lesion was emphasized.
Case Report
M.A., a 40 days-old girl was brought to the Paediatric Out-patient Department of Aminu Kano Teaching Hospital by her mother on account of a mass on the right side of the head which extended to the back of the head. The mass was said to have been noticed during routine prenatal sonography at second trimester of gestation at the referral clinic where antenatal care was booked. The mother who has four other healthy children with no history of similar mass at birth attended antenatal visits regularly in the course of the pregnancy of this patient. She had no history of consumption of alcohol or non prescribed medications during pregnancy.
Delivery of the child was via a caesarian section on account of obstructed labour due to dystocia. The child was said to have cried immediately at birth and did well since delivery. The mass on the right side of head was however said to have increased in size from its initial size at birth.
On physical examination, the general condition of the child was good. She was alert with an interest in the environment. There was a soft tissue, mobile, cystic mass overlying the right parieto-occipital region of the skull with an intact skin covering. It measured 17x24cm in size. No neurological deficits were noted on neurological assessment. Abdominal, cardio-respiratory and musculoskeletal systemic physical examinations were essentially normal. No swelling was seen in other parts of the body. Full blood count and differentials including PCV, Urea and creatinine, chest radiographs, abdominal ultrasound scan and transcranial cranial ultrasound scan of the brain were found to be within normal limits. An initial diagnosis of cystic hygroma was made. Differential diagnoses considered included giant teratoma, encephalocele, lipoma, dermoid cyst, dermal cysts and hemangioma.
A plain radiograph of the skull ( Figures 1 and 2) showed a large, fairly oval shaped mass of soft tissue density with regular outline and margins and internal calcifications in the right parieto-occipital region. No associated adjacent bony destruction or evidence to suggest intracranial extension of the mass was seen. Ultrasonography of the mass ( Figure 3) revealed a mixed echogenic appearance with both solid and cystic components. Focal areas of calcifications were noted within it. Doppler interrogation showed absent blood flow within the mass suggestive of hypovascular lesion. These findings in conjunction with those of clinical examination were suggestive of a benign soft tissue mass likely teratoma. She had a total surgical excision and histology of the mass, which on gross inspection of its content revealed hairy and finger-like structures embedded in a gelatinous fluid. Primary skin closure was done after securing haemostasis. The report at histology confirmed a grade II-III immature teratoma. The patient had uneventful post-operative recovery. She was discharged from the hospital one week post-op and was followed-up in the pediatric outpatient clinic. At six months of regular follow-up, the wound has healed well with no evidence of tumor recurrence. and spinal regions, gonads, and subcutaneous tissues. 2 Head and neck teratomas are rare conditions with a few reports in the literature. In a comprehensive review 3 conducted between 1966 and 2005, 10 teratoma patients were identified who had a midline teratoma on the back of the neck. The patient in this report had a parieto-occipital located lesion.
Males are reported to have slightly higher incidence (male: female of 2.5:1). 4 The index case is however seen in a female infant.
Histopathologically, teratomas are typically divided into three groups (mature, immature, and malignant). Mature teratomas contain well differentiated cells, while immature teratomas contain primitive structures that are not adequately differentiated. Teratomas that contain malignant component are classified as a malignant teratomas. 2 This patient had the benign form.
Both mature and immature forms usually contain tissues from all three germ layers, including skeletal muscle, cartilage, bone, bronchial epithelium, gut epithelium, and neural tissue. According to O'Connor and Norris, mature teratoma is grade-0, immature teratoma is classified as grade-3, based on the proportion of its immature elements and the mitotic rate. 5 Histopathological examination of specimen from the present patient contained immature elements (grade-3).
Imaging is very relevant in evaluation of children with cranial masses both in pre and post natal life as teratomas on the head and neck regions of the newborn can be easily confused with other causes such as encephalocele.
Congenital teratomas are often easily recognized during pregnancy by Ultrasound scan and Magnetic Resonance Imaging studies. 6 Ultrasound is non-invasive and safe. At prenatal Ultrasound, the diagnosis of teratoma should be considered for a complex cranial mass with calcifications. Mostly they are seen on post natal examination as large, heterogeneously echogenic masses with cystic and solid components as well as presence of characteristic calcific elements in it. Sonographic examination of the mass in this patient showed similar features ( Figure 3).
Cranial encephalocele closely resembles this condition and Ultrasound diagnosis is based on the recognition of a cystic (meningo-encephalocele) or complex (meningo-encephalocele) mass of variable size protruding through a skull defect, 7 often localized in the occipital region. The mass usually lacks calcification as in teratoma. There may be associated hydro-Case Report cephalus and a spina bifida. The index patient did not have any of these abnormalities. The plain skull radiograph demonstrates a soft tissue mass overlying the involved area of skull vault with calcific foci. There is usually preservation of the adjacent cranial vault. In the index case, the skull radiograph (Figures 1 and 2) demonstrated characteristic calcific structures within the mass with intact adjacent skull bone. 4 Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) are valuable diagnostic procedures in showing the extracranial localization of the mass and ruling out intracranial extension. 4 These were not done for this patient due to financial constraints.
Teratomas show cystic and solid areas and areas of fat density at CT and MRI. 2 The typical T1-and T2-weighted imaging appearance is a large heterogenous mass with cystic component. No apparent difference between mature and immature teratomas may be identified. 1 Although MR imaging is poor for detecting small calcifications, CT has demonstrated regions of calcification in most teratomas.
In addition to cranial encephalocele, other differential diagnoses of teratomas of the head and neck may include hemangiomas, cystic hygromas, lymphangiomas, lipomas, dermal cysts, vascular malformations, and cutaneous cysts. 4 The treatment of teratomas consists of early and complete resection of the tumor mass as was done for this patient. Surgical intervention can prevent the risk of malignant transformation. 8 The prognosis is excellent after total excision in scalp teratomas such as in this case. 9 Conclusions A rare case of right parieto-occipital teratoma in a 40 days-old female child was presented who had imaging and histological diagnosis. She had a total surgical excision of the tumor with no post-operative complications and uneventful follow up. | 2020-09-10T10:24:13.477Z | 2020-09-07T00:00:00.000 | {
"year": 2020,
"sha1": "87154ac13e728156e90e78d9756bbcf8cc715176",
"oa_license": "CCBYNC",
"oa_url": "https://www.pagepress.org/medicine/pjm/article/download/61/28",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b22836d539be41f1e1b0912dd8a123995c16f094",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
233598490 | pes2o/s2orc | v3-fos-license | Correlation of Oxidative Stress and Hepatoprivial Syndrome in Dogs with Comorbidity of Babesiosis and Dirofilariasis
The comorbidity of babesiosis and dirofilariasis in dogs is an important clinical problem, despite the significant achievements of recent years in understanding the pathogenesis of this mixed invasion. It has been established that the leading pathogenetic component in the development of the cytolytic syndrome with this comorbidity in dogs is oxidative stress resulting from the mismatch of the prooxidant and antioxidant resources of the cell under the influence of Babesia canis parasitism. On the basis of morphological, biochemical and ultrasonographic studies, a direct correlation was found between oxidative stress and hepatoprivial syndrome in dogs with comorbidity of babesiosis and dirofilariasis. Hepatoprivial syndrome was accompanied by the development of hypochromic anaemia, leukocytosis, hyperproteinemia, hypoglycemia, a disorder of pigment metabolism and an increase in the catalytic activity of serum enzymes, which indicated a violation of the metabolic activity of the liver and damage to its parenchyma. The activation of lipid peroxidation processes in the hepatocytes contributed to an increase in the catalytic activity of the blood serum enzymes in sick animals, and to a decrease in the antioxidant defence of sick dogs, due to a relative decrease in the level of vitamin A in the blood. Thus, the comorbidity of babesiosis and dirofilariasis in dogs enhances the oxidative syndrome that underlies the pathogenetic mechanisms of this mixed invasion, thereby increasing the degree of involvement in the pathological process of the liver, which is manifested by hepatoprivial syndrome.
Introduction
Long-term studies have proved that the classical clinical picture of many diseases in modern conditions undergoes changes and gradually loses its typicality. The reason for these changes is that the existence of life on Earth is possible only in the form DonAgro of associations. In all biological systems, the process of self-organization necessarily proceeds with the participation of a large number of objects of various types. Therefore, the question of the comorbidity of diseases is the result of the existence of a biological system. This circumstance gives reason to put forward the assumption that monoinvasions not only do not occur in the animal kingdom but any infectious disease, in principle, cannot occur without the mutual influence of other pathogens on it [1].
This problem is especially acute among vector-borne diseases. Russia is one of the vastest countries, on the territory of which blood-sucking insects are almost universally distributed. This state of affairs creates the prerequisites for the existence of associative natural foci [2][3][4].
Prerequisites for the comorbidity of babesiosis and dirofilariasis are polymorphism of manifestations of dog babesiosis and the persistence of parasites in animals. This issue is an important clinical problem, despite the significant achievements of recent years in understanding the pathogenesis of this mixtinvasion. Also, the problem of the mechanism of the development of the association remains open. There are also many unresolved issues in the diagnosis of the disease, therapy and prevention of this mixtinvasion [5].
It can be argued that oxidative stress is the leading pathogenetic component in the development of the cytolytic syndrome in babesiosis in dogs. This stress arises as a result of the prooxidant mismatch and antioxidant resources of the cell under the influence of parasitism of Babesiacanis. Besides, a violation of the homeostatic mechanisms of oxidative metabolism of the body is considered recently as an independent syndrome [6].
Thus, it can be argued that the comorbidity of babesiosis and dirofilariasis makes it difficult both to choose methods of therapeutic correction [7 -9] and to predict the course of this association [10 -12]. A set of treatment and prophylactic measures cannot be fully implemented for many reasons: chronicity of this pathology, complications associated with intoxication, impaired metabolic function of the liver and involvement of components of the cardiopulmonary and hepatorenal systems in the pathological process [13 -15].
Our research aimed to study the correlation level of oxidative stress and hepatoprivial syndrome in dogs with the associative course of babesiosis and dirofilariasis. Taking into account the peculiarities of the etiopathogenetic aspects of the disease, to achieve the intended goal, we set the following tasks: to study the clinical status, morphological, biochemical blood parameters, ultrasonographic studies of the hepatobiliary system in dogs with mixtinvasive dirofilariasis -babesiosis, with signs of hepatopriva syndrome. DOI
Methods and Equipment
The work was performed during 2019-2020 at the Department of Therapy and Propaedeutics of the Don State Agrarian University (Persianovsky village) and the Vitavet Veterinary Clinic (Novocherkassk city).
In order to carry out research, experimental and control groups of animals were formed. In each group, there were ten large-breed dogs aged 3.5 to 4 years, patients with mixtinvasion, dirofilariasis-babesiosis with signs of hepatoprivial syndrome. The groups were formed based on pairs of analogues as animals entered the veterinary clinic. The diagnosis was made based on anamnesis, the results of a clinical study, laboratory blood tests and microscopy of peripheral blood smears. A clinical study of sick animals was carried out according to standard methods.
In order to confirm the diagnosis of dirofilariasis in dogs, a blood smear was examined, and an immunochromatographic method was used. For the final diagnosis of babesiosis, microscopy of smears of the peripheral blood of dogs was performed. Ultrasonographic studies of the hepatobiliary system in sick animals were performed on a MindrayUMT-150 apparatus. At the same time, they were evaluating the size, structure of the liver, gall bladder, as well as the presence of free fluid in the abdominal cavity.
In the blood, the content of erythrocytes, leukocytes, the concentration of hemoglobin on a veterinary hematological analyzer PCE-90 VET, and the erythrocyte sedimentation rate were determined. The level of total serum protein, albumin, glucose, total bilirubin, direct bilirubin, creatinine, urea, alanine aminotransferase, aspartate aminotransferase, cholinesterase, amylase, alkaline phosphatase was determined using a BIOBASE-8021A automatic biochemical analyzer. To determine the level of vitamin A, the quantitative method of O. A. Bessie in the modification of A. A. Anisova was used.
Results
As a result of clinical trials of sick animals, signs of apathy, anorexia and polydipsia were revealed. An increase in body temperature was observed in dogs of the experimental group to 40.9 ± 0.5°C. In the control group to 41.1 ± 0.4°C. An increase in heart rate was recorded to 182 ± 2.0 beats per minute in the experimental group and 185 ± 3.0 beats per minute. In control, the respiratory rate reached 35 ± 4.0 respiratory movements per minute and 36 ± 2.0 respiratory movements per minute, respectively. In animals, icteric staining of the mucous membranes of the oral cavity, conjunctiva of the eyes was DonAgro observed, urine was red. Also, in sick dogs, dehydration, salivation, vomiting, shortness of breath during physical exertion were detected.
Microscopic examination of peripheral blood smears of sick animals stained according to Romanowski Giemsa visualized paired pear-shaped forms of Babesia canis canis in red blood cells, and parasitemia reached 1% (Figure 1b). Microfilariae Dirofilaria immitis in blood was detected in sick dogs using a saturated smear method (Figure 1a). a b (Table 1).
Thus, the erythrocyte count in dogs of the experimental group was lower than the arithmetic average of the reference values by 29.41%, and the control -by 33.82%, the haemoglobin value was lower by 40.76% and 39.45%, respectively.
Discussion
Against the background of leukocytosis in dogs with the comorbidity of babesiosis and dirofilariasis, the development of hypochromic anaemia was due to impaired hematopoiesis and red blood cell lysis due to parasitization of Babesiacanis. Also, the massive breakdown of red blood cells and the release of haemoglobin, which breaks down in the liver to bilirubin, caused an increase in the level of direct and total blood bilirubin. An
Conclusion
Thus, with the comorbidity of babesiosis and dirofilariasis, the leading trigger of oxidative stress is the parasite Babesiacanis. Therefore, lipid peroxidation processes affect the structure of the hepatorenal system primarily and not cardiopulmonary. Also, oxidative stress correlates with the degree of involvement of hepatocytes in the pathological process, which leads to the development of hepatoprivial syndrome with the involvement of structures of the hepatorenal system. | 2021-05-04T22:05:28.202Z | 2021-04-05T00:00:00.000 | {
"year": 2021,
"sha1": "d1fc5af4d6600fa6dda6814fa3546a6be7ab005e",
"oa_license": null,
"oa_url": "https://knepublishing.com/index.php/KnE-Life/article/download/9007/15317",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "1f3793b45276dcee01051a74c37faa4e9fcb6fb9",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
267754341 | pes2o/s2orc | v3-fos-license | Knowledge, Attitude, Practice, and Perceived Barriers Regarding Colorectal Cancer Screening Practices Among Healthcare Practitioners: A Systematic Review
The recommendations of medical professionals play a significant role in colorectal cancer (CRC) screening. This study aims to systematically review knowledge, attitude, practice, and perceived barriers regarding CRC screening practices among healthcare practitioners (HCPs). From January 2023 to December 2023, a comprehensive literature search was conducted using online databases, including Web of Science, PubMed, Scopus, and Research Gate, by using the following keywords in combination: "knowledge," "attitude," "practice," "perceived barriers," "colorectal cancer," and "health practitioners." The researchers screened and examined the retrieved literature. A total of 21 studies were considered relevant for the current review. Among these studies, eight assessed the level of knowledge, attitude, practices, and perceived barriers toward CRC screening among various health practitioners. Three studies assessed knowledge and attitudes toward CRC screening among health practitioners. The remaining ten studies assessed awareness, perceived barriers, or only knowledge of CRC screening among HCPs. In addition, all the included studies employed a cross-sectional design. The review shows that many healthcare providers need more fundamental knowledge of CRC screening. Healthcare procedures must be improved to enhance the knowledge, attitudes, and practices of healthcare professionals regarding CRC screening and their understanding of the associated barriers.
Introduction And Background
Colorectal cancer (CRC) is one of the leading causes of cancer-related deaths in both industrialized and developing nations [1].Researchers predict that the global male age-standardized CRC rates will be 20.6/100,000 and the female age-standardized rates will be 14.3/100,000 [1].In 2012, CRC diagnoses reached 1.4 million, leading to 693,900 deaths [2].According to global cancer projections, the Middle East and Western Asia are seeing rising rates of CRC, which is due mainly to the rising incidence of CRC risk factors [2].
Current evidence establishes age as a significant contributing factor to the occurrence of CRC, with the risk increasing at the age of 40; after the age of 50, the risk of CRC diagnosis significantly increases [3].Other risk factors, such as alcohol, smoking, familial or genetic polyposis, and ulcerative colitis, are also associated with an increased risk of CRC [4].Screening procedures in primary healthcare settings, where primary healthcare physicians play a significant role in the early detection of cancers and chronic disorders, are part of CRC prevention methods [4,5].Furthermore, experts recommend improving current medical education, starting at the academic degree level, for prevention and early identification [4].
Early identification and removal of precancerous polyps avoid the earliest stage of CRC pathogenesis [4].A study conducted by Zauber et al. stated that identifying and excising colon polyps in their early stages can avoid CRC mortality [6].Most worldwide guidelines suggest screening individuals 50-75 years of age for CRC every 10 years with a colonoscopy, every five years with a flexible sigmoidoscopy, or every year with a fecal occult blood test (FOBT) [7].Therefore, primary care physicians and healthcare practitioners (HCPs) are essential in the early identification of cancer.Research has indicated that general practitioners diagnose most cancer patients at primary care centers [7].
Several studies have shown that enhancing HCPs' knowledge and attitude about CRC screening enables them to adhere more closely to screening recommendations and ensure proper administration of tests [7].The HCPs' backgrounds also have a significant role as they influence their capacity to offer CRC screening services.HCPs with personal experience with CRC, such as having a family member with the disease or treating a patient with it, are more likely to offer or suggest CRC screening to all eligible patients [7].Therefore, we aimed to conduct a systematic review on assessing knowledge, attitude, and perceived barriers regarding CRC screening practices among HCPs.
Search Strategy
This review utilized the Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols (PRISMA-P) search strategies.The researchers completed a literature search for relevant studies using the electronic databases PubMed, Science Direct, Scopus, Research Gate, and Google Scholar.The search strategy consisted of keywords and controlled vocabulary terms related to CRC, knowledge, attitude, perceived barriers, and HCPs.In order to conduct a comprehensive search, we also explored additional sources, including reference lists of relevant articles and grey literature.
Inclusion and Exclusion Criteria
The inclusion criteria for articles used in this review were 1) studies conducted among healthcare professionals regardless of ethnicity, country, or profession, 2) experimental studies that assessed knowledge, attitude, and perceived barriers, 3) studies related to CRC, and 4) articles published in English.Exclusion criteria were narrative reviews and studies conducted among non-healthcare professionals.In addition, this review excluded dissertations, theses, monographs, and commentaries.
Data Extraction
Two reviewers independently performed data extraction using a standardized form.The reviewers extracted participant characteristics (sample size, gender, age, and profession), study design, country of study, followup period (weeks or months) for assessing knowledge, attitude, and practice regarding CRC, and study outcome details from the retrieved studies.
Quality Assessment
The authors assessed the methodological quality and risk of bias in the included studies using the Cochrane Risk of Bias tool.
Results
Criteria for relevant articles that investigated the assessment of knowledge, attitude, and perceived barriers regarding CRC screening practices among HCPs followed the PRISMA guidelines (Figure 1) [8].The initial search identified a total of 337 records.All duplicate records were removed, leaving 190 records (n = 147).After screening the title and abstract, we identified 60 studies.We excluded 39 out of the 60 considered studies for not meeting the study inclusion criteria.Hence, 21 studies were considered eligible and relevant to the research objectives for this review.
NA
The majority of students (79.8%) knew of CRC and its screening procedures (98.9%).
Discussion
Globally, CRC is the second leading type of cancer in women and the third leading type in males [1].CRC screening significantly reduces both the incidence and death of CRC cancers [2,18].Researchers have indicated that physician advice is the most significant predictor of patient compliance with CRC screening [2,7].The present review presents a comprehensive literature review on the level of knowledge, attitude, practices, and perceived barriers toward CRC screening among health practitioners.Most of the studies investigated the level of knowledge, attitude, and practice of CRC screening among physicians.It is critical to comprehend the healthcare providers' viewpoint on CRC screening, given the significance of early screening, especially when advised by healthcare providers.
The findings of this review illustrate that a high proportion of HCPs need to gain more knowledge about CRC screening.For example, many healthcare providers need to gain current knowledge regarding CRC screening and the guidelines for widely used CRC screening techniques, including FOBT, sigmoidoscopy, and colonoscopy [15,26,28].Most medical professionals stated that their knowledge of CRC screening and prevention needed to be improved.Studies focusing primarily on medical professionals have revealed similar results, demonstrating that most medical students graduate without the skills required to help patients prevent and detect cancer and that numerous physicians lack adequate training and confidence in essential cancer prevention and detection techniques [18,22,29].
Our study's findings demonstrate that most healthcare providers needed better CRC screening practices since they infrequently carried out tasks, including ordering, referring, educating patients about their health, or suggesting CRC screening for eligible patients [10,11,13,17].Healthcare professionals' CRC screening practices were impacted by the absence of mechanisms to identify eligible individuals, the presence of cancer experts, continuing professional education on cancer prevention, and cancer screening policies at health facilities [13,15].Raising the level of knowledge among healthcare workers through ongoing education initiatives and establishing specific regulations, procedures, and guidelines for CRC screening inside healthcare facilities are likely to improve CRC screening [15].Research shows that medical students' professional habits and practices greatly depend on the knowledge and attitudes they acquire in medical school or professional training programs [15,18].
The most reported barriers to CRC screening are a lack of qualified personnel to follow-up with CRC screening techniques and patients' anxiety over learning they have cancer [9,12,15,23].Health professionals' knowledge and inability to inform patients about the advantages of CRC screening, available screening test alternatives, potential side effects, and the specifics of the procedures could exacerbate these barriers [15,23].The health professionals also mentioned the lack of qualified healthcare professionals and patient follow-up, the cost, the accessibility of screening services, the absence of a policy, the length of waiting periods for appointments, and the volume of patients [15,23].In addition, primary care providers expressed their top concerns about colonoscopy screening, including a lack of information, time constraints, and patient anxiety over the invasive nature of the procedure, and patients' perceptions of FOBT's benefits [12].
This article mainly focuses on the level of knowledge, attitude, practices, and perceived barriers toward CRC screening among health practitioners.However, the study does have limitations despite being conducted across various parts of the world.First, all the studies were conducted using a cross-sectional survey design; this illustrates the need to develop appropriate interventions to improve knowledge, attitude, practice, and perceived barriers and test their efficacy among HCPs.Second, only the studies limited to HCPs were included in the current study.Future studies could consider reviewing studies involving different demographics, such as patients and healthy people.
FIGURE 1 :
FIGURE 1: PRISMA flow diagram PRISMA, Preferred Reporting Items for Systematic Review and Meta-Analysis Protocols that, in spite of their crucial roles in health education, counseling, and the referral of patients who qualify for screening, physicians and nurses working in primary care settings lack sufficient knowledge of CRC screening.Furthermore, there were no significant differences in the attitudes and level of knowledge about CRC screening between nurses and physicians.the three groups is familiar with the concept of the high-risk population, the results indicate that family medicine physicians are significantly more knowledgeable about the majority of CRCrelated issues than primary care specialists in other fields and general practitioners.who did not undertake CRC screening, those who reported practicing the procedure had higher knowledge scores.Compared to female physicians, male physicians scored better on the attitude scale.In comparison to practicing physicians, the study indicated that physicians who did not provide CRC screening identified barriers more frequently.The frequently reported barrier was patients' lack of knowledge.Furthermore, despite their level of knowledge and positive attitude, a sizable portion of family physicians do not perform CRC screening.general internists.56.9% were male and 43.1% were female.The mean number of years since graduation from medical school was 21.5 (SD = 8.6).NA Although family medicine physicians and general interns showed awareness of the CRC guidelines, they were less knowledgeable in the recommended modalities and periodicities to accurately apply the guidelines to screening patients at an average risk.Nichols et al. (2009) [13] consisting of family practice physicians (86.2%), male (67.9%), and white (82.3%).Most had been in practice for 16 or more NA The results show that physicians' selfreported screening rates were not optimal, and the majority of physicians stated they had little knowledge of insurance coverage for screening.2024 Alzoubi et al.Cureus 16(2): e54381.DOI 10.7759/cureus.yearmedical students in the class of 2007 at the University of California, San Francisco.The study population consisted of 54.4% females and 45.6% males.NA The type and amount of experience students had in giving CRC screening counseling predicted their performance in the standardized patient contact, while their knowledge or confidence did not.= 52).The study population consisted of 31.82%females and 68.18% males.NA In line with the recommendations of the American Cancer Society, the majority of respondents (76%) suggested one or more CRC tests, and 93% said that patient anxiety around these tests posed a substantial barrier to screening.90% of physicians said they used the FOBT as a screening test; however, the majority said they did not track their patients' progress or use any kind of incentive to have them finish and return the FOBT kits.that a significant percentage of HPRN providers and nursing staff members in general have poor knowledge that CRC is one of the primary causes of cancer death.Furthermore, only 26% of nurses discuss colon cancer prevention and screening with their patients who meet the eligibility criteria; 29% of nurses said that their patients begin the conversation about screening. of the students were in the clinical years, while one-third were in the pre-clinical years.
years (SD = 6.42).NA who reported utilizing a guideline, just 3.8% could identify it.In this study, the vast majority performed CRC screening.The respondents do not have sufficient knowledge of the ages at which screening tests should begin and end, as well as the frequency at which they should be conducted.They also fail to assign the CRC guidelines enough importance.physicians consisted of females (78.2%) and males (21.8%), with a mean age of 35.4 years (SD = 7.6).NA Primary care physicians lacked sufficient knowledge and expertise in CRC screening.The practice of screening did not follow from knowledge of it.Their opinions regarding the cost-effectiveness of the process and the availability of sufficient resources significantly influenced the practice of the participants in the study believed that CRC posed a major risk to public health.Nonetheless, the overwhelming majority of participants recognized symptoms associated with CRC and agreed that screening for the disease is crucial for lowering its incidence and mortality.Merely 38% of the participants reported having come across information about CRC screening, either as part of their medical education or as part of public education.Additionally, only 60% of the participants indicated that they would be open to receiving more information.In terms of colonoscopy, 85% would rather use a different procedure for screening for CRC.Furthermore, 68% of respondents said they would like more information about the procedure, and 53% said it was an .The majority were between the ages of 30 and 39 years old (33%) and had more than 10 years of working experience (47%).NA The great majority of healthcare workers were poorly knowledgeable of the CRC screening program's protocols.Nonetheless, once trained, the great majority of HCWs were willing to perform screening.2024 Alzoubi et al.Cureus 16(2): e54381.DOI 10.7759/cureus.54381sectional survey
TABLE 1 : Summary of existing literature on the assessment of knowledge, attitude, and perceived barriers regarding CRC screening practices FOBT
, fecal occult blood test; SIU, Southern Illinois University; UW, University of Wisconsin; HPRN, High Plains Research Network; HCP, healthcare practitioners; HCWs, healthcare workers; CRC, colorectal cancer Among the included studies, six were conducted in the USA, three in Saudi Arabia, two in Jordan and Turkey, and one in Canada, Ghana, Greece, Israel, Malaysia, Oman, South Africa, and Syria.The sample size of the included studies ranges from 36 to 581.Furthermore, 11 included studies were carried out among physicians, including family physicians, primary care physicians, and general physicians.Five studies used general healthcare workers, including doctors and nurses.The remaining five studies used medical students.All of these things are clarified in Table1. | 2024-02-20T16:03:48.034Z | 2024-02-01T00:00:00.000 | {
"year": 2024,
"sha1": "6a16cc09198e118858b6709955cd544bed023ade",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "e513303a7db4b42d501e03513afe8c27ad29d39e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
254406250 | pes2o/s2orc | v3-fos-license | The magnitude of healthcare professionals' turnover intention and associated factors during the period of COVID-19 pandemic in North Shewa Zone government hospitals, Oromia region, Ethiopia, 2021
Background Healthcare professional turnover and shortages are perceived as a global issue affecting the performance of healthcare organizations. Studies show that the coronavirus disease has physical and psychological effects on healthcare workers. This study assessed the magnitude of turnover intention and related factors during the COVID-19 pandemic. Methods A hospital-based cross-sectional study of 402 healthcare professionals working in the North Shewa Zone was conducted during the COVID-19 pandemic from 1 February to 28 February 2021. The data were collected using a self-managed structure questionnaire, entered into EpiData version 3.1, and exported to SPSS version 25 for further analysis. We performed a logistic regression analysis to identify factors related to healthcare professionals' turnover intention. Finally, the data were displayed in frequency, percentage, and summary statistics. Result From the total of 402 study participants, 363 of them were involved in the study with a response rate of 90.3%. The magnitude of healthcare professionals' turnover intention was 56.7%. Single marital status (AOR: 3.926; 95% CI: 1.961; 7.861), completion of obligatory service years (AOR: 0.287; 95% CI: 0.152, 0.542), dissatisfaction with the training opportunities (AOR: 2.407) 95% CI: 1.232, 4.701), having no established family (AOR: 2.184; 95% CI: 1.103, 4.326), dissatisfaction with organizational decisions process (AOR: 0.483; 95% CI: 0.250, 0.932), low continuous organizational commitment (AOR: 0.371; 95% CI 0.164; 0.842), dissatisfaction with professional development opportunities (AOR: 2.407; 95% CI: 1.232–4.701), and a non-conducive work environment (AOR: 2.079; 95% CI: 1.199, 3.607) were independent predictors of turnover intention. Conclusions Our study showed that 56.7% of healthcare professionals have turnover intention. Being unmarried, lack of training opportunities, lack of established family, having completed the obligatory service years, non-conducive work environment, low continuous organizational commitment, dissatisfaction with the decision-making of the organization, and dissatisfaction with professional development opportunities of the organization all contributed to a higher rate of healthcare professionals' turnover intention. Recommendations Healthcare organizations and other concerned bodies should create strategies that enhance the working environment, foster continuous organizational commitment, improve organizational decision-making, and provide professional development and training opportunities to lower the rate of turnover intention.
Background: Healthcare professional turnover and shortages are perceived as a global issue a ecting the performance of healthcare organizations. Studies show that the coronavirus disease has physical and psychological e ects on healthcare workers. This study assessed the magnitude of turnover intention and related factors during the COVID-pandemic.
Methods:
A hospital-based cross-sectional study of healthcare professionals working in the North Shewa Zone was conducted during the COVID-pandemic from February to February . The data were collected using a self-managed structure questionnaire, entered into EpiData version . , and exported to SPSS version for further analysis. We performed a logistic regression analysis to identify factors related to healthcare professionals' turnover intention. Finally, the data were displayed in frequency, percentage, and summary statistics.
Result: From the total of study participants, of them were involved in the study with a response rate of . %. The magnitude of healthcare professionals' turnover intention was . %. Single marital status (AOR: . ; % CI: .
Introduction
Following the report of the first case of Coronavirus Disease in China Wuhan Hubei Province, the World Health Organization (WHO) declared on 31 December 2019 that the new disease was a pandemic (1). Because of its widespread infectiousness and high infection rate, it has posed a serious threat to national public health (2). The COVID-19 pandemic is an extraordinary challenge for the world, and it has placed great stress on healthcare systems globally, increased the demand and workload for healthcare workers, and had an impact on the physical and mental health of healthcare workers due to fear of contracting the disease or passing it on to their loved ones (3).
Healthcare workers, the main force in the fight against the pandemic, are at greater risk than others. They developed a sense of frustration and helplessness while caring for patients with confirmed or potentially suspected for COVID-19 (4). Studies have found that during the COVID-19 pandemic, healthcare workers were experiencing physical and mental stress (5,6), from 2.2% to 14.5% of health care workers suffered from severe stress, anxiety and depression (7). Studies also found that working in high pressure and stress full work environment makes healthcare workers more prone to fatigue, produce psychological disorders and leading to significantly higher turnover rate (6,8).
In countries with limited resources and increased demand for personal protective equipment during the COVID-19 pandemic, the lives of healthcare workers were severely threatened (9). A large number of healthcare workers have died of COVID-19 due to a lack of personal protective equipment, while others continue to struggle with the pandemic (10,11). Longer working times, increased patient loads, shortages of human resources, and shortages of personal protective equipment were explained by the increased turnover intention of healthcare workers during the time of the pandemic (12). Turnover intention which is defined as the probability that employees of the organization will voluntarily leave their jobs at some point in the near future was considered to be the strongest predictor of actual turnover among healthcare personnel (13). COVID-19 and the high rate of the turnover intention of healthcare workers have become major challenges and have placed a heavy burden on the healthcare system (14). Prior to the COVID-19 pandemic, research in Ethiopia reported 52.5-67.8% magnitude of turnover intention among healthcare professionals (15)(16)(17)(18), however, studies done during the pandemic reported turnover intentions of 56.3-70.7% (19)(20)(21). This slight discrepancy might be due to the effect of the pandemic. Sociodemographic characteristics, organizational commitment, professional development, organizational leadership style, work satisfaction, work environment, and work performance all influence healthcare workers' turnover intention (17, [22][23][24]. It is of great significance to understand the relation between COVID-19 and turnover intention. Therefore, the purpose of this study was to assess the current situation and influencing factors of healthcare workers' turnover intention during the COVID-19 pandemic in North Shewa Zone Government hospitals.
Source population
All healthcare professionals of the six government hospitals at North Shewa Zone in 2021.
Study population
All randomly selected healthcare professionals including nurses, midwives, pharmacists, laboratorist, medical doctors, radiographers, surgeons, and gynecologists of the six government hospitals at North Shewa Zone were available during the study period.
Inclusion and exclusion criteria
The study included all healthcare professionals that were available during the data collection period and excluded those providing care for critically ill patients, non-permanent workers, and involuntary participants.
Sample size determination and sampling technique
A single population proportion formula was used to determine the sample size with the prevalence of healthcare professionals' turnover intention = 61.3% from a previous study conducted in the Amhara Region, Ethiopia (17) at a 95% level of confidence and 5% margin of error.
Data collection tool and procedures
The data were collected by two diploma-holder nurses using a self-administered structured questionnaire developed from the review of related literature in Ethiopia (17, 18). The questionnaire had four parts. Part one: Sociodemographic characteristics of the study participants (age, sex, marital status, ethnicity, religion, profession, educational status, work experience, hospital type, obligatory service year, extra income sources, and salary satisfaction). Part two: A turnover intention measurement scale with three questions on a 5-point Likert scale ranging from strongly disagree to strongly agree where 5 = strongly disagree, 4 = disagree, 3 = no opinion, 2 = agree, and 1 = strongly agree developed from a previous study (17) was used. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1" and the mean was used to classify the turnover intention status of the study participants.
Part three is about organizational related factors. Under this variables such as the level of satisfaction with leadership style, professional opportunity, communication level, involvement in decision making, perceived organizational support, and training opportunity in the organization were measured with yes/no questions. The level of organizational commitment was measured with questions on a 5-point Likert scale ranging from strongly disagree to strongly agree where 5 = strongly disagree, 4 = disagree, 3 = no opinion, 2 = agree, and 1 = strongly agree. Organizational commitment has three domains: continuous, affective, and normative commitment. The normative commitment was measured by three questions whereas affective and continuous commitment were measured by four questions (17). We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1" and the mean was used to classify the level of organizational commitment for each domain.
Part four is about job related factors with five subdomains on a 5-point Likert scale ranging from strongly disagree to strongly agree where 5 = strongly disagree, 4 = disagree, 3 = no opinion, 2 = agree, and 1 = strongly agree. Job satisfaction related questions with, five item questions, Job performance related questions with three item questions, Work environment related questions with six item questions, Job stress related questions with eight item questions, and Work overload related questions with six item questions on five point Likert scale (17). We recorded strongly disagree, disagree, and no opinion in to Frontiers in Health Services frontiersin.org . /frhs. .
"0" and agree and strongly agree in to "1" and the mean was used to classify study participants based on the job related factors.
Data quality control
To control the data quality, data collectors were trained and supervised by the principal investigators. Before actual data collection, a pretest was conducted on 5% of the sample size in the Fiche Town Health Center. The completeness of the questionnaire was checked before data entry.
Data processing and analysis
Data were entered into EpiData version 3.1 and exported to SPSS version 25 for analysis after being checked for correctness. A bivariable analysis was used to find variables that were significantly related. Variables with a p-value < 0.25 in bivariate analysis were incorporated into a multivariable logistic regression model to investigate independent factors by controlling for possible confounders. In the multivariable logistic regression analysis, an AOR of 95% CI with a pvalue < 0.05 was determined to identify the associated factors. To characterize the study variables and factors under study, data were presented in frequency, proportions, and summary statistics.
Statement of ethical approval
Salale University's Ethical Review Committee provided ethical permission prior to data collection with the reference number SLU-IRB-003/2021. The North Shewa Zone Health Bureau and the study hospitals provided permission letters. Furthermore, after being informed about the study's goal, each study participant completed a written informed consent form.
Operational definition Turnover intention
The extent to which health personnel intends to depart their organization in the near future. In the current study, it was measured with three questions on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. Strongly disagree, disagree, and no opinion were recorded as 0 representing "not intending to leave" whereas, agree and strongly agree were recorded as 1 representing "intending to leave." Depending on the mean score value, those study participants who scored the mean and above were regarded as intending to leave and those who scored below the mean were regarded as not intending to leave.
Organizational commitment
An organization member's psychological attachment toward their working organization. It has three domains: Affective commitment is an employee's emotional attachment toward their organization and is measured by four questions on a 5point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. Strongly disagree, disagree, and no opinion were recorded as "0" whereas, agree and strongly agree were recorded as "1." Depending on the mean score value, study participants who scored mean and above were regarded as having a high affective commitment denoted by "1" and those who scored below the mean were regarded as having a low affective commitment denoted by "0" respectively.
Continuance commitment
This is the level of commitment by employees who think leaving their organization would be costly and is measured by four questions on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1." Depending on the mean score value, study participants who scored mean and above were regarded as having high continuous commitment denoted by "1" and those who scored below the mean were regarded as having low continuous commitment denoted by "0."
Normative commitment
This is the level of commitment of employees who feel obligated to stay in the organization and that staying is the right thing to do. It is measured by three questions on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1." Depending on the mean score value, study participants who scored mean and above were regarded as having high normative commitment denoted by "1" and those who scored below the mean were regarded as having low normative commitment denoted by "0."
Job satisfaction
The state of health workers being satisfied by their job was measured by five questions rated on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1." Depending on the mean score, study participants who scored mean and above were regarded as satisfied with their job denoted by "1" and those who scored below the mean were regarded as unsatisfied with their job denoted by "0."
Work environment
This is a pleasant working atmosphere and was measured by five questions rating on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, Frontiers in Health Services frontiersin.org . /frhs. .
2 denoting agree, and 1 denoting strongly agree. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1." Depending on the mean score, study participants who scored mean and above were regarded as having a non-conducive work environment which was denoted by "1" and those who scored below the mean were regarded as having a conducive work environment which was denoted by "0."
Workload
This is the work pressure present in health institutions. It is measured with six questions rating on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1." By using the mean score, study participants were classified as having workload when they scored mean and above which was denoted by "1" and as not having workload when they scored below the mean denoted by "0."
Job stress
This refers to work-related duties and responsibilities that become burdensome and impose unhealthy effects on the mental and physical wellness of employees. It is measured by eight questions rated on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1." By using the mean score, study participants were classified as having job stress when they scored at mean and above which was denoted by "1" and as not having work stress when they scored below the mean as denoted by "0."
Job performance
This is the work-related activities expected of an employee and how well those activities were performed. It is measured by three questions rated on a 5-point Likert scale with 5 denoting strongly disagree, 4 denoting disagree, 3 denoting no opinion, 2 denoting agree, and 1 denoting strongly agree. We recorded strongly disagree, disagree, and no opinion as "0" and agree and strongly agree as "1." By using the mean score, study participants were classified as having good job performance when they scored mean and above which was denoted by "1" and as having poor job performance when they scored below the mean denoted by "0."
Result Socio-demographic characteristics
A total of 363 health care professionals took part in this study, with a response rate of 90.3 percent. The research respondents' median age was 38, with a standard deviation of participants were nurses and 52.1% of the respondents had a bachelor's degree ( Table 1).
The magnitude of turnover intention
The overall turnover intention among the study participants was 56.7% (95% CL: 52-62%). The mean turnover intention of healthcare professionals was 9.3 (± 3.16 SD), which is computed from the three intention measuring questions. Midwives, younger, and unmarried health professionals all indicated higher rates of turnover intention, with 77.8, 68.2, and 67.9%, respectively. Medical doctors, on the other hand, reported a lower, 32.3% rate of leave intention (Table 2).
Work related factors
The effect of work-related factors such as job satisfaction, job performance, job stress, work environment, and work overload were studied. The mean score for job satisfaction was 14.67 (±4.15), for job performance 9.63 (±2.91), for work environment 18.33 (±5.67SD), for job stress 24.2 (±6.69), and for work overload 18.71 (±6.05), respectively, which are computed from the five, three, six, eight, and six questions, respectively. According to the findings, 50.4% of healthcare professionals were dissatisfied with their jobs, 55.9% reported a hostile work environment, and more than half of respondents (54.5%) had experienced job stress (Table 4).
Organizational commitment related factors
The overall organizational commitment of the study respondents was found to be 51.2%.
The mean score for affective, continuous, and normative commitment was 12.1 (±3.96 SD), 12.35 (±3.71 SD), and 11.84 (±3.89 SD) respectively. The study found that 52.6% and 54% of the study participants exhibited high normative and affective commitment. However, 59.8% of them had a low level of continuous commitment (Table 5).
Bivariable and multivariable logistic regression analysis
In the bivariable logistic regression analysis, seventeen variables out of thirty-one were candidate variables. Marital status, profession, type of health facility, mandatory service year, salary, established family, professional opportunity, organizational communication level, involvement in organizational decision making, perceived organizational support, affective, continuous, and normative commitment, job performance, work environment, job stress, and work overload were the variables. In a Multivariable Logistic analysis, marital status, obligatory service year, professional opportunity, an established family, dissatisfaction with the organization's decision-making process, continuous organizational commitment, and condition of work environment showed significant association ( Table 6).
As a result, single healthcare professionals were nearly four times more likely than married healthcare workers to leave their current working organization (AOR: 3.926; 95% CI: 1.961, 7.861). Those who finished their obligatory service year had a 71% higher chance of intending to depart (AOR: 0.287; 95% CI: 0.152, 0.542) than those who did not complete their obligatory service year. Similarly, healthcare professionals who were dissatisfied with the professional opportunities were 2.4 times more likely than their counterparts to leave their current working organization (AOR: 2.407; 95% CI: 1.232, 4.701).
Healthcare professionals with no established family were 2.2 times more likely than those with established families to plan to leave (AOR: 2.184; 95% CI: 1.103, 4.326). Those healthcare professionals who were satisfied with their organization's decision-making process had a 51% lower likelihood of leaving their current working organization (AOR: 0.483; 95% CI: 0.250, 0.932) than their counterparts. When comparing healthcare professionals with a low continuous organizational commitment to those with high continuous organizational commitment, the odds of turnover intention were nearly 63% greater (AOR: 0.371; 95% CI 0.164, 0.842). Additionally, healthcare professionals who reported a non-conducive work environment were two times more likely (AOR: 2. 079; 95% CI: 1.199, 3.607) to want to leave their current working organization than their counterparts (Table 6).
Discussion
Healthcare professionals are the key assets of the health system. Increasing and holding proficient and motivated healthcare workers is crucial for improving the quality of healthcare services. This study was conducted to examine the magnitude of health workers' turnover intention and associated factors. This is a significant issue because the turnover intention is the strongest predictor of actual turnover. In this study, we found that 56.7% (95% CL: 52-62%) of healthcare professionals have turnover intention. The magnitude of leave intention varied across the profession with a higher proportion (77.8%) among midwife professionals and a lower proportion among medical professionals (32.3%).
. /frhs. . In the context of the COVID-19 pandemic, the magnitude of turnover intention in our study is significantly higher than the study finding in South Korea (8%) (25), in China (10.1%) (12), in Saudi Arabia (32.2%) (26), and in Ghana (49.3%) (27), respectively. A possible reason for this discrepancy might be from a lower perceived risk of COVID-19 due to the availability of personal protective equipment and materials, increased awareness of healthcare professionals, and measures taken against COVID-19 in these countries which are all reducing healthcare professionals' turnover intention. Compared with studies conducted in Ethiopia during the ordinary period, it is similar to the study findings in the Kafa Zone in southwest Ethiopia (56.3%) (19), and in the North Shewa Zone of the Amhara region (61.3%) (17). However, it is lower than the result of studies conducted in the North Gondar Zone at the Amhara Regional State hospitals (67.8%) (18), and in Addis Ababa in the primary public health facilities (70.7%) (21). Since the study was conducted late in the pandemic, turnover intention in this period may be lower than a long-term turnover intention in a normal period. This variation might be due to the increased awareness of the disease by healthcare professionals and due to the beginning of COVID-19 vaccinations which are reducing the turnover rate.
The study also examined various factors related to turnover intention. Accordingly, being unmarried, finishing an obligatory service year, having a non-conducive work environment, being dissatisfied with decision-making of the organization, being dissatisfied with professional development opportunities, not having an established family, and having a low continuous organizational commitment were significantly associated with healthcare professionals' turnover intention. Unmarried healthcare professionals were more likely to consider leaving intention than married healthcare professionals, which is consistent with a study result in Korea (28). This might be due to the fact that unmarried healthcare professionals may not be stable and will prefer to move out than married healthcare professionals. But this finding is contradicted by a study finding in Kafa Zone Southwest Ethiopia, where married healthcare professionals were more likely to have leave intention than unmarried ones (19).
The study showed that the obligatory service year was the other determinant factor in that healthcare professionals who finished their obligatory service year have higher odds of leave intention than those who had not completed their obligatory service year. This was supported by a study finding in the Wollega Zone of northwest Ethiopia (29). This might be explained by not completing the obligatory service year, which enforces healthcare professionals to be stable in one place. Another sociodemographic factor that predicts turnover intention was not having an established family in that healthcare professionals without established families were more likely than those with established families to consider leave intention. This is consistent with a study conducted in Amhara Region's North Gondar Zone (18). Another aspect explaining turnover intention was professional development opportunities. The higher turnover intention was reported by healthcare professionals who were dissatisfied with the professional development opportunities than by their counterparts. A non-conducive work environment also predicts healthcare professionals' turnover intention. Healthcare professionals who reported a non-conducive work environment were more likely to have leave intentions than those healthcare professionals who reported a conducive work environment. This finding is consistent with studies conducted in Ethiopia (19,23). As for the individual dimensions of organizational commitment that were linked with the intention to leave, healthcare professionals with low continuous commitment experienced leave intention more likely than those healthcare professionals having high continuous commitment. This finding was supported by a study finding in Addis Ababa, Ethiopia (21). The organization's decision-making process was found to affect the healthcare professionals' turnover intention. Satisfied healthcare professionals by their organization's decision-making process have 51% lower leave intention than their counterparts.
Study limitation
Some limitations were noted in our study. First, our study data may be biased due to the entire self-report data. Second, it is difficult to explain the causal relationship between risk factors and turnover intention because of the cross-sectional study design. Third, the use of one sampling frame for all stratums leads to a lack of generalizability to the study population.
Conclusions
In this study, more than half of the study participants reported turnover intention. Variables such as single marital status, completion of obligatory service years, lack of established family, dissatisfaction with organizational decision-making, dissatisfaction with professional development opportunities, having a low continuous organizational commitment, and a non-conducive work environment were factors that increased turnover intention of healthcare professionals.
Recommendations
Improving the working environment, continuous organizational commitment, and decision-making of the organizations, and increasing opportunities for professional development helps to reduce healthcare professionals' turnover intention.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding authors.
Ethics statement
The studies involving human participants were reviewed and approved by Salale University's Research and Ethical Review Committee. The patients/participants provided their written informed consent to participate in this study.
Author contributions
The ideal conception, study design, data acquisition, analysis, and interpretation were all contributed to by all of the authors. All authors have reviewed the manuscript and approved its publication in this journal.
Funding
Salale University funded this research work.
. /frhs. . of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher. | 2022-12-08T18:23:49.860Z | 2022-12-08T00:00:00.000 | {
"year": 2022,
"sha1": "c4976b19b5559c594966e502fea538e282b0474e",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "c4976b19b5559c594966e502fea538e282b0474e",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
225193674 | pes2o/s2orc | v3-fos-license | Antioxidant activity and compound constituents of gamma-irradiated black rice ( Oryza sativa L.) var. Cempo Ireng indigenous of Indonesia
2020. Abstract. Suryanti V, Riyatun, Suharyana, Sutarno, Saputra OA. 2020. Antioxidant activity and compound constituents of gamma-irradiated black rice ( Oryza sativa L.) var. Cempo Ireng Indigenous of Indonesia. Biodiversitas 21: 4205-4212. Nowadays, black rice is gaining consumer interest because of its health benefit. Due to the high content of antioxidant compounds such as phenolics and flavonoids, the nutritional profile of black rice is much better than any other rice varieties. Anthocyanins, pigment with powerful antioxidant properties, give a vibrant color to the rice. The antioxidant activity and chemical constituents of the non-irradiated and gamma-irradiated black rice Oryza sativa L. var Cempo Ireng were investigated. The total phenolic content was determined based on the reaction of the Folin-Ciocalteu reagent with samples. Total anthocyanin was determined by the pH differential method. Antioxidant activity was fulfilled using DPPH method. The results revealed that non-irradiated and gamma-irradiated black rice were categorized as potent antioxidants. It is noted that irradiation increased antioxidant activity and changed the chemical components of black rice. Both of non-irradiated and irradiated black rice contains simple phenolics and flavonoids, including anthocyanins. Non-irradiated and irradiated black rice possess similar types of secondary metabolites, with different chemical content. The non-irradiated black rice contains anthocyanins of cyanidin-3-O-glucoside, whereas the irradiated black rice possesses anthocyanin of peonidin-3-O-glucoside. Additionally, irradiated black rice contains terpenoids, which increased its antioxidant activity compared to the
INTRODUCTION
Rice (Oryza sativa L.) is consumed by nearly half of the world's population and considered one of the most important cereal crops. Black rice, a pigmented rice, is becoming more popular as a functional food (Pratiwi and Purwesti 2017). Black rice consumption inhibits the cancer cell invasion, prevents cardiovascular disease, and reduces the risk of fatty liver diseases, diabetes, and obesity (Pratiwi et al. 2015;Rathna Priya et al. 2019;Rukmana et al. 2016). The health benefits of black rice are associated with its nutritional values and antioxidant components (Hu 2003;Zhang et al. 2010;Bolea et al. 2016;Suttiarporn et al. 2016;Batubara et al. 2017). Black rice has a high content of fat, protein, and crude fiber. It also contains phenolic compounds, such as p-coumaric acid, caffeine acid, and ferulic acid, protocatechuic acid, vanillic acid, and hydroxyl benzoic acid, which are responsible for the antioxidant activity (Walters and Marchesan 2011). Furthermore, black rice contains flavonoids, including anthocyanins, which play an important role in antioxidant activity (Seo et al. 2011). Cyanidin-3-O-glucoside and peonidin-3-O-glucoside are anthocyanins usually found in black rice (Figure 1). Cyanidin-3-O-glucoside is the main anthocyanins, forming to 94% of the total anthocyanins content (Chen et al. 2012;Hou et al. 2013;Hao et al. 2015;Apridamayan et al. 2017). The color of black rice is caused by anthocyanin pigments (Park et al. 2008;Lee 2010;Loypimai et al. 2016). Nowadays, phenolic-and flavonoidrich natural food have gained interest in nutrition and food science (He and Giusti 2010;Cisowska et al. 2011). These compounds possess aromatic rings having at least one hydroxyl group that can act as electron donors. Their hydroxyl group can directly engage in antioxidant action (Walter and Marchesan 2011;Khoo et al. 2017;Suryanti et al. 2020). The cultivation of black rice is very rare due to its plant height, sensitivity to the natural enemy, long harvest period, and low productivity. A mutation is often used to overcome the limitation of the crops (Harding et al. 2012;El-Degwy 2013;Marcu et al. 2013;Shao et al. 2013). It has been reported that mutation induction through irradiation improves the quality of local black rice cultivar (Hartanti et al. 2017). The grains of black rice var. Cempo Ireng irradiated with gamma-ray at a dose of 200 and 300 Gy, namely BR-200 and BR-300 results in shorten harvesting time, reduce plant height, and enhanced stress tolerance (Patmi et al. 2008). It also changes the anthocyanin content of the wild variety of black rice (Purwanto et al. 2019). However, the nutrient content, i.e. moisture, lipids, proteins, carbohydrates, and fibers contents in gammairradiated black rice var. Cempo Ireng was not significantly different from non-irradiated rice.
The nutritional value of BR-200 was slightly better than that of BR-300 (Riyatun et al. 2017). Although the antioxidant activity and chemical compounds of the black rice have been reported, studies on the antioxidant activity and chemical compounds of irradiated black rice are still limited. Therefore, this paper describes the antioxidant activity and chemical compound diversities of irradiated black rice in comparison to non-irradiated variety.
Materials
Non-irradiated black rice (BR-NI) and irradiated black rice (Oryza sativa L. cv. Cempo Ireng) were used in this research. The irradiated black rice used in this study was the third generation of gamma-irradiated black rice with doses 200 and 300 Gy (BR-200 and BR-300). 2,2diphenyl-(1-picrylhydrazyl) (DPPH) and gallic acid were purchased from Sigma Aldrich. Other analytical grade chemicals were obtained from E-Merck and used without further purification.
Sample preparation
Black rice grains (200 g) were grounded into a fine powder and macerated three times with ethanol at room temperature for 24 h. The filtrate was collected and evaporated using a rotary evaporator to get concentrated extracts.
Gas Chromatography-Mass Spectroscopy (GC-MS)
The ethanol extract of BR-NI and BR-200 was analyzed by Shimadzu QP2010S GC-MS. The GC-MS was run using EI 70 Ev ionizing type, Rtx 5 Ms column (30 m length x 0.25 mm ID), injector temperature of 300°C, column temperature of 70°C, splitless injection method, detector temperature of 300°C and the carrier gas was He with the operating pressure of 13.7 kPa.
Liquid Chromatography-Mass Spectrometry (LC-MS)
The ethanol extract of BR-NI and BR-200 was analyzed by LC-MS Waters 2489 with a UV-Vis detector. The column temperature was 35°C and the solvent used was a mixture of solution A (aqua dest: formic acid = 9:1) and solution B (aquabidest: acetonitrile: formic acid = 6:3:1) with a flow rate of 1 mL/min for 25 mins. The solvents gradient used for the initial 5 mins was 75% solution A and 25% solution B, the second 5 mins were 71% solution A and 29% solution B, the third 5 mins was 66% solution A and 34% solution B, the fourth 5 mins was 62% solution A and 38% solution B, 57 % A and 43% solution B, the last 5 mins was 100% solution B. Their absorbance was measured at 520 nm. MS analysis was used ESI ionization.
Determination of total phenolics
The total phenolics were analyzed by a modified method of Doymaz and Karasu (2018). In 5 mL volumetric flask, gallic acid (100 ppm, 1 mL) was mixed with Folin-Ciocalteu reagent (0.5 mL) and left for 1 minute. Four ml of 7.5% Na2CO3 solution was added to the mixture and left for a further 1 minute. The samples were analyzed using UV-Vis Spectroscopy (Perkin Elmer Precisely Lambda 25 UV-Vis) at 10-minute intervals until reaching equilibrium state. The same procedure was applied for obtaining the standard curves of gallic acid with concentrations of 25, 50, 75, 100 ppm. The ethanol extract of black rice (100 ppm) was analyzed for total phenolics content. Total phenolics were quantified using formula (1), where C is gallic acid concentration determined from the calibration curve (g/L), V is the volume of sample extract (L) and m is the weight of the sample extract (K). Total phenolics are expressed as milligram of gallic acid equivalents (GAE) per gram of dry.
Determination of total anthocyanins
The determination of anthocyanin content was carried out using the pH-differential method (AOAC 2005-02). Ethanol extract (0.05 g) was put into a tube and added with 4 mL of KCl buffer solution (pH 1) and 4 mL of CH3COONa buffer solution (pH 4.5). After 2 hours, the samples were filtered and their absorbance was measured at 520 nm and 700 nm wavelengths using UV-Vis Spectroscopy (Perkin Elmer Precisely Lambda 25 UV-Vis).
……… (2)
Cyanidin-3-O-glucoside, the most common pigment in nature, is selected as a standard for the evaluation of total anthocyanins contents.
Determination of antioxidant activity
The antioxidant activity was determined using DPPH scavenging radical activity by following the method of C.V m 4207 Salazar-Aranda et al. (2011). The stock solution (100 ppm) was diluted to the concentrations of 12.5, 25, 50, 75, and 100 ppm in a 5 mL volumetric flask. Vitamin E concentrations (2.5, 5, 10, 12.5, and 20 ppm) was used as a positive control. One mL of DPPH solution in methanol (100 ppm) was added to the mixture and was left for 30 minutes in a dark condition. The mixture was then analyzed at the λmax wavelength. The λmax wavelength was obtained previously by measuring the DPPH solution (100 ppm) at a wavelength of 800-400 nm. Antioxidant activity was assessed using formula (3), where A is the absorbance of ethanol as the black and B is the absorbance of the sample. The correlation between each concentration and its percentage of scavenging was plotted and the IC50 was calculated by interpolation. The IC50 value represents the concentration of antioxidants to inhibit 50% of free radicals.
RESULTS AND DISCUSSION
The ethanol extract of black rice grain was obtained in 14-15% yield as a viscous yellow oil. These extracts were analyzed for total phenolic and total anthocyanin contents ( Table 1). The total phenolic content determination was performed based on the reaction of the Folin-Ciocalteu reagent with samples. The resulting blue-colored solution intensity is proportional to the amount of phenolics present. Total anthocyanin is determined based on changes in anthocyanin structure at pH 1 and pH 4.5. The difference in absorbance between the two buffer solutions is due to the total content of monomeric anthocyanin pigments. The non-enzymatic brown pigments and polymerized anthocyanin pigments are omitted from the absorbance calculation. They do not show reversible structural transformation as a function of pH.
Antioxidant activity of the extracts was determined using DPPH method. DPPH radical is widely used as a substrate for evaluating antioxidant activity because DPPH is stable radical, and its simplicity testing and accuracy.
The color changes of DPPH solution from purple to yellow is observed when the radical is quenched by antioxidant. The DPPH radical scavenging activities of vitamin E and black rice are presented in Figure 2. The IC50 values were presented in Table 2. The lower the IC50 value, the better the antioxidant activity. The radical scavenging capacity of the black rice extracts exhibited concentration-dependent. The non-irradiated black rice (BR-NI) has the highest total phenolics and anthocyanins contents. The total phenolics content of BR-200 is higher than that of BR-300, whereas the total anthocyanins contents are lower than that of BR-300. All samples can be categorized as strong antioxidants since all samples have IC50 values less than 50 ppm. Remarkably, both the irradiated black rice have lower IC50 values than that of the non-irradiated black rice. In this case, BR-200 has lower IC50 values than that of BR-300, indicating that the BR-200 has a higher antioxidant activity than the BR-300. Furthermore, the BR-NI and BR-200 were subjected to GC-MS and LC-MS analysis to identify the chemical constituents. The GC spectra of BR-NI and BR-200 revealed 13 and 18 peaks, respectively (Figures 3 and 4). The retardation time (Rt) and compounds of BR-NI and BR-200 detected by GC-MS are presented in Table 3. The LC spectra of BR-NI and BR-200 displayed 26 and 23 peaks, respectively (Figures 5 and 6). The retardation time (Rt) and mass-to-charge ratio (m/z) of compounds of BR-NI and BR-200 eluded by LC-MS are presented in Tables 3 and 4, respectively.
Discussion
The results of GC-MS analysis of BR-NI and BR-200 showed that non-irradiated and gamma-irradiated black rice extracts have the same 4 major components, i.e. hexadecanoic acid; 9,12-octadecadienoic acid; 2-methyl-, 2-(dimethylamino) ethyl ester and cyclohexaneethanamine (Table 3). Several compounds in BR-200 were not detected in BR-NI, which are classified as terpenoids, such as geyrene, caryophyllene, selinene, and cadinene. Decanoic acid was also only be found in BR-200.
Among the identified compounds in BR-NI, the phenolics and flavonoids are responsible for antioxidant activity. Methyl-4-hydroxy-3-methoxybenzoate or methyl vanillate present in cow's milk and beer and are known to have antioxidant activity (Khan 2019). 4-Vinylphenol or 4hydroxy styrene is also found in beer and wine having a hydroxy group at position 4 (Fulcrand, 1996). Tectochrysin is a flavone substituted by a hydroxy group at position 4 and a methoxy group at position 7 respectively. Tectochrysin is a major compound in propolis and has been known to have antioxidant activity that can inhibit the growth of human colon cancer cells (Park at al. 2015). Cyanidin-3-O-glucoside is an anthocyanidin which is a reddish-purple pigment in fruits and vegetables, and as the main pigment in red-colored vegetables and berries (Khoo et al. 2017). It possesses a good antioxidant activity for radical scavenging capacity against superoxide but not hydroxyl radicals (Olivas-Aguirre et al. 2016, Stintzing et al. 2002). These findings are in line with previous studies that black rice contains phenolics and flavonoids, including anthocyanins (Apridamayan et al. 2017, Hao et al. 2015.
In BR-200, compounds that may have antioxidant activity are terpenoids, furan, and flavonoids. Geyrene, caryophyllene, selinene, and cadinene are terpenoids that are usually extracted from volatile oils. These terpenoids are known to have good antioxidant activity (Kawaree and Chowwanapoonpoh 2009). 5-hydroxymethylfurfural is a furan which is substituted at positions 2 and 5 by formyl and hydroxymethyl substituents, respectively. It is not found in fresh foods, but it is naturally formed in sugarcontaining foods during drying or cooking (Arribas-Lorenzo and Morales 2010). It has a hydroxyl group that is responsible for antioxidant activity. 7-Hydroxy-3-methoxy-2-p-methoxyphenyl-4H-chromen-4-one is an aromatic heteropolycyclic compound that belongs to the methylated flavonoids, classified as a flavonoid lipid molecule and found in beans. It is also known as a hydroxy flavone and its hydroxyl group is responsible for its antioxidant activity. Peonidin-3-O-glucoside is anthocyanin, a type of flavonoid, which is a natural pigment in fruits and vegetables (Khoo et al. 2017). Several hydroxyl groups present in peonidin-3-O-glucoside, therefore it has powerful antioxidant activities, in terms of the free radical scavenger. It also lowers the metastasis of lung cancer cells and suppresses tumor cell growth (Baea et al. 2015, Jaclyn and Abdel-Aal 2010, Sun et al. 2018. Non-irradiated and irradiated black rice are categorized as strong antioxidants activity due to the presence of simple phenolics and flavonoids. These compounds can deactivate free radicals due to their ability to donate hydrogen atoms to free radicals. It is noted that IC50 of BR-200 is lower by 32% compared to the control one, demonstrating that antioxidant activity of irradiated black rice was better than non-irradiated black rice. Surprisingly, phenolic and anthocyanins contents of BR-200 are lower in comparison to BR-NI. Mutation by gamma irradiation has changed the chemical composition of black rice. Although BR-NI and BR-200 have similar metabolite secondary classes, they have different chemical constituents. Gamma-irradiated black rice BR 200 contains peonidin-3-O-glucoside, while BR-NI contains cyanidin-3-O-glucoside. The irradiated black rice BR 200 also contains terpenoids, which are not discovered in BR-NI, that increase its antioxidant activity. In BR-200, the synergistic effect of simple phenolics, flavonoids, and terpenoids improves its antioxidant activity. The irradiation enhanced the antioxidant activity and changed the chemical composition of the nonirradiated black. These findings are in line with previously published results that the mutations change antioxidant activity and chemical contents (Purwanto et al. 2019). | 2020-09-10T10:24:19.277Z | 2020-08-24T00:00:00.000 | {
"year": 2020,
"sha1": "cfe79987041e3725f2ecf80a4f66d5ccdc600422",
"oa_license": "CCBYNCSA",
"oa_url": "https://smujo.id/biodiv/article/download/6093/4210",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "7f08ded2b10940989e49c84e16e4c2202f4e892d",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
16739565 | pes2o/s2orc | v3-fos-license | Specific patterns of PIWI-interacting small noncoding RNA expression in dysplastic liver nodules and hepatocellular carcinoma
Hepatocellular carcinoma (HCC) is the result of a stepwise process, often beginning with development within a cirrhotic liver of premalignant lesions, morphologically characterized by low- (LGDN) and high-grade (HGDN) dysplastic nodules. PIWI-interacting RNAs (piRNAs) are small noncoding RNAs (sncRNAs), 23–35 nucleotide-long, exerting epigenetic and post-transcriptional regulation of gene expression. Recently the PIWI-piRNA pathway, best characterized in germline cells, has been identified also in somatic tissues, including stem and cancer cells, where it influences key cellular processes. Small RNA sequencing was applied to search for liver piRNAs and to profile their expression patterns in cirrhotic nodules (CNs), LGDN, HGDN, early HCC and progressed HCC (pHCC), analyzing 55 samples (14 CN, 9 LGDN, 6 HGDN, 6 eHCC and 20 pHCC) from 17 patients, aiming at identifying possible relationships between these sncRNAs and liver carcinogenesis. We identified a 125 piRNA expression signature that characterize HCC from matched CNs, correlating also to microvascular invasion in HCC. Functional analysis of the predicted RNA targets of deregulated piRNAs indicates that these can target key signaling pathways involved in hepatocarcinogenesis and HCC progression, thereby affecting their activity. Interestingly, 24 piRNAs showed specific expression patterns in dysplastic nodules, respect to cirrhotic liver and/or pHCC. The results demonstrate that the PIWI-piRNA pathway is active in human liver, where it represents a new player in the molecular events that characterize hepatocarcinogenesis, from early stages to pHCC. Furthermore, they suggest that piRNAs might be new disease biomarkers, useful for differential diagnosis of dysplastic and neoplastic liver lesions.
INTRODUCTION
Hepatocellular carcinoma is the sixth most prevalent cancer and the third most frequent cause of cancer death [1]. In Europe, more than 90% hepatocellular carcinomas (HCCs) develop on a cirrhotic background, due to chronic hepatitis B or C infection, alcohol abuse or metabolic syndrome [2]. Human hepatocarcinogenesis is a multi-step process characterized by different nodular lesions, currently classified as low-(LGDN) and highgrade (HGDN) dysplastic nodules, early HCC (eHCC) and progressed HCC (pHCC), depending on the degree of cytological or architectural atypia [3]. The multistep nature of human hepatocarcinogenesis has long been suggested, and convincingly demonstrated by a recent molecular study showing a progressive increase in the rate of mutations of the telomerase reverse transcriptase (TERT) gene promoter from cirrhosis (no mutation) to LGDN (6%), HGDN (19%), eHCC (61%), small pHCC (42%) to advanced HCC (64%) [4]. Progressive overexpression of tumoral biomarkers in the sequence: dysplastic nodule-eHCC-pHCC further supports the multistep origin of liver cancer in cirrhosis [5]. Although many factors, including chromosomal anomalies, genetic and epigenetic alterations contribute to HCC onset and progression [6], the relevant molecular mechanisms remain largely unclear.
piRNAs comprise a large family of small (23-35 nucleotides [nt]), single stranded noncoding RNAs that bind to PIWI proteins, forming a piRNA-induced silencing complex (piRISC). PIWI proteins have been discovered in D. melanogaster germline tissues, but their presence has been recently reported also in mammalian somatic cells, including human cancers [7]. piRNAs can be grouped in four classes, according to their origin and function: 1) repeat-associated piRNAs, derived from intergenic loci (piRNA clusters), that are enriched in transposon fragments; 2) mRNA-derived piRNAs; 3) long noncoding RNA-derived piRNAs and 4) the worm specific 21U RNAs [8]. These sncRNAs are often transcribed as long (up to 200 kb), singlestranded primary precursors, processed in a Dicerindependent manner [9,10] to mature piRNAs through still not fully understood mechanisms. Two main piRNA biogenesis pathways have been described in germline cells: the primary synthesis and the 'ping-pong' amplification mechanisms [11]. Mature piRNAs derived from piRNAs-precursors via primary processing, show a very strong preference for Uridine (U) at the 5ʹ end and no nucleotide bias at position 10. Those derived via secondary processing show, instead, a bias for Adenine (A) at position 10 and no 5ʹ end bias. Somatic cells do not use the latter amplification pathway and thus are likely to contain only primary piRNAs [12]. Furthermore, piRNAlikes have been identified and characterized by sequence analysis of expressed small RNAs in somatic tissues [13,14], including rat liver [15].
The best-established biological role of piRNAs is inhibition of transposon mobilization by both epigenetic and post-transcriptional silencing [16,17], but recent finding indicates their involvement in mRNA degradation in somatic cells [18,19], acting in this case like microRNAs.
Given their regulatory role in control of genome stability and in epigenetic and post-transcriptional regulation of gene expression, it is not surprising that piRNAs has been found specifically expressed in several human neoplasms, including cervical [20], gastric [21][22][23], breast [24,25], pancreatic [26], bladder [27] and endometrial [28] cancer and myeloma [29]. Recently Martinez et al. [30] demonstrated that a set of piRNAs is deregulated in many cancer types, and proposed that these might represent a core gene-set that facilitates cancer growth, while piRNAs unique to individual cancer types could contribute to cancer-specific biology. Notably, however, no information is available to date on piRNA expression in HCC and during liver carcinogenesis. Applying small RNA sequencing (smallRNA-Seq) we found that piRNAs are abundant in human liver, where the expression pattern of 125 of them clearly differentiates cirrhotic from HCC tissues. Interestingly, 24 piRNAs deregulated in advanced HCC show distinctive expression patterns also in earlier hepatic lesions, suggesting that these sncRNAs may participate to the carcinogenic process in this organ and could represent new markers of early hepatocarcinogenic lesions.
The piRNA expression pattern in liver distinguishes cirrhotic and tumor tissues
The PIWI subfamily of Argonaute proteins comprises four human members (PIWIL1/HIWI, PIWIL2/ HILI, PIWIL3 and PIWIL4/HIWI2) [31], all initially found in testis. HIWI and HILI have been recently found highly expressed in a variety of human cancers [7], but little is known on their presence and expression in HCC. Therefore, we first investigated their expression in a cohort of 50 HCCs and matched normal liver tissue samples, available from The Cancer Genome Atlas (TCGA, http:// cancergenome.nih.gov/). PIWIL1 and PIWIL2 mRNAs expression resulted altered in tumor tissues, confirming previous observations in HCC [32]. Furthermore, altered expression of several genes of the pathway, including DDX4, HENMT1, MAEL, PDL6, PRMT5, TDRD1, TDRD6, TDRD9, TDRKH and WDR77, suggesting that the piRNA pathway is functional in liver and its activity is modified in the liver cancer (Supplementary Figure S1).
To identify and quantitate piRNAs, we then performed smallRNA-Seq of RNAs extracted from CN (n = 14) and HCC (n = 20) samples from 17 patients (Supplementary Table S1). Sequencing resulted in ~35 million reads/sample, with ~5% corresponding to known piRNAs (Supplementary Table S2), demonstrating that this class of small RNAs is indeed present in this tissue, as shown earlier in regenerating rat liver [15]. A total of 601 and 753 piRNAs were identified in cirrhotic and HCC samples, respectively. Interestingly, 81% of the total piRNA repertoire in liver was identified with few sequence reads, to indicate that these RNAs are expressed at a very low level, similarly to what generally observed for other regulatory sncRNAs, in particular microRNAs. Filtering out cases with a very low read counts (see Supplementary Materials and Methods), 197 piRNAs were selected and used for further analysis ( Figure 1A and Supplementary Table S3). Notably, 41 piRNAs displayed a very high level of expression in at least one sample type (average read count ≥ 5,000 rpm, Figure 1A and Supplementary Tables 3A) and constituted ~90% of total liver piRNome. Interestingly, these RNAs were found highly expressed also in normal human liver (TCGA datasets, data not shown).
Liver piRNAs resulted distributed among all chromosomes except for the 21, with a preference for chromosome 6, that comprise 107 mapped piRNA loci ( Figure 1B, Supplementary Figure S2A). Many liver piRNAs have multiple origins in the genome, with a maximum of 28 different locations for a single piRNA (hsa_piR_018596). Unlike piRNAs previously described in germline, but consistent with what observed in other somatic tissues [15,25,30], less than 5% liver piRNAs mapped to known human piRNA clusters, a result suggesting that the mechanism(s) driving their expression in hepatocytes is distinct from that of germline cells. Table S3B). Of note, a sizeable fraction (78 out of 197) of the liver piRNAs identified here derived from protein coding, snoRNA and long non-coding RNA (lncRNAs) genes and from pseudogenes.
A comparative analysis of piRNA expression between cirrhosis and HCC showed significant differences between the two conditions ( Figure 1A, 1B). Applying a stringent statistical analysis to search for differences in piRNAs expression, in tumors respect to matched non-malignant tissues, we identified the signature of 58 piRNAs shown in Figure 1C, including 16 belonging to the group of 41 highly expressed ones (average read count ≥ 5,000 rpm). Hierarchical clustering revealed a clear segregation of the samples in two major clusters, separating almost completely all CNs from HCCs. All the piRNAs included in the signature resulted differentially expressed in cancerous respect to cirrhotic liver (Fold Change, |FC| ≥ 1.5, False Discovery Rate, FDR ≤ 0.05); considering the median read counts within each group of samples, 34 were found overexpressed and 24 underexpressed in HCC samples (Supplementary Table S4). The HCC clade ( Figure 1C) includes two recognizable sub-clusters, characterized by different piRNA expression levels. Considering FC variations in the two individuals HCC groups, respect to the cirrhotic one, it is interesting to notice that: for overexpressed piRNAs, 30 out of 34 molecules showed higher expression in clade B compared to A, while for underexpressed piRNAs, 20 out of 24 showed a more pronounced down-regulation in group A compared to B (Supplementary Table S4). Considering the biological processes associated to mRNAs targeted by the differentially expressed piRNAs, microvascular invasion, a histologic feature of HCC aggressiveness, scored more frequently in clade B (5 cases out of 9) as compared to A (1 case out of 9) (P < 0.04), suggesting that piRNAs could be involved in control of angiogenic processes in liver cancers.
Interestingly, many piRNAs distinguishing malignant form non-malignant liver tissues have been previously described with a similar behavior also in other cancers, specifically in gastric cancer [22,23], myeloma [29], breast cancer [25], renal carcinoma [33], endometrial cancer [28], and pancreatic cancer [26] (see details in Supplementary Table S5). Furthermore, 22 of these (marked by orange arrows in Figure 1C) are in common with piRNAs recently found deregulated in multiples cancer types [30]. When combined, these results suggest that the piRNA signature identified here highlight involvement of these sncRNAs in liver cancer biology.
Identification of novel liver piRNA-likes deregulated during hepatocarcinogenesis
SmallRNA-Seq in both cirrhotic and cancerous nodules revealed that ~25% of the reads aligning to the genome do not match to any annotated human RNA (Supplementary Table S2), a result in agreement with the notion that a sizeable fraction of existing RNAs is still uncharacterized. Previous works demonstrated that in silico prediction tools [13] allowed to identify new, nonannotated, piRNAs that were then experimentally validated and found to exert important biological activities [14]. Non annotated 21-35 nt long reads from all smallRNA-Seq datasets of CN and HCC samples, generated as described above and first analyzed for known piRNAs, were thus searched for novel piRNAs as described by Mei et al. [14]. Considering that, by definition, canonical piRNAs are PIWI protein-interacting RNAs and piRNAs present in piRNABank (http://pirnabank.ibab.ac.in/) were all originally discovered in germline cells due to their ability to bind PIWI proteins, the new somatic piRNAs discovered here were named piR_LLi (piRNA-Like in Liver), to avoid confusion. We identified 879 piR_LLis in CN and 646 in HCC samples. Likewise known liver piRNAs, several piR_LLis (71%) showed with very low read counts. Filtering for low copy molecules was www.impactjournals.com/oncotarget thus applied and the top 359 piR_LLis were selected (Figure 2A and Supplementary Table S6) and further analyzed. These showed sequence features similar to those of known somatic piRNAs, but distinct from germline ones, including in particular a 5ʹ-U bias and no preference for A in position 10 ( Figure 2B), implying that their biogenesis occurs through the primary pathway described above. Considering their genomic location, piR_LLis show a very limited overlap with known germline piRNA clusters (< 5%), much like liver piRNAs, a preference for chromosomes 1 and 6, and, of note, 19 of them map to the mitochondrial chromosome (Supplementary Table S6B). Finally, piR_LLis originate preferentially from intragenic regions (53%) and predominately (99%) from the transcribed strand of introns ( Figure 2C, Supplementary Figure S2B and Supplementary Table S6B) piR_LLis showed a broad range of expression, with few of them expressed at very high levels (14 with > 10,000 rpm in at least one tissue type), and 108 exclusively expressed in HCCs (Figure 2A and Supplementary Table S6). To identified differences in piR_ LLis expression between cirrhosis and HCC, we again applied the Wilcoxon Mann-Whitney test (|FC| ≥ 1.5, FDR ≤ 0.05), obtaining a 67 RNA signature of differently expressed piR_LLis ( Figure 2D and Supplementary Table S7), including 6 of mitochondrial origin (highlighted in pink in Figure 2D). Hierarchical clustering revealed an almost complete segregation of the matched HCC-cirrhotic liver samples ( Figure 2D). Notably, 65 piR_LLis resulted overexpressed and only 2 underexpressed in tumor respect to non-malignant tissues, suggesting a global increase in piRNA-like transcription in tumor tissues.
In silico identification of HCC-responsive piRNA and piRNA-like target-RNAs piRNAs were first shown to function in posttranscriptional regulation of transposon expression, inducing rapid and effective degradation of their transcripts [34], but they have been shown to form piRISC and induce mRNA deadenylation and decay via imperfect base-pairing in mouse elongating spermatids [18] and Drosophila embryo [19], inducing mRNA degradation by a mechanism that closely resembles that of microRNAs. To gain insight on the molecular processes in which piRNAs are involved in liver cancer cells, we searched for the mRNA-targets of known piRNAs and novel piR_ LLis found differentially expressed in HCC respect to non-malignant tissues ( Figure 1C and 2D), as described by Zhang et al. [35]. Results led to a set of 4.085 target-RNAs, including pseudogene transcripts and lncRNAs (Supplementary Figure S3A), some of which showing multiple binding sites for the same piRNA, while others binding up to 4 distinct piRNAs. Interestingly, the piRNA binding sites mapped to different regions of transcripts: 492 in 5ʹUTR, 612 in CDS and 2.981 in 3ʹUTR (Supplementary Figure S3A), similarly to what previously observed [15,25,28].
Ingenuity Pathway Analysis tool (IPA: www. ingenuity.com) was used to identify cellular pathways targeted by HCC-deregulated piRNAs. Results showed that the deregulated piRNAs identified here can target multiple signaling pathways, including death and TNF receptors, HIPPO, p53, PI3K/AKT, WNT/β-catenin, GADD45, AMPK, HMGB1 and PTEN pathways (Supplementary Figure S3B and Supplementary Table S8), that control, among others, cell cycle regulation, telomerase activity, protein ubiquitination, DNA methylation and apoptosis, all functions compromised in HCC. Indeed, the piRNA-targeted pathways affect such key processes as cell proliferation and death, angiogenesis, invasion, and metastasis, all known to be deregulated in HCC [3,5,6]. This was confirmed by downstream effects analysis (by IPA Molecule Activity Predictor tool), used to assess the overall effects (biological trends) of deregulated liver piRNAs on pathway activity. As an example, in Supplementary Figure S3C are summarized the predicted effects of the piRNAs targeting, respectively, the death receptor and PTEN pathways, that thereby influence cell proliferation (enhanced) and death (inhibited). These results support the possibility that the sncRNAs identified here represent a new class of regulators in liver cancer cells.
piRNA and piRNA-like RNAs are deregulated during the early stages of hepatocarcinogenesis
Human hepatocarcinogenesis is a long, stepwise process more often arising in a chronically altered hepatic microenvironment and proceeding from intermediate dysplastic lesions to eHCC and, ultimately, pHCC [36]. The sequential morphological lesions recognized during human hepatocarcinogenesis [37] are each characterized by distinctive clinicopathological features, with little molecular determinants known. A detailed knowledge of the early molecular changes occurring during liver carcinogenesis is thus much sought after, as it may provide means to help recognize the dysplastic nodules committed to malignant transformation, as well as early malignant lesions likely endowed with a more aggressive behavior. We thus performed smallRNA-Seq also in LGDN (n = 9), HGDN (n = 6) and eHCC (n = 6) samples (Supplementary Table S1), and investigated in detail piRNA and piR_LLi expression patterns by comparative analyses with respect to both CNs and pHCCs. This allowed identification of 15 piRNAs and 10 piR_LLis showing significant differences in expression among the groups of samples considered (Figure 3 and Supplementary Figure S4), providing clues of existing relationships with the carcinogenic process. Indeed, 7 piRNAs showed higher expression in CNs, that decreased significantly already in LGDNs and remained low across all the other pathological www.impactjournals.com/oncotarget stages analyzed, up to pHCC. In contrast, 6 piRNAs and 8 piR_LLi displayed an opposite behavior, with very low or absent expression in cirrhotic liver and significant increases at all stages of malignant transformation. On the other hand, hsa_piR_020498 and piR_LLi_30552 were overexpressed from HGDN to progressed HCC, being almost undetectable in CN and LGDN, suggesting that they are involved only in later stages of the process. Finally, piR_LLi_24894 was detected only in LGDNs, while hsa_piR_013306 resulted up-regulated only in HCCs (Figure 3 and Supplementary Figure S4). These last results suggest that some piRNAs might be involved in stage-specific processes. When combined, these evidences indicate that dysplastic lesions are characterized by selective piRNA deregulation, which distinguishes them from cirrhotic nodules and, more often, is maintained up to pHCC.
DISCUSSION
Understanding the molecular events leading to malignant transformation is required to unravel the genetic path of liver carcinogenesis, a necessary prerequisite to identify early diagnostic and prognostic markers and novel therapeutic targets. In recent years, several studies focused on the role of noncoding RNAs in hepatocarcinogenesis, given the pivotal role of these molecules in establishment of complex physiological and pathological phenotypes, including cancer [review in 38]. Such studies provided valuable knowledge on microRNA and lncRNA involvement in liver carcinogenesis. In this study, we investigated in detail the behavior of a new family of regulatory RNAs, piRNAs, in 14 cirrhotic and 20 matched HCC samples and thereby identified > 700 known piRNAs and 900 novel piRNA-likes expressed in human liver, indicating that PIWI-piRNAs system is active in benign and neoplastic tissues of this organ. The piRNAs identified in liver show a strong preference for Uridine at the 5th position and lack any Adenine bias at nucleotide 10, suggesting that their synthesis is likely to occur through the primary piRNA biosynthetic pathway, as previously shown in other somatic tissues (9,15). Stringent analyses revealed a well-defined molecular signature, comprising 125 piRNAs, that discriminates HCCs from to cirrhotic liver ( Figures 1C and 2D), providing a clear demonstration of piRNA deregulation in liver carcinogenesis. These results are supported by results obtained from a large TCGA cohort, demonstrating that many genes of piRNAs pathways are dys-regulated in HCC (Supplementary Figure S1). When combined, these results indicate presence of the piRNA pathway in human liver and its modulation during neoplastic transformation. This result is in line with those suggesting the potential of PIWI proteins as cancer diagnostic and prognostic markers [7], including a positive correlation between HIWI expression and liver tumor size and metastasis, matched by a negative one with survival rate [39][40][41]. Furthermore, MAEL, a key gene in the piRNA biogenesis pathways, has been recognized as oncogene for its role in HCC development and progression [42]. Finally, considering the known association between molecular events occurring during liver regeneration and neoplastic transformation [43][44], we recently demonstrated a significant reprogramming of the piRNome in an experimental model of liver regeneration post-hepatectomy, with proliferationresponsive piRNAs able to target key cellular processes implicated in organ regeneration [15]. By homology comparison between human and rat piRNAs, we found that 16 of the 72 piRNAs differentially expressed during rat liver regeneration are also expressed in human liver and 3 of them (hsa_piR_017724, hsa_piR_020829, hsa_piR_004309) are differentially expressed between CN and HCC in the present study, two of which (hsa_ piR_017724 and hsa_piR_020829) are also deregulated already in dysplastic nodules, suggesting that these RNAs could be involved in hepatocyte proliferation. Altogether, these findings suggest that piRNAs represent new players in liver proliferation and carcinogenesis. In line with this possibility, prediction and functional analysis of the RNA targeted by piRNAs aberrantly expressed in HCC revealed a nonrandom, significant correlation with several major oncogenic pathways that are deregulated in HCC, including death receptor, Hippo, PI3K/AKT, Wnt/βcatenin, the p53 and PTEN pathways [3,5,6]. The piRNA targets include tumor suppressors, oncogenes, growth factors, cell cycle regulators, effectors of apoptosis and angiogenesis and cell adhesion molecules involved in cell-cell interaction (Supplementary Table S8), all known to take an active part in liver carcinogenesis and tumor progression. These results, although to be considered preliminary in the absence of rigorous experimental validation, support the possibility that the liver piRNAs discovered here target key cellular pathways in normal and transformed hepatocytes. In line with this view, the sncRNA expression patterns identified match to specific clinicopathological characteristics of HCC, since based on piRNA expression the tumor samples cluster in two different groups ( Figure 2D), with group B comprising tumors characterized by a more advanced stage of the disease and increased angiogenic potential. This last process is known to be activated early during carcinogenesis and to be important for tumor growth and metastatic potential [45]. Interestingly, hsa_piR_00823, one of the piRNAs differentially up-regulated in HCCs (FC 1,9 and 5,7 for clade A and B, respectively: Figure 1C and Supplementary Table S4), has been involved in regulation of de novo DNA methylation and angiogenesis in multiple myeloma, where its inhibition reduced VEGF secretion by cancer cells [29].
HCC is typically preceded by appearance of nonmalignant liver nodules, that frequently contain one or more microscopic transformed foci, suggesting that dysplastic nodules, especially HGDNs, could be viewed as the earliest HCC precursors [46][47][48]. Due to the arduous histological distinction of pre-cancerous lesions from welldifferentiated HCCs, we evaluated whether piRNAs were also differentially expressed in the different classes of nodules. To this end, we analyzed piRNA expression also a series of dysplastic nodules (9 LGDN, 6 HGDN) and eHCC (6) from the same patient series (Supplementary Table S1), and compared piRNome expression profiles both among these lesions and respect to cirrhotic and HCC nodules from the same patients. Results reveal how specific piRNA expression patterns mark known stages of the carcinogenic pathway (summarized in Figure 4): while high piR_LLi_24894 is a feature of low-grade lesions only, significant accumulation of piR_LLi_30552 and hsa_ piR_020498 occurs from HGDNs up to pHCCs (Figure 4 and Supplementary Figure S4), and hsa_piR_013306 accumulates only in HCC. On the other hand, most of the changes found in HCC occur already in LGDNs, the earliest known stage of hepatocarcinogenesis investigated here. When combined, all these results strongly support the possibility that piRNAs are directly or indirectly involved in the spectrum of changes that characterize the hepatic carcinogenic process.
Comparing the results obtained here in HCC with previous work in other cancers, we observed that several piRNAs are similarly affected by cell transformation. In particular, 31 out of the 58 piRNAs of the HCC signature of Figure 1C were found aberrantly expressed in a comparable way in other cancer types, when malignant tissues were compared to non-malignant ones [22,25,26,[28][29][30]33], suggesting that a set of these sncRNAs may be involved in common steps of the carcinogenic pathway, independently of the cell of origin. Indeed, Martinez et al. [30] reported that the piRNome exhibits specific pancancer, as well as tumor-type specific, expression patterns.
It is noteworthy to mention that a very high fraction (~80%) of the liver piRNA repertoire identified here is expressed at a very low level, so that these RNAs are likely to be present, on average, in less than one molecule/cell. We cannot exclude that they exert biological functions, as each of them could be expressed in only one or few cell subtypes among those present in the tissues analyzed, that comprise inflammatory and immune cells, stroma, vasculature and, of note, cancer stem cells. The molecular resolution of the analytical methods applied here (whole tissue analysis) is not sufficient to investigate this aspect, but the technological advances of single cell transcriptome sequencing [49] made now possible to address it. In this respect, it is worth mentioning that piRNAs are known to be involved in stem cell regulation, and PIWI proteins are expressed in normal and cancer stem cells.
In conclusion, we identified a piRNA expression signature specific of progressed liver cancers that distinguishes it from non-malignant liver. This finding, combined with the observation of a progressive deregulation of the liver piRNome during carcinogenesis, suggests that members of this novel family of small results of piRNA and piRNA-like expression changes detected at different steps of human hepatocarcinogenesis. Deregulation of some of these molecules in dysplasia or HCC respect to cirrhosis supports the hypothesis that deregulation of these molecules may be implicated in tumor onset. A potential involvement of these small RNAs in malignant transformation and tumor progression is further suggested by specific changes in their expression confined to HGDN, eHCC and pHCC, or only in overt cancerous lesions. The green and red boxes indicate, respectively, piRNAs down-and up-regulated in HCC samples respect to paired non cancer tissues. www.impactjournals.com/oncotarget regulatory RNAs are likely to play a role in malignant transformation of hepatocytes. Collectively, these results indicate that piRNAs represent a new family of regulatory and effector molecules involved in liver carcinogenesis, whose better understanding will help shed light in HCC pathogenesis and is exploitable to better characterize dysplastic and neoplastic liver lesions.
Samples collection
Resected specimens from 17 patients, each with multiple hepatocellular nodules (HN) well representative of different steps of human hepatocarcinogenesis, were included in this study. After proper identification of the hepatocellular nodule on the H/E section, the lesion was manually microdissected from sequential, matched 10 µm paraffin-embedded sections. These tissue samples harbored 61 HN (mean: 3,5 HN/patient; range 2-6 patient) as follows: 17 cirrhotic nodules (CN), 9 LGDN, 6 HGDN, 6 eHCC, and 23 pHCC. Clinical and pathological features of the series are illustrated in Supplementary Table S1. Of these 61 nodules, 55 had sufficient material for a complete morphological characterization and molecular analysis, namely 14 CN, 9 LGDN, 6 HGDN, 6 eHCC and 20 pHCC.
RNA purification and small RNA sequencing
Total RNA was extracted from FFPE sections of human livers using RNeasy FFPE Kit (QIAGEN GmbH, Hilden, Germany) in duplicates and quantitated with NanoDrop-1000 spectrophotometer (Thermo Fisher Scientific, Cinisello Balsamo, Italy). For smallRNAseq, 1μg of total RNA/samples was used for library preparation with Illumina TruSeq smallRNA sample preparation Kit and each library (8pM) was sequenced on HiSeq2500 (Illumina) for 50 cycles at Genomix4life (www.genomix4life.com).
Bioinformatics analysis
The sequencing reads from each samples were processed using iMir [50] to detect piRNAs expression using piRNABank annotation [51] adding piR-HEP1 sequence [32]. Other non-coding RNA reads mapping were discarded in further analysis, briefly, human miRNA annotated in miRBase (v21) and other ncRNAs (rRNA, tRNA, snoRNA) annotated in UCSC were downloaded and integrated into a comprehensive dataset for annotated non-coding RNAs.
The reads failing to match known piRNAs and other sncRNAs, with lengths between 25 and 35 nt, were filtered using a piRNA prediction tool [13]. The retained reads were mapped to human genome (hg19) using Bowtie v0.12.9 [52] to infer the genomic coordinates, then BEDTools suite v2.23.0 [53] and custom python scripts were used to recognize the piRNA-LLi loci and to calculate the read coverage.
The expression values of piRNAs and piRNA-LLi were represented like read per million (RPM) values making them comparable across samples and to filter out low expression molecules filtered.data function of R package NOISeq [54] was used.
Differential expression analysis between different tissues were performed with R using Wilcoxon Mann-Whitney test (α < 0.05). piRNAs were considered differentially expressed when showing absolute Fold Change (FC) ≥ 1.5 and significance false discovery rate (FDR) ≤0.05.
In order to find putative mRNAs target, piRNAs and piRNA-LLi sequences were cut from second nucleotide with length twenty and mapped to reverse complement of RefSeq annotation (release 70) using Bowtie [52] with three miss-match like previous described in Zhang et al [35].
Unsupervised hierarchical clustering of samples based on expression profiles of selected piRNAs was performed using Ward's agglomeration method operated on Kendall rank correlation coefficient.
The functional analyses were generated with IPA (Ingenuity ® Systems, www.ingenuity.com) to identify the canonical pathways associated with putative mRNAs targeted by piRNAs. The Molecule Activity Predictor (MAP) tool in IPA was used to simulate the downstream effects of activation or inhibition of molecules in the pathway considering an inverted correlation between piRNAs and their direct putative targets. | 2018-04-03T05:23:55.905Z | 2016-07-13T00:00:00.000 | {
"year": 2016,
"sha1": "3e4c1a493feda7d33d554db688a87dcc706ccd04",
"oa_license": "CCBY",
"oa_url": "http://www.oncotarget.com/index.php?journal=oncotarget&op=download&page=article&path[]=10567&path[]=33391",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e4c1a493feda7d33d554db688a87dcc706ccd04",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
233310126 | pes2o/s2orc | v3-fos-license | The role of IL-6 and IL-6 blockade in COVID-19
ABSTRACT Introduction Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) induces a dysregulated hyperinflammatory response. Areas covered Authors review evidence on IL-6 and IL-6 blockade in coronavirus disease 2019 (COVID-19) and discuss the pathophysiological and prognostic roles of this cytokine and the clinical impact of pharmacological blockade of IL-6 . The material includes original articles and reviews published from March 2020 to March 2021 and searched on PubMed, medRxiv, and bioRxiv. Expert opinion IL-6 is one of the most prominent pro-inflammatory cytokines. Increased levels are recorded in COVID-19 patients, especially those with severe-to-critical disease. Evidence is accumulating on the relevance of IL-6 as a prognostic marker in COVID-19. Since IL-6 is a druggable target for several inflammatory diseases, pharmacological blockers of the IL-6 signaling pathway were repurposed to blunt the abnormal SARS-CoV-2-induced cytokine release. Data are limited to few randomized controlled trials that reported encouraging, though not conclusive, results, indicating the usefulness of IL-6 blockade early in the course of the disease in patients with hyperinflammation and no or limited organ damage. Further research is warranted to explore the role of IL-6 in different COVID-19 phenotypes and identify subgroups of patients who may mostly benefit from IL-6 pathway inhibition.
Introduction
Coronavirus disease 2019 , caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), presents with a mild form of upper respiratory infection or no symptoms at all in the majority of individuals [1]. A subgroup of patients, however, may progress to severe and critical disease experiencing acute hypoxic respiratory failure, acute respiratory distress syndrome (ARDS), multi-organ failure and not infrequently death [2]. SARS-CoV-2 induces a dysregulated hyperinflammatory response in later stages as the virus was found to infect monocytes, macrophages, and dendritic cells (DCs) that increase the secretion of pro-inflammatory cytokines including interleukin-6 (IL-6) [2].
IL-6 is one of the most important pro-inflammatory cytokines [3] and was discovered in late '80s [4]. After its molecular cloning, IL-6 was found to be identical to other proteins with different functions, indicating its pleiotropic nature. The IL-6/ IL-6 receptor (IL-6 R) axis is known to be involved in the pathophysiology of many diseases and its inhibition has been already proven to be beneficial in rheumatoid arthritis, Castleman disease, and the cytokine release syndrome following chimeric antigen receptor (CAR) T cell therapy, among others [5].
In COVID-19, IL-6 is believed to drive multi-organ injury, the most severe form of the illness [6,7] (Figure 1). To this end, IL-6 blockade was postulated to help reducing the inflammatory burden of COVID-19 in the setting of a cytokine storm [8] and improve the clinical status of patients [9,10]. In this narrative review, we summarize basic concepts about IL-6 biology and currently approved therapeutic indications for IL-6 blockade. Then, we discuss in detail the relevance of IL-6 in the pathophysiology of COVID-19 along with its prognostic implications. The safety and efficacy of IL-6 pathway inhibition in COVID-19 is also extensively covered. Finally, we provide future perspectives about the role of IL-6 based on contemporary evidence.
Search criteria
The material for this review includes original articles and reviews published over the past year (from March 2020 to March 2021) and searched on PubMed through the following search terms or their combination: 'IL-6', 'SARS-CoV-2', 'COVID-19', 'cytokine storm', 'cytokine release syndrome', 'IL-6 blockade', 'IL-6 inhibitors', 'tocilizumab', 'sarilumab', 'siltuximab', and 'outcomes'. We considered only English language papers. Additional articles identified from the reference list of the searched articles and from medRxiv and bioRxiv (free online Figure 1. SARS-CoV-2 induces a deregulated, hyperinflammatory response mediating organ injury. After binding its receptor -angiotensin-converting enzyme 2 (ACE2) -SARS-CoV-2 enters type II pneumocytes and replicates. Following viral invasion, macrophages, neutrophils, and dendritic cells activate to capture SARS-CoV-2. Damaged cells release pathogen-associated molecular patterns (PAMPs) stimulating further recruitment of immune cells, that in turn release a large amount of pro-inflammatory cytokines, including IL-6. These mediators are responsible for the increased permeability of alveolar vessels and further immune cell recruitment to the site of infection, thus sustaining the positive, hyperinflammatory loop. Because of lung vessel permeability, SARS-CoV-2 can spread to other organs rich in ACE2, such as kidney, intestine, and pancreas, explaining clinical manifestations other than respiratory ones. Legend binder (Gab)/mitogen-activated protein kinase (MAPK) and phosphoinositide-3 kinase (PI3K) intracellular signaling pathways [13][14][15]. Only few cells, such as macrophages, neutrophils, cluster of differentiation (CD)4 + T cells, podocytes, and hepatocytes express IL-6 R on their cell surface and can activate this classic signaling pathway [13].
A soluble form of the IL-6 R (sIL-6 R) is also found in body fluids, such as blood and urine. sIL-6 R derives from Both pictures are adapted from Rose-John S, 'Interleukin-6 signalling in health and disease' [11], that is an open access article distributed under the terms of the Creative Commons Attribution License. the cleavage of IL-6 R by the membrane-bound a disintegrin and metalloprotease 17 (ADAM17) [16]. An alternative mechanism for sIL-6 R generation is mediated by the translation of a differentially spliced IL-6 R messenger ribonucleic acid (mRNA) lacking the transmembrane and cytosolic domains [17]. sIL-6 R binds IL-6 with the same affinity of the membrane receptor, and the IL-6/sIL-6 R complex activates gp130, that is ubiquitously expressed on cell surface, and can induce signaling also on those cells lacking the membrane-bound IL-6 R [18]. This kind of gp130 activation, termed IL-6 trans-signaling pathway, allows modulation of a broad spectrum of target cells [19,20] (Figure 2A).
In human plasma, a soluble form of gp130 (sgp130) can be retrieved and derives primarily from alternative splicing rather than from proteolysis. sgp130 can interact with the IL-6/sIL-6 R complex and such a property makes sgp130 a very specific inhibitor of IL-6 trans-signaling pathway without affecting classical signaling [21].
These complex regulatory mechanisms enable IL-6 to play a wide plethora of biological activities mainly depending on distinct effector cells and activated signaling cascades. In particular, the classic signaling pathway appears to impact vital and regulatory functions of cells presenting IL-6 R on their surface (e.g., neutrophils, naïve T cells, and hepatocytes), while the trans-signaling pathway is a driver of dysregulated inflammatory responses potentially affecting all cells ( Figure 2B).
By focusing on the effects on inflammation and immunity, IL-6 can promote the differentiation of naïve CD4 + T cells (linking innate and acquired immunity), the T helper (Th)17 differentiation, the T follicular helper cell differentiation, IL-21 production (regulating immunoglobulin synthesis), the differentiation of CD8 + T cells into cytotoxic T cells, the differentiation of activated B cells into antibody-secreting plasma cells, macrophage activation, bone marrow megakaryocyte maturation, and inhibition of transforming growth factor (TGF)-βinduced regulatory T cell differentiation [5,22]. IL-6 can also induce vascular endothelial growth factor production, that increases vascular permeability and angiogenesis and stimulates receptor activator of nuclear factor κ beta (RANKL) to differentiate and activate osteoclasts [22].
The activation of IL-6 pathways is responsible to induce hepatocytes to synthesize and release acute-phase proteins (C-reactive protein [CRP], serum amyloid A, fibrinogen, haptoglobin, and α1-antichymotrypsin) and reduce the synthesis of fibronectin, albumin, and transferrin [23]. IL-6 is also important for the induction of hepcidin during inflammation, that finally leads to the typical hypoferremia of inflammation [24].
In light of its pleiotropic effects, IL-6 signaling pathway has become an attractive druggable target to blunt the inflammatory signaling that contributes to the pathogenesis of several diseases.
Current indications for IL-6 blockade
Several clinical trials explored the potential benefits of IL-6 inhibition on systemic symptoms of inflammatory diseases.
The first IL-6 pathway inhibitor was the humanized monoclonal antibody tocilizumab, which binds to both sIL-6 R and membrane IL-6 R, thus preventing the formation of the complex with IL-6 and signal transmission [25]. Sarilumab is another human monoclonal antibody targeting both sIL-6 R and membrane IL-6 R [26]. The positive results of clinical trials testing IL-6 R blockers for the treatment of inflammatory diseases led to the production of several other antibodies directly targeting IL-6 (i.e. siltuximab and sirukumab, olokizumab, and clazakizumab).
The use of IL-6 inhibitors has been extensively evaluated in a number of randomized clinical trials (RCTs) in patients with rheumatoid arthritis [27] (Table 1). Tocilizumab was shown to improve signs and symptoms of the disease, radiological progression, and patients' quality of life [28][29][30][31][32][33][34][35][36][37][38][39][40][41][42]. It is approved for rheumatoid arthritis treatment in combination with methotrexate, and is the drug of choice as monotherapy in patients unable to be treated with methotrexate [43]. Tocilizumab is currently used to treat systemic-onset juvenile idiopathic arthritis [44,45] and adult-onset Still's disease [46], whereas no benefit has been found in the treatment of ankylosing spondylitis [47]. Conflicting results came from studies on systemic lupus erythematosus, with tocilizumab that seemed promising in a phase I study [48]. Tocilizumab is the first drug approved for giant cell arteritis other than glucocorticoids [49,50]. In Japan, this drug was approved also for Takayasu arteritis, although the primary efficacy endpoint was not met in the phase III study [51]. Potential benefits in lung function decline were reported in patients with systemic sclerosis treated with tocilizumab [52]. More recently, based on retrospective analyses of pooled data from prospective clinical trials of CAR T cell therapies, tocilizumab was approved for the treatment of severe or lifethreatening CAR T-cell-induced cytokine release syndrome [53].
Sarilumab also obtained the approval for the treatment of rheumatoid arthritis based on clinical trials showing its favorable efficacy and safety [59][60][61].
The effectiveness of sirukumab was proven in patients with active rheumatoid arthritis refractory to disease-modifying anti-rheumatic drugs (DMARDs) [62,63], despite not being superior to adalimumab [64]. RCTs with sirukumab failed to prove the efficacy of IL-6 inhibition in systemic lupus erythematosus [65,66].
Olokizumab also gave promising results for the treatment of patients with moderate-to-severe rheumatoid arthritis for whom methotrexate was inadequate [67][68][69].
Clazakizumab was shown to be effective for the treatment of musculoskeletal manifestations in patients with psoriatic arthritis, but no improvements in skin disease were observed [70].
Both siltuximab and tocilizumab are approved for the treatment of multicentric Castleman disease [71,72] and are indicated for unresectable unicentric Castleman disease with inflammatory symptoms [73], although limited data exist for the unicentric form [74].
General information about the current use of IL-6 inhibitors can be found in Table 1.
Pathophysiological role of IL-6 in COVID-19related dysregulated cytokine response
According to the clinical−therapeutic staging of COVID-19 [75], IL-6 plays a pivotal role in the third stage, that is characterized by an abnormal systemic hyperinflammatory response. The dysregulated cytokine release is clinically responsible for severe COVID-19, whose main marker is abnormal IL-6 levels [76,77]. IL-6 is produced by a subset of highly inflammatory macrophages [78], but not by alveolar macrophages, which are low or absent in the bronchoalveolar fluid of severely ill patients [79]. Of interest, while IL-6 absence at early phases of a viral infection was shown to depress T follicular helper cell maturation thus reducing antiviral response [80], COVID-19 patients admitted to the intensive care unit (ICU) show a negative correlation between IL-6 and other cytokines, as well as CD4 + and CD8 + T cells [81]. This indicates that aberrant IL-6 production has a negative impact on adaptive immunity.
Rheumatoid arthritis
Tocilizumab • Improvement in signs and symptoms in patients with active RA without increase in incidence of AEs [28][29][30][31][32][33][34], even on long-term follow-up [35,36] • Improvement in signs and symptoms in patients with active RA who had inadequate response to TNF antagonists [37] and DMARDs [38] • Reduction in radiographic disease progression [39] • Superiority of tocilizumab compared with methotrexate in sign and symptom reduction in patients with active RA [40] • Superiority of tocilizumab compared with adalimumab in sign and symptom reduction in patients with severe RA [41,42] Sarilumab • Suppression of joint damage biomarkers [59] • Improvement of signs and symptoms in patients with active RA without safety issues [60] • Superiority of sarilumab compared with adalimumab in sign and symptom reduction in patients with active RA who have to discontinue methotrexate [61] Systemic juvenile idiopathic arthritis It appears now clear that patients with higher levels of inflammatory mediators experience worse outcomes during SARS-CoV-2 infection [86,87]. In particular, COVID-19 patients progressing to ARDS showed increased concentrations of IL-6, IL-1β, and tumor necrosis factor (TNF)-α [82]. This abnormal increase in cytokine levels, described as a cytokine storm, is responsible for an exaggerated activation of the immune system that, in turn, promotes further production of cytokines and chemokines. Importantly, dysregulated inflammation seems to be associated with abnormalities in the coagulation cascade, finally leading to immunothrombotic processes [88], which are also responsible for organ damage. SARS-CoV-2 infects preferentially type II pneumocytes and alveolar macrophages within the lungs [89,90]. Recently, Patra et al. showed that the SARS-CoV-2 spike protein can trigger an angiotensin II type 1 (AT1) receptor-mediated signaling cascade, finally increasing IL-6 release, which is down-regulated by the AT1 receptor antagonist candesartan [91]. In the lungs, the virus replicates and alters the lung epithelial layer thus entering underlying tissues, where immune cells -namely neutrophils, macrophages, and dendritic cells (DCs) -capture the pathogen [7]. In lung samples of patients who died because of COVID-19-related ARDS, SARS-CoV-2 was found to trigger the activation of the NACHT, LRR, and PYD domainscontaining protein 3 (NLRP3) inflammasome in monocytes [92,93,155] leading to the production and release of IL-1β, which is upstream of IL-6.
Injured pneumocytes can release danger-associated molecular patterns (DAMPs) and pathogen-associated molecular patterns (PAMPs), that trigger the activation of the lung epithelium and resident immune cells [90,94]. Activation of neutrophils and macrophages, antigen presentation by DCs, and local SARS-CoV-2 replication lead to increased production of inflammatory cytokines, especially IL-6, IL-1β, and TNF-α, finally contributing to organ damage, especially the lungs, as this uncontrolled inflammatory response is able to selfpropagate [95]. Indeed, a correlation between IL-6 and viral load was described, with the latter being associated with ARDS severity and lung damage [96]. Finally, during infections, increased vessel permeability allows immune cell infiltration and viral spread, followed by the release of inflammatory mediators, such as IL-6, that exacerbate the hyperinflammatory environment [97].
Lymphopenia is commonly observed among COVID-19 patients and correlates with disease severity [98]. Injured spleen and lymph nodes and increased IL-6 levels, by inducing lymphocyte apoptosis, are likely to play a role in the development of lymphopenia [98,99]. IL-6 was also shown to affect the lymphoid function through a marked reduction in human leukocyte antigen D related (HLA-DR) expression coupled by an important depletion of CD4 + lymphocytes, CD19 + lymphocytes, and natural killer (NK) cells [100].
IL-6 is also involved in the COVID-19-associated coagulopathy [101] as it is known to interfere with the coagulation cascade through the generation of tissue factor and thrombin [102,103], to stimulate platelet activity, and induce endothelial dysfunction [104]. With this regard, tocilizumab seems to improve the hypercoagulable state in COVID-19 patients, irrespective of prophylactic or therapeutic dose of anticoagulant therapy, and was associated with a parallel improvement in respiratory function [105]. Recently, Canzano et al. provided evidence that COVID-19 coagulopathy may be supported by diffuse cell activation mediated by tissue factor produced by platelets, granulocytes, and microvesicles on the common background of endothelial dysfunction, with all of these events strongly sustained by increased levels of IL-6. Indeed, IL-6 blockade with tocilizumab and antiplatelet drugs (aspirin or P2Y12 inhibitors) were found beneficial in blunting these effects [106].
IL-6 as a prognostic marker in COVID-19
Coronaviruses including SARS-CoV-2 have the ability to induce, in a subgroup of infected subjects, an excessive and dysregulated immune response which appears to be crucial in the progression of disease [8,76].
Initial evidence from postmortem analyses of fatal COVID-19 cases due to refractory ARDS revealed diffuse pulmonary interstitial mononuclear inflammatory infiltrates, predominantly lymphocytic and perivascular, associated with overactivation of cytotoxic Tcells and high concentrations of cytotoxic mediators, whose local or systemic biological activity is believed to substantially contribute to organ damage and to the development of severe forms of COVID-19 [107,108]. Similarly, a large number of early reports from China describing the immunological profile of severe COVID-19 patients demonstrated the hyperactivation of the humoral immune pathways [86,96,[109][110][111][112]. As evidence is mounting , it became clear that, among a large number of biomarkers examined, IL-6 plays an essential role in COVID-19, being mechanistically implicated in disease pathogenesis and clinically associated with prognosis [6,7]. Notably, IL-6 has been already proven to be a valuable biomarker in a wide spectrum of diseases including pneumonia of other etiologies, and is frequently employed in clinical practice worldwide [23,113,114].
Salient observations by Chen et al. showed that increased baseline IL-6 was closely associated with vital signs and the detection of serum SARS-CoV-2 RNAemia, which appears to be a distinctive feature of critical disease [96]. Furthermore, critically ill patients displayed almost 10-fold higher IL-6 levels compared with severe patients, and all fatal cases exhibited extremely elevated IL-6 values [96]. In the early phase of the pandemic, these and other similar findings [86,96,109,111,112,115] contributed to link the immunological features of severe-to-critical COVID-19 to those of cytokine storm syndromes [8,116]. Increased baseline IL-6 was also positively associated with other inflammatory biomarkers, such as CRP, lactate dehydrogenase (LDH), ferritin, and D-dimer, and chest computed tomography (CT) findings [112]. Likewise, lower IL-6 levels were found in patients recovering from COVID-19 with improved findings on chest CT scan, whereas IL-6 was further increased during disease reexacerbation [112]. Furthermore, analysis of bronchoalveolar lavage of patients with COVID-19 revealed that IL-6 was significantly higher in ICU patients compared with non-ICU patients, thus providing further evidence of the local (pulmonary), besides systemic (serum), involvement of IL-6 [94].
A large number of meta-analyses confirmed the relevance of IL-6 as a prognostic marker in COVID-19 [117][118][119][120][121]. With regard to disease severity, meta-analyses found that, despite considerable heterogeneity, systemic levels of IL-6 were significantly higher in severe COVID-19 patients compared with non-severe cases [117][118][119][120][121]. IL-6 appeared to be able to discriminate disease severity across the clinical spectrum of COVID-19, as patients with progressively worse disease displayed proportional increases in IL-6 levels [117]. A meta-analysis from Zhu et al. found that IL-6 concentration of patients with mild COVID-19 was on average 24.49 pg/mL, lower than that of severely ill patients, whose IL-6 was on average 30.66 pg/mL, which was lower than that of critically ill patients [117]. Compared with non-complicated COVID-19, mean IL-6 concentrations were almost 3-fold higher in complicated cases, including severe-tocritical patients and those developing ARDS or requiring ICU admission, with the results being consistent when restricting the analysis to patients requiring or not ICU admission [119]. Furthermore, a study by Herold et al. found that IL-6 was highly predictive of the need for invasive ventilation, with IL-6 cutoff >35 pg/mL at hospital admission, and maximal values >80 pg/ mL [77]. Of note, elevated IL-6 levels in the course of disease predicted respiratory failure significantly earlier than CRP did (23.2 vs 15.7 hours) [77].
Besides reflecting disease severity, IL-6 levels also appear to predict fatality risk from COVID-19. A number of studies have consistently reported that, compared with survivors, nonsurvivors displayed markedly increased IL-6 values, with mean between-group differences ranging from 41.32 to 59.88 pg/mL [117,118,121]. Interestingly, Laguna-Goya et al. developed an IL-6-based mortality risk model for hospitalized patients with COVID-19 including, besides IL-6, peripheral blood oxygen saturation to fraction of inspired oxygen ratio (SpO 2 /FiO 2 ), neutrophil-to-lymphocyte ratio, LDH, and age, that showed high accuracy for the prediction of fatality [122].
Data herein reported demonstrate that circulating levels of IL-6 are closely associated with clinical outcomes and survival rates in patients with COVID-19. Monitoring of IL-6 dynamic changes, together with other biomarkers such as CRP and D-dimer, may therefore be advisable, since it anticipates the clinical evolution of disease serving as an early warning indicator and allowing physicians to adequately manage patients who might benefit from early treatment escalation (i.e. anticytokine strategies). It is, however, worth considering that, when employing IL-6 levels to make clinical decisions, these should be interpreted with caution as they might be influenced by multiple confounding factors, such as age, comorbidities, treatments, and genetic polymorphisms [123].
Elevated concentrations of inflammatory cytokines, including IL-6, observed in severe-to-critical COVID-19 have spurred comparisons with other syndromes of critical illness that are associated with increased cytokine concentrations [8]. A study by Leisman et al. found that elevations of IL-6 are markedly lower in patients with COVID-19 than those reported in patients with ARDS unrelated to COVID-19, sepsis and CAR T cell therapy-induced cytokine release syndrome, thus highlighting the need for a deeper understanding of the pathobiology of COVID-19 [124].
Given the central role of IL-6 in the pathogenesis of COVID-19 and its role as a prognostic biomarker, IL-6 signaling pathway inhibition was identified as an attractive therapeutic strategy in COVID-19 and has been evaluated for the treatment of COVID-19 by a large number of studies.
Shortly after SARS-CoV-2 outbreak, based on evidence showing a key involvement of IL-6 in the pathobiology of COVID-19, physicians in China initiated the off-label use of tocilizumab in seek for an urgent effective treatment. After an initial 21-patient retrospective cohort study by Xu et al. reporting on the potential clinical benefits of intravenous tocilizumab (4-8 mg/kg) in severe-to-critical COVID-19 patients [126], tocilizumab has been extensively administered either on a compassionate-use basis or within research settings worldwide. In the earlier phases of the pandemic, tocilizumab has become the preferred anti-inflammatory therapy for patients with rapidly progressing acute respiratory failure and hyperinflammation. A large number of observational studies in patients at different stages of COVID-19 described the potential usefulness of both intravenous and subcutaneous tocilizumab in quenching the hyperinflammatory response, as shown by a rapid and sustained reduction of inflammatory biomarkers, including CRP, ferritin and D-dimer, which was paralleled by an improvement in gas exchanges, as shown by significant increases in pressure of arterial oxygen to fractional inspired oxygen concentration (PaO 2 /FiO 2 ) [127][128][129][130][131][132]. Despite considerable heterogeneity in tocilizumab administration (route, dose, timing), lack of standardized background therapeutic schemes, and limitations due to the observational nature of these studies, changes in inflammation-and oxygenation-related parameters observed after tocilizumab treatment seemed to be associated with a trend toward reduced risk for clinical worsening, assessed by decreased need for mechanical ventilation or death [127][128][129][130][131][132]. In contrast, other observational studies reported limited efficacy of tocilizumab in COVID-19 [133,134]. Interestingly, a prospective cohort study with 138 patients by Masiá et al. found tocilizumab did not impair the viral specific antibody response, and the observed delayed viral clearance was likely associated with initially higher viral loads, thus supporting the safety of IL-6 blockade [135]. Cumulatively, meta-analyses of observational studies with low-certainty evidence found that the addition of tocilizumab to standard-of-care was associated with a lower risk of ICU admission, use of ventilation, and mortality across all degrees of COVID-19 severity, without significantly increasing the risk for infections or adverse events, such as elevation of transaminases or neutropenia [136,137]. These promising data accumulating from observational studies prompted the initiation of a large number of RCTs testing tocilizumab for COVID-19. Nevertheless, the initial controlled experiences with tocilizumab failed, showing that tocilizumab was marginally or not effective for the treatment of severe-to-critical COVID-19 [138]. In the small open-label RCT-TCZ-COVID-19 trial involving 126 hospitalized patients with COVID-19 and PaO 2 /FiO 2 between 200 and 300 mmHg, tocilizumab (8 mg/kg up to a maximum of 800 mg intravenous infusion, followed by a second infusion after 12 hours) had no benefit on disease progression compared with standard-of-care, leading to the premature interruption of the study [139]. In the BACC Bay Tocilizumab trial that enrolled 243 non-mechanically ventilated patients with hyperinflammation, a single 8 mg/kg tocilizumab dose, while showing a good safety profile, was not effective in preventing intubation or death [140]. In the larger double-blind COVACTA trial that randomized 452 patients to receive tocilizumab (8 mg/kg) or placebo, a shorter time to hospital discharge (20 vs. 28 days, P = 0.037) and a reduced duration of the ICU stay (9.8 vs. 15.5 days, P = 0.045) were found in patients treated with tocilizumab compared with placebo [141]. In the cohortembedded, open-label, Bayesian randomized CORIMUNO-19 trial of hospitalized patients with COVID-19 pneumonia requiring oxygen support but not admitted to ICU, tocilizumab (single intravenous infusion of 8 mg/kg, with a possible second dose, if clinically indicated) significantly reduced the risk of mechanical ventilation or death by day 14, however no difference on day-28 mortality was found [142]. A similar signal for efficacy of tocilizumab in COVID-19 was observed in the randomized, double-blind, placebo-controlled EMPACTA trial that included a group of 389 racial and ethnic minority patients who were not receiving mechanical ventilation [143]. In this study, treatment with tocilizumab (one or two 8 mg/kg intravenous doses) was safe and significantly reduced the likelihood of progression to the composite outcome of mechanical ventilation or death, while it showed no effect on survival compared with placebo [143].
Recently, very encouraging results from two large RCTs have been released [144,145]. In the multifactorial, adaptive platform REMAP-CAP trial, critically ill subjects with COVID-19 requiring oxygen support in ICU were randomized to receive either tocilizumab (dose of 8 mg/kg, 353 patients), sarilumab (dose of 400 mg, 48 patients) or standard-of-care (402 patients). Administration of one of the IL-6 inhibitors markedly improved outcomes including days free from organ support (10, 11 and 0 days for tocilizumab, sarilumab and control, respectively) and in-hospital mortality (28%, 22.2% and 35.8% for tocilizumab, sarilumab and control, respectively) [144]. The open-label, platform RECOVERY trial randomized 4,116 patients receiving supplemental oxygen (82%), noninvasive respiratory support (41%), or invasive mechanical ventilation (14%), and evidence of hyperinflammation (CRP ≥75 mg/ L) to receive tocilizumab (400-800 mg intravenously, with a second possible dose 12-24 hours later, if clinically indicated) plus standard-of-care, or standard-of-care alone [145]. Overall, 29% vs. 33% of subjects died within 28 days (rate ratio 0.86; 95% confidence interval [CI] 0.77-0.96; p = 0.007), with benefits observed regardless of the level of respiratory support [145]. Of note, in patients receiving concomitant systemic glucocorticoid treatment at randomization (82%), a clear effect on mortality was observed, showing that the benefits of tocilizumab were additional to those of glucocorticoids [145]. Furthermore, patients treated with tocilizumab were more likely to be discharged alive from hospital by day 28 (54% vs 47%; rate ratio 1.22; 95% CI 1-12-1.34; p < 0.0001) [145]. In summary, tocilizumab in patients with severe COVID-19 pneumonia was shown to be safe and reduce progression to mechanical ventilation [143,145] and death [145], although not all studies showed the same benefits.
Sarilumab
Sarilumab is a fully human monoclonal antibody that antagonizes both soluble and membrane-bound IL-6 R [125]. 125 With the exhaustion of tocilizumab supplies, sarilumab was repurposed for the treatment of COVID-19 due to its shared mechanism of action with tocilizumab. In parallel with encouraging data with tocilizumab, initial uncontrolled experiences with sarilumab also yielded promising expectations leading to the initiation of multiple phase II/III RCTs (NCT04324073, NCT04386239, NCT04357808, NCT04322773). For instance, in a small prospective single-center case-series from Italy, seven out of eight patients treated with sarilumab, administered as three single intravenous infusions of 400, 200 and 200 mg respectively at 24, 48 and 96 hours after hospital admission, showed an improvement in inflammatory biomarkers, pulmonary ultrasound score and oxygenation, and patients were discharged within 7 days [146]. Moreover, sarilumab (400 mg on day one, with a possible second infusion in case of unchanged or worsened clinical status) showed a favorable efficacy and safety profile among 53 severe-tocritical patients, leading to resolution of COVID-19 pneumonia in 83% of cases (89.7% and 64.3% for patients treated in medical wards and ICU, respectively) [147]. However, in an open-label cohort study of 28 COVID-19 patients not on invasive mechanical ventilation, a single sarilumab infusion of 400 mg did not significantly improve survival when compared with matched controls, despite being associated with faster recovery (10 vs. 24 days) in a subset of patients with limited lung consolidations at baseline [148]. Likewise, in a larger cohort of 255 patients, treatment with either tocilizumab (400 mg) or sarilumab (200 mg) was associated with better clinical outcomes in patients with lower oxygen requirements, although the study lacked a control group and the mortality of patients treated with the IL-6 R inhibitor was comparable with the overall mortality of the local New York city area [149].
Controlled evidence with sarilumab is currently limited to disappointing results from two RCTs. In the first trial, sarilumab administration was not associated with a statistically significant difference in clinical outcomes, although there was a beneficial effect in clinical improvement and mortality among mechanically ventilated patients. This, however, was counterbalanced by a detrimental effect in non-mechanically ventilated patients, leading to the early interruption of the study and the cancellation of the originally planned extension trial testing higher sarilumab dosage (800 mg) [150]. In a separate trial enrolling 420 critical COVID-19 patients, sarilumab treatment (200 or 400 mg) was associated with a positive signal, but it did not reach statistical significance [151]. In REMAP-CAP trial, sarilumab was found to increase organ support-free days compared with control (11, [152]. In summary, sarilumab showed a favorable effect on survival in patients with severe COVID-19 pneumonia in one RCT, while it was neutral in two other trials.
Siltuximab
Siltuximab is a chimeric monoclonal antibody that prevents IL-6 from binding to its receptors and inhibits the biological activity of IL-6 [125]. 125 Siltuximab has been recently deemed of interest for the treatment of COVID-19. Although there are no published studies supporting the use of siltuximab for COVID-19, findings from a non-peer reviewed report from Italy suggest that siltuximab, administered intravenously at a dose of 11 mg/kg to 30 patients requiring noninvasive mechanical ventilation, induced a rapid and sustained decline in CRP, was well-tolerated, and associated with a significantly lower mortality rate compared to standard-of-care alone [153]. A multicenter Belgian RCT (NCT04330638) comparing siltuximab (11 mg/kg intravenously, alone or in combination with the IL-1 blocker anakinra) to other cytokine inhibitors (anakinra and tocilizumab, alone or in combination) has just completed the enrollment, with results being expected soon [154].
Conclusion
Following the initial emphasis on the role of IL-6 and the cytokine storm in driving the clinical manifestations of SARS-CoV-2 infection, it is now apparent that IL-6 is an important mediator in COVID-19 but should be mostly regarded as a marker of disease severity. Indeed, increased levels of IL-6 in COVID-19 were found to have a negative prognostic role toward adverse outcomes, especially need for mechanical ventilation and death. In light of this, IL-6 pathway blockade was evaluated in several RCTs with overall encouraging results. In particular, in two RCTs -the EMPACTA and RECOVERY trials -testing IL-6 R blockers, tocilizumab and sarilumab reduced the risk for mechanical ventilation and death [143,145]. It is, however, important to carefully interpret results from these trials in order to adequately select the best candidates. In particular, patients with early signs of hyperinflammation and minimal or no evidence of organ damage may mostly benefit of IL-6 pathway inhibition, whereas IL-6 pathway inhibition later in the course of disease (e.g. when ARDS or multi-organ dysfunction is present) may result in limited clinical benefit. Adequately designed and powered studies are urgently needed to identify specific subgroups of COVID-19 patients who may mostly benefit from cytokine inhibition, as well as the optimal timing of IL-6 R inhibitor administration and early predictors of treatment success or failure. It is now recognized that SARS-CoV-2 may activate a wealth of redundant, inflammatory pathways including the IL-1 pathway [8,92,93,155], that could provide a kind of escape mechanism from IL-6 blockade. Therefore, whether the co-administration of other immunomodulatory agents (e.g., glucocorticoids, IL-1 blockers, Janus kinase 2 [JAK2] inhibitors) synergically maximize the benefits of IL-6 pathway blockade still remains to be determined. Finally, whether the inhibition of the abnormal cytokine response would be beneficial in reducing long-term effects in patients recovering from severe forms of COVID-19 is not known and certainly deserves further investigation. Based on available evidence, lL-6 blockade should be carefully considered for those patients presenting with an early hyperinflammatory phenotype and acute respiratory failure but without substantial organ damage.
Expert opinion
The role of IL-6 in COVID-19 was largely investigated over the past months, coming to the conclusion that it is an important mediator of the clinical manifestations of patients progressing toward a moderate-to-severe COVID-19 [5,82,156,157]. With this in mind, early during the SARS- . IL-6 blockade in clinical practice. IL-6 is known to take part to inflammatory events, immune responses, and acute-phase reactions in a wealth of inflammatory diseases. The binding of IL-6 to IL-6 R and/or sIL-6 R is responsible for the activation of the intracellular classical and trans-signaling pathways. IL-6 has a role in the pathogenesis of rheumatoid arthritis, Castleman disease, systemic juvenile idiopathic arthritis, giant cell arteritis, Takayasu arteritis, and cytokine release syndrome following CAR T cell therapy, just to cite the most important ones. IL-6 blockade has become a druggable target consisting in the pharmacological inhibition of IL-6 directly or through its binding to IL-6 R and/or sIL-6 R. Currently available drugs targeting either IL-6 or its receptor are presented in the picture.
The picture is adapted from Rose-John S, 'Interleukin-6 signalling in health and disease' [11], that is an open access article distributed under the terms of the Creative Commons Attribution License.
CoV-2 pandemic, IL-6 blockade using monoclonal antibodies was investigated with potential benefits described in several observational or cohort studies [105,127,131,[158][159][160][161]. However, some contrasting data from RCTs with the IL-6 R blocker tocilizumab have raised questions on the real usefulness of these drugs and posed some questions about the real role played by IL-6 in severe COVID-19 [10,139,140,142,143,162,163]. So, why is it difficult to draw definitive conclusions based on currently available data?
In the middle of a pandemic rapidly spreading all around the world, a class of drugs, namely the IL-6 pathway blockers, was repurposed in the attempt to reduce as much as possible poor outcomes, such as progression to mechanical ventilation and death. However, previously described trials incorporate considerable heterogeneity (e.g. different severity of respiratory failure and hyperinflammation, different background therapies). In addition, it should be considered that there are still some difficulties in defining the optimal therapeutic window for administering immunomodulatory drugs. It is generally recognized that cytokine inhibitors, when administered too early in the course of disease, may harm, however late administration may result in blunted clinical benefits, since organ damage has already occurred and several cytokines may be involved in the pathogenesis at that stage. These caveats are worth being further investigated by designing trials addressing these clinical knowledge gaps.
Another point to consider is the adequacy of IL-6 as the sole marker to be considered when choosing to administer an IL-6 blocker. Accumulating evidence is likely to suggest that IL-6 alone should not be considered, but other more specific mediators, such as PaO 2 , CRP, ferritin, and D-dimer, might also be helpful in identifying patients progressing toward severe-to -critical COVID-19 who can benefit more from these therapies [164]. While initially similarities between the cytokine storm occurring in patients receiving CAR T cell therapy and that occurring in patients with SARS-CoV-2 infection were postulated [165], the impact of the presumed cytokine storm in COVID-19 has been now downsized to an abnormal or eventually dysregulated cytokine response when compared with other conditions leading to elevated IL-6 levels [124,166,167,168]. With this regard, Kox et al. compared cytokine levels, including IL-6, of patients with COVID-19-related ARDS with those of patients with septic shock and out-ofhospital cardiac arrest and found that IL-6 was lower in COVID-19 than in other conditions [167], probably depicting a globally lower disease severity in spite of a severe pulmonary injury. In a recent systematic review and meta-analysis [124], levels of IL-6 in severe-to-critical COVID-19 were compared with those in ARDS, CAR T cell therapy-associated cytokine release syndrome, and sepsis and were found to be extremely lower of at least 30 to nearly 100 times (mean 36.7 pg/mL, 95% CI 21. Reproduced with permission from Leisman DE, 'Cytokine elevation in severe and critical COVID-19: a rapid systematic review, meta-analysis, and comparison with other inflammatory syndromes' [124]. In sum, individuals with hyperinflammation (marked by increased levels of IL-6, CRP, ferritin, and D-dimer) should benefit from IL-6 pathway blockers early in the disease course, when no organ injury (marked by lactate dehydrogenase) still exists. Whether cytokine blockade upstream of IL-6 (i.e. IL-1 blockade) is beneficial remains yet to be proven, as no RCTs directly comparing IL-6 and IL-1 inhibitors are currently available. As the activation of NLRP3 signaling pathway was demonstrated in COVID-19 [92,93,155], IL-1 blockade might be theoretically superior than IL-6 blockade alone, probably because the upstream inhibition of IL-1 may lead to a downstream reduction of IL-6 and, possibly, other inflammatory cytokines [168]. Collectively, these aspects need to be urgently addressed in order to unravel the actual role of different anti-cytokine treatments in COVID-19 and maximize clinical benefits of such therapeutic strategies.
Funding
This paper was not funded.
Declaration of interest
A Vecchié received a travel grant from Kiniksa Pharmaceuticals Ltd. to attend the 2019 AHA Scientific Sessions and currently receives honoraria from Effetti s.r.l. (Milan, Italy) to collaborate on the medical website www.inflam mology.org. A Abbate has served as a consultant for Astra Zeneca, Cromos Pharma, Eli Lilly, Effetti, Janssen, Kiniksa Pharmaceuticals Ltd., Merck, Novartis, Olatec, and Serpin Pharma. A Bonaventura received a travel grant from Kiniksa Pharmaceuticals Ltd. to attend the 2019 AHA Scientific Sessions and currently receives honoraria from Effetti s.r.l. (Milan, Italy) to collaborate on the medical website www.inflammology.org. The authors have no other relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript apart from those disclosed.
Reviewer disclosures
Peer reviewers on this manuscript have no relevant financial or other relationships to disclose. | 2021-04-21T06:16:51.050Z | 2021-04-20T00:00:00.000 | {
"year": 2021,
"sha1": "8106dfaad852905aaa0b29117219a5c1111db735",
"oa_license": "CCBY",
"oa_url": "https://www.tandfonline.com/doi/pdf/10.1080/1744666X.2021.1919086?needAccess=true",
"oa_status": "HYBRID",
"pdf_src": "TaylorAndFrancis",
"pdf_hash": "77c8ba4f3d40ecfc342fac48aded5ddf08c8a610",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
269164861 | pes2o/s2orc | v3-fos-license | Cow Behavior Recognition Based on Wearable Nose Rings
Simple Summary Currently, smart devices for cows on the market are mainly leg rings and collars, but the behavioral data provided by these devices do not well reflect the real behavior of cows. Therefore, a cow-behavior-detection device based on a wearable device is proposed. It is a set of equipment that collects data on the daily behavior (eating, rumination, other behavior) of cows, which can help people monitor the health status of cows more accurately. This paper proposes for the first time an electronic device worn on the nose of a cow to record real-time behavioral data of the cow. Through these data, the daily behavior of cows can be analyzed, such as the time spent eating and ruminating that day, the number of rumination chews, etc., which can better help farm managers understand the health status of cows, reduce the occurrence of diseases, and thereby improve the overall health of the cows’ welfare. The wearing position of the device has no adverse effects on the normal life of the cows and is suitable for long-term wearing. This equipment helps improve the welfare of dairy cows and has far-reaching value for the dairy farming industry. Abstract This study introduces a novel device designed to monitor dairy cow behavior, with a particular focus on feeding, rumination, and other behaviors. This study investigates the association between the cow behaviors and acceleration data collected using a three-axis, nose-mounted accelerometer, as well as the feasibility of improving the behavioral classification accuracy through machine learning. A total of 11 cows were used. We utilized three-axis acceleration sensors that were fixed to the cow’s nose, and these devices provided detailed and unique data corresponding to their activity; in particular, a recorder was installed on each nasal device to obtain acceleration data, which were then used to calculate activity levels and changes. In addition, we visually observed the behavior of the cattle. The characteristic acceleration values during feeding, rumination, and other behavior were recorded; there were significant differences in the activity levels and changes between different behaviors. The results indicated that the nose ring device had the potential to accurately differentiate between eating and rumination behaviors, thus providing an effective method for the early detection of health problems and cattle management. The eating, rumination, and other behaviors of cows were classified with high accuracy using the machine learning technique, which can be used to calculate the activity levels and changes in cattle based on the data obtained from the nose-mounted, three-axis accelerometer.
Introduction
Abnormal behavior in cows may indicate problems related to their physiological health; as such, the use of automated sensors that record cow behavioral data has become increasingly important.This has prompted the development of a novel device, which is designed to be attached to the cow's nose for accurate behavior data collection.Certain physiological behaviors and reduced sleep may be caused by inflammation in dairy cows [1].
There exists widespread threats to the welfare of grazing ruminants, which may come from factors such as gastrointestinal upsets caused by feed or a sudden drop in temperature, which can cause illness, and such health issues can manifest in observable changes in behavior, such as reduced feed intake, altered rumination patterns, or increased lethargy, which serve as early indicators of potential welfare concerns [2].These threats are often reflected in the behavior of ruminants.For example, calves with respiratory disease may present abnormal lying and standing behaviors [3], and calves with diarrhea will lie down and be inactive for longer periods of time [4].There is also a certain correlation between the lying behavior of cows and their postpartum health [5].For most, other than the managers of large-scale breeding programs, it is impossible to pay attention to the behavior of cows for a long period of time.In response to this problem, certain dairy cow physiological behaviors are studied and analyzed in this study.
Systematic studies conducted on cow behavior thus far can be roughly divided into two types: those involving video surveillance and deep learning and those utilizing wearable sensors.Most studies on cow behavior have used cameras to collect cow image information and identify cow behavior.Fewer studies have used sensors to collect cow behavior information and perform deep learning processing, and those that do exist are still in their infancy.
Recent advancements in the field of agricultural technology have seen the application of machine learning techniques to analyze and understand the behavior of cows in a detailed manner.For instance, methods such as YOLOX and Siam-AM have been utilized to extract the skeletal features of cows for identification purposes [6][7][8].Furthermore, the installation of tags on cows enables the measurement of acceleration data through the use of convolutional neural networks (CNNs) [9,10].A cow's rumination, eating, and activity behaviors can be analyzed through measured acceleration data [11,12].The deployment of computer vision-specifically, non-contact video monitoring-facilitates the detection of respiratory behavior in cows and also showcases the efficacy of non-intrusive monitoring techniques [13].In addition, the detailed monitoring of the eating behavior of group-raised cows, including variables such as the number of eating times, average eating duration, average eating interval, and total eating time, has been documented, thereby highlighting the importance of a precise behavioral analysis in agricultural settings [14].The application of computer vision for the purpose of extracting discriminant features from the body part coordinates of cows further supports the identification of estrus periods, thus demonstrating the utility of machine learning in reproductive management [15].
Ultra-high frequency radiofrequency (RF) waves have been used to collect information from neck-mounted sensor tags equipped with accelerometers in order to assess the behavior of cows [16].In addition, non-invasive sniffing methods have been used to accurately measure methane emissions [17].Acoustic sensor technology has been used to non-invasively monitor cattle vocalizations [18], as well as to automatically identify and classify the feeding behavior of cow sounds [19].The detection of hoof lesions in cows has been achieved through sound analysis [20].The classification of the chewing and rumination behavior of cows has been achieved by using sound signals and machine learning [21].The detection of cow eating behavior and activities has been delivered through earhook sensors [22].Furthermore, the real-time body temperature of cows has also been detected [23].
There have also been studies on the detection of lameness in cows using certain pedometers and three-axis acceleration sensors [24,25] or to detect differences in eating behavior through lameness [26,27].Pressure sensors can effectively sense the mandibular movement of cows, in order to detect their basic behavior [28][29][30], and collar-mounted three-axis acceleration sensors have also been used to classify cow behavior [31][32][33].
In this work, we develop a behavioral data collection device that is designed to be worn on the nose of cows.Although the data used in this study were still obtained using the aforementioned three-axis acceleration sensor, the difference between the device in this study and those in previous research is that, for the first time, behavioral data can now be collected from the nose of a cow.Moreover, the proposed device is more accurate than the traditional method for detecting the rumination, feeding, and other behaviors of cows, as it records these behaviors of cows more clearly than when using behavioral data alone.
Animal Housing
This experiment was carried out from 3 June 2023 to 7 June 2023 and 11 June 2023 to 16 June 2023.The data were collected from the cows owned by farmers in Xuniban Village, Hohhot City, Inner Mongolia for 11 days.Data were collected from one cow at a time, for 5-6 h a day; furthermore, some of the data were collected on numerous occasions from certain cows.A total of 7 cows participated in the experiment.The overall block diagram of the system is shown in Figure 1.The reason for selecting seven cows for data collection in this experiment was primarily to assess the correlation between data collected by the cow nose rings and specific cow behaviors such as feeding and rumination.Since these behaviors are generally similar among cows and do not exhibit significant differences, using a larger sample would have likely yielded redundant results and increased experimental costs without enhancing the scientific value.The cows had feeding areas and free movement areas in their homes.Cows are easily frightened when they see strangers, which can lead to irregular activities.Therefore, the farmers chose to set up cameras outside the site for filming.The cows were outfitted with nose rings, the physical representation of which is depicted in Figure 2.
Hardware Design
The nasal ring device design integrated various components to facilitate efficient realtime data collection and transmission.The microprocessor served as the central processing unit, which managed data collection, preliminary data processing, and coordination with the LoRa module for the purpose of data transmission.The accelerometer captured motion data, which were crucial for analyzing cow behaviors such as feeding, rumination, and other behaviors.The LoRa wireless transmission module was a critical component, as it enabled long-range, low-power communication between the data collection nodes and also aided in the creation of a central data repository or control center.The power module ensured a consistent energy supply for the operation of the device.The described setup employed a nose ring that was equipped with a wireless sensor to capture tri-axial acceleration information from the cow's nasal region.These data were wirelessly transmitted via a module to a LoRa base station and were then serially sent to a supervisory control system.The device shell is shown in Figure 3. 1. Microprocessor: The STM32L051K8 microprocessor was selected for its low degree of power use, which is ideal for wearables and remote sensors.The STM32L051K8 microprocessor is manufactured by STMicroelectronics, which is headquartered in Geneva, Switzerland.It features a 32 MHz ARM Cortex-M0+ CPU, 64 KB flash, 8 KB SRAM, and operates with a power usage between 1.8 and 3.6 V.In addition, it also supports multiple power-saving modes; as such, it can reduce power consumption according to the user's preference.
2. Accelerometer: The ADXL362 accelerometer, known for its energy-efficient, threeaxis MEMS design, was selected due to its minimal power requirements.The ADXL362 is manufactured by Analog Devices, Inc., headquartered in Wilmington, MA, USA.Its FIFO feature minimizes the microprocessor's data-saving work, thus resulting in extending low-power sleep cycles and saving energy.It has a 512-sample FIFO capacity that can store large quantities of X, Y, and Z data, thus enabling continuous data collection while the microprocessor is asleep.
3. Communication chip: The SX1278 chip, from Semtech's SX127x series, was selected for its efficient, long-range LoRa communication capabilities as it outperforms traditional RF methods with a range of 137-525 MHz and is capable of exceeding 10 km.The SX1278 chip is manufactured by Semtech Corporation, headquartered in Camarillo, CA, USA.It also includes error coding for reliability, a 256-byte data packet engine with CRC, and automatic RF detection with RSSI.In a LoRa network, differentiating multiple devices is straightforward via assigning unique device numbers; this allows for simultaneous communications to be queued and processed in order by the base station.We set the transmit power of the SX1278 chip to −4 dBm, the spreading factor to 256, and the channel frequency to 2.4 GHz.Movement changed the communication distance between the two nodes.The communication distance was set to 20, 40, 80, 100, 120, and 150 m.In addition, 300 data packets were sent over a certain distance for testing.Please see Appendix A Table A1 for test data.
Power module:
The power module for the cow nose ring used a 3.6 V, 2450 mA Saft14500 lithium battery, which was chosen due to its high energy density.The battery is manufactured by Saft, which is headquartered in Levallois-Perret, France.This allowed it to hold more energy than other batteries of a similar size, thus providing durable power without added bulk or weight.This battery ensured that the device could operate continuously for 5-6 months.
The proposed system was aimed at minimizing power consumption while ensuring continuous data collection and transmission, which was achieved through carefully selecting the microprocessor, accelerometer, communication chip, and power module.This aligns with the objective of efficient and sustainable livestock monitoring in large-scale dairy farming operations.The hardware circuit block diagram is shown in Figure 4.
Data Set Establishment and Classification Model Design 2.2.1. Definition of the Behaviors
In this study, cattle behavior was divided into three categories, namely, feeding, rumination, and other behaviors.Other behaviors were understood as including all behaviors except feeding and rumination.Their definitions are detailed in Table 1.
Behaviors Abbreviation Description
Feeding behavior FB Defined as cows standing still with their legs extended and their necks bent to eat.
Rumination behavior RB Defined as a cow standing or lying still and chewing the cud repeatedly.
Other behaviors OB
Defined as all behaviors other than eating and ruminating, including walking back and forth, tilting the head, standing still, sleeping, and so on.
Data Acquisition and Pre-Processing
The activity level of 7 healthy cows in the same free-moving breeding area of the study site farm was tested for 11 days in order to verify the detection performance of the nose ring device.A single cow wore the device every day from 8 am to noon.The data for each cow included approximately 113,865 groups (one X, Y, Z axis acceleration represents one group), with a total of 1,252,522 groups.The behavior of the cows was recorded via cameras, which were time-synchronized with the wireless sensors.The camera settings allowed for real-time synchronization, so the recording time matched the actual time.Data from the cow nose rings were uploaded to a base station, which then transmitted them to a supervisory control system for reception.Upon reception, the data were timestamped to ensure that the video recording time aligned with the data upload time.Since the video was recorded continuously, there were no gaps in the video data.See Appendix A Figure A1 for raw data.
If there was any loss of cow behavior data, the extent of the missing data was determined manually.If a significant quantity of behavior data were missing, data annotation was halted until new behavior data that synchronized with the video time became available.
The entire process involved meticulous observation by researchers who watched the video alongside the cow behavior data for accurate annotation.
If the data loss was minor, such as the loss of a single set of XYZ three-axis acceleration data corresponding to 1 ms of time, it was deemed insignificant to the experiment, and data annotation continued.Once data annotation was complete, a portion of the annotated data was selected to plot the three-axis acceleration data charts.These charts were reviewed alongside the video to ensure that the data annotation was accurate.Both feeding and rumination behaviors in cows were similar; hence, the patterns in the three-axis acceleration charts remained consistent, which assisted in verifying the accuracy of the data annotation through a visual observation of the charts and video.After the data were annotated, the acceleration time series was cut into same-length segments, and then experiments were performed on the feature value extraction and behavior classification recognition on each data segment.
In this study, we meticulously segmented the original acceleration data into timelength units to enhance the precision and efficiency of the cow behavioral posture recognition, where data completeness, discriminability, and computational efficiency were prioritized in order to find the optimal segment length.After evaluating different duration periods (i.e., 3.2, 6.4, and 12.8 s) for their impact on classification accuracy, 6.4 s emerged as the optimal length for a single data segment.
A 6.4 s segment was found to effectively capture cow behavioral changes as it included sufficient action sequences for accurate behavior differentiation.Segments of 3.2 s may fail to encompass all of the features of certain behaviors, especially complex or longer actions like rumination or partial resting, and this could lead to incomplete data and lower recognition accuracy.Conversely, 12.8-s segments could blend multiple behaviors into one segment, thus complicating classification and diminishing recognition accuracy.
From a computational efficiency and real-time performance standpoint, 6.4 s segments balanced data integrity and discernibility with processing efficiency.While longer segments (e.g., 12.8 s) might reduce data volume, they could also introduce latency in real-time applications, thus affecting system responsiveness and monitoring capabilities.Meanwhile, 6.4 s segments provided a compromise by ensuring accurate behavior recognition alongside swift processing and real-time feedback.Please see Appendix A Figure A2 for the processed data.
The segmented motion data formed the feature vector X, with the corresponding cow behavior posture serving as the target label Y.
Among them, the behavior feature represented the behavior of the cow under that acceleration.The corresponding behaviors were designated in numbers, as shown in Table 2.
Behavior Number
Feeding behavior 0 Rumination behavior 1 Other behaviors 2 The data set was published at https://www.kaggle.com/datasets/fandaoerji/cownose-ring-data-set(CNRD), accessed on 1 November 2023.The acceleration curve of the rumination behavior is shown in Figure 5.We also observed significant differences between the different behaviors in the distribution range and change amplitude of the acceleration data.The acceleration data of the feeding behavior were distributed in a wide range and varied greatly, which may be related to the frequent head movements and body position adjustments of the cows during feeding.In contrast, the acceleration distribution of the rumination behavior was relatively concentrated, thus indicating that the movements of the cows during rumination were relatively stable and were mainly limited to the rhythmic up and down movement of the head.For the other behaviors, the amplitude of the acceleration changes was not only greater than that of the feeding and rumination behavior, but also changed irregularly, thereby reflecting that the acceleration changes in the cows during other behaviors were more dramatic and changeable.The acceleration curve of the feeding behavior is shown in Figure 6.The length of time a cow takes to eat can be determined by analyzing the Y-axis acceleration, and the feed intake of a cow in a day can be determined based on information such as the length of time and feed weight.After the cows wore the device, the researchers collected the video of the cow from that day and then annotated the real-time behavior of the cow with the collected cow behavior data.Through comparison with the simultaneously recorded video data, we further verified the characteristic changes in the acceleration signal in the different behavioral states.The clear increase in three-axis acceleration when the cows performed other activities corresponded to their active physical activities.In the moving state, the acceleration value was irregular, which was consistent with the state of the cow moving at will.During rumination, the acceleration signal showed a relatively small fluctuation amplitude and tended to be stable, which was consistent with the more orderly and limited head movement during rumination.The different value ranges of these acceleration signals not only represented the distinction between different behavioral states, but their time series correspondence also provided accurate time markers for the automatic recognition of behavioral patterns.
Classification Model Design
The long short-term memory network (LSTM) was selected as the cow behavior classification model [34].The reason for choosing the LSTM model was to verify that the acceleration information collected by the equipment could still accurately analyze whether the cow behavior information corresponded to the data without relying on an overly complex model.This kind of classifier is widely used in behavior recognition research and has strong classification and recognition capabilities.The LSTM network can effectively process long-term series data through an internal gating mechanism and can capture long-term dependencies, which means that it can remember past information and use that information in subsequent time steps.Moreover, the LSTM model can extract rich contextual information from the input sequence and use that information in classification tasks to improve performance.The model structure is shown in Figure 7.The first layer was an LSTM layer of 16 neurons, which were designed to process the input sequence.The second layer added a dense layer with 16 neurons and ReLU activation as a fully connected layer that introduced non-linearity, and it helped to learn complex patterns from the output of the LSTM.The last layer added another dense layer, which contained 3 neurons and a softmax activation function.This layer output the probability that the input belonged to one of three categories, thus making it suitable for multi-category classification tasks.
Evaluation Index
This article used the recall rate and F1 Score coefficient to evaluate the behavior classification effect of the model.The formulas are as follows:
Training Environment and Equipment Description
In this study, we used the Tensorflow framework.The training environment and equipment description are shown in Table 3.
Results
In summary, through in-depth analyses of the dairy cow behavior acceleration data, this study not only demonstrated clear differences in the acceleration characteristics between feeding, rumination, and other behaviors, but also revealed the biophysical mechanisms behind these differences.These findings provide a solid foundation for further developments of highly accurate cow behavior monitoring and automated management systems.We could clearly distinguish the subtle differences between the chewing and resting intervals of cows by analyzing the acceleration time series of the cow behavior, especially the changes in Z-axis acceleration.The specific fluctuation pattern of a Z-axis acceleration directly reflected the frequency and amplitude of the cow's head moving up and down.This movement characteristic was particularly significant during rumination.In addition, through the observation of feeding behavior, we noticed synchronous fluctuations in the X-axis and Z-axis accelerations, which showed that the forward, backward, up, and down movements of the cow's head were regular when eating, thus reflecting the periodicity of its eating behavior features.The number and times of the chewing when the cow ruminated could be determined by analyzing the peak value of the Z-axis acceleration.
The LSTM network model was used to accurately identify the three daily behaviors of cattle: eating, ruminating, and other behaviors.Moreover, the three metrics of precision, recall, and F1-score were found to be at high levels; in addition, all three values were similar.The model achieved a clear distinction between eating and rumination behaviors, and also reached the recognition level required in the industry.These results indicate that the LSTM network model can classify the various daily behaviors of cattle and provide technical support for smart breeding.When comparing the effects of the different time interception lengths (i.e., 3.2, 6.4, and 12.8 s) with respect to the accuracy of cow behavior recognition, 6.4 s was selected as the ideal time interception length, as the accuracy of the cow behavior recognition was at its highest.The accuracy of the cow behavior recognition at different interception times is shown in Table 4.The classification results and evaluation indicators of the LSTM model under 6.4 s are shown in Table 5.The confusion matrix of the LSTM model is shown in Figure 8. Mutual misclassifications of the feeding and rumination behaviors was relatively rare, which indicated that the identification of these two behaviors was delivered with a high accuracy.However, these two types of behaviors were sometimes misclassified as other behaviors.The reason for this phenomenon may be that the cows were occasionally disturbed during the feeding and rumination process, such as when two cows were competing for food or when the surrounding environment changed.Cows suspend rumination or eating behavior and switch to other behaviors in these cases.Such behaviors are rare, but they will cause the behavior data to change from regular to irregular, resulting in the model misclassifying the behavior.
Personalized health and nutritional management can be achieved by continuously monitoring the physiological and behavioral indicators of a cow.For example, based on a cow's activity level and feeding behavior, its dietary composition and supply can be adjusted to meet its specific nutritional needs.Behavioral changes in cows are often early signs of health problems.The rest duration, feeding, and rumination behavior can be determined in real-time through monitoring and analyzing cow activity.Our system can identify potential health issues, such as estrus, disease, or nutritional deficiencies early, thus allowing for timely intervention.The complex inner connections between physiological behavior and the health status of dairy cows can be revealed when a large quantity of collected data are obtained.These analysis results can provide farmers with scientific decision-making support through information on metrics such as optimal breeding time, health management measures, and nutritional adjustments.The discomfort and stress of the cows, in terms of wearing the equipment, should be reduced as much as possible, and they should be in line with the principles of animal welfare.In comparison with traditional cow monitoring equipment, the first-generation pedometer used in the literature mainly follows the principle of a pedometer to record the number of steps and movements of the cow through the installation of sensors on a cow's legs.The second-generation neck ring adopts more advanced technology and integrates 3D-accelerated sensing devices such as monitors and timers.In addition to accurate cow number identification, the second-generation neck ring can also collect various data such as activity level and estrus characteristics.However, these first two generations of dairy cow monitoring equipment alone cannot accurately identify the two key behaviors of dairy cows (i.e., eating and ruminating).
Discussion
Accurate monitoring of the physiological and behavioral indicators of dairy cows is essential to improve breeding efficiency and animal welfare.Although monitoring devices currently on the market, such as neck rings or pedometers, are effective in tracking animal activities, they are evidently deficient in comprehensively and accurately collecting key physiological indicators.In addition, such devices cannot provide detailed data on behaviors such as feeding and rumination.The nose ring's close correlation with the cow's mouth movements, as well as the AI model used, enabled it to capture and record behavior in greater detail in the pursuit of reliable data.The application of this equipment is expected to significantly improve the management efficiency and decision-making quality of the dairy farming industry, as well as promoting cost savings and efficiency improvements.
For our experiment, we collected acceleration data of the three daily behaviors of cattle (i.e., eating, ruminating, and other behaviors) via nose ring monitoring.Moreover, we then performed LSTM classification on the collected data.The results showed that the LSTM algorithm could identify the three behaviors of ruminating, eating, and other behaviors; having said that, however, there is currently no good way to distinguish between standing and lying-down behaviors.Thus, there is still a need to further improve the model's recognition effect of these two behaviors, or a different, larger model should be used instead.The purpose of this experiment was to test the device on the cow's nose and to establish whether there was a better recognition rate for cattle eating and rumination behavior; as such, the experiment was not performed with a larger model.At the same time, because cows do not stand and then lie down many times in a day, the data set had fewer categories for these two behaviors.Therefore, the model did not learn these two types of behaviors enough; thus, it could not accurately identify standing and lying behaviors.The results of this test can serve as a reference for improving the recognition level of cow behavior categories, as the degree of movement that a cow undertakes will change significantly when the cow in question has estrus or has developed hoof disease.Therefore, this algorithm can provide a reference and relevant theoretical support for the monitoring of cattle estrus and hoof disease.
There is still a long way to go in order to meet the demand for cow behavior detection systems.Although the work completed in this article met the proposed experimental requirements, there are still many problems that need to be addressed in the future.Some of the limitations of this study that are worthy of further exploration and research are as follows: (1) Limited data set and sample size: This study involved a data set collected from seven cows, which, although sufficient for a preliminary analysis, may not fully represent the behavioral diversity of cows of different breeds, in different environments or health conditions.Expanding the data set will help to develop more powerful and general models.
(2) Single mode of behavior recognition: This research mainly relied on three-axis acceleration data for cow behavior recognition.While eating and rumination behaviors can be effectively captured, integrating other modalities such as acoustic signals, video analysis, or physiological sensors will increase the accuracy and range of behavioral identification, especially with respect to the complex behaviors that are difficult to distinguish through acceleration measurements alone.
(3) Focus on specific behaviors: Emphasizing eating and rumination behaviors is critical for health monitoring and productivity assessment.However, the inclusion of other behaviors such as social interactions, detection of heat, and signs of distress or disease can provide a more comprehensive understanding of dairy cow welfare and management needs.
(4) Energy efficiency and equipment design: although the equipment designed in this study is innovative, continued improvements in energy efficiency and wear resistance (e.g., reduced size, improved cow comfort, and so on) will further improve the application of this technology in actual agricultural settings, as well as in applicability and acceptance.
(5) Universality of the machine learning model: This study demonstrates the feasibility of using machine learning for behavioral classification based on acceleration data.Future work could explore the robustness of these models across different farms, species, and environmental conditions to ensure their broad applicability.
Conclusions
This study used a wearable cow nose ring device to collect cow movement data and also used deep learning algorithms to classify the different types of cow behaviors in order to help farm workers effectively determine the health level and estrus period of their cows.The three-axis acceleration data of the three identified behaviors of dairy cows-namely, eating, ruminating, and other behavior-were collected multiple times, and the long shortterm memory network (LSTM) algorithm was used to successfully establish a behavior recognition model for dairy cows.The classification results showed that the LSTM model delivered accuracies of 81%, 86%, and 88% in terms of identifying the three behaviors of feeding, rumination, and other behaviors, respectively.Thus, the proposed model suggests a reference significance for wearable devices regarding cow behavior recognition.
For farmers, this equipment will be an effective tool for monitoring and preventing cow diseases.It can prevent the spread of cow diseases through identifying and curing them when they first appear.Such an approach could then reduce the economic losses caused by cow diseases in pastures and at the same time, it could improve the efficiency of detecting cow diseases, as well as helping to improve the welfare of dairy cows generally.
For large-scale farms, having too many cows makes it impossible for staff to immediately and accurately detect behavioral abnormalities and diseases in a certain cow.However, the proposed device can help to collect and analyze cow behavior data and can also aid in providing timely explanations of a cow's behavior.Having more information on the behavioral conditions of cows can help farmers to manage their dairy herds.
Future research may involve collecting and analyzing behavioral data from a larger, more diverse population of dairy cows that are of different breeds, of different health status, and in different environmental conditions.This will help to develop more general and powerful behavior recognition models.Integrating other data modalities, such as visual (video analysis), auditory (sound analysis), or physiological (heart rate and body temperature) sensors, may help to significantly enhance the system's capabilities, allowing for the identification of a wider range of behaviors and health conditions accurately.Expanding the range of recognized behaviors to include social interactions, estrus behavior, and the early signs of health problems could provide more comprehensive insights into farm management and animal welfare.Continued advances in sensor technology, energy harvesting, and materials science could lead to more efficient, durable, and cow-friendly wearable devices.This could include developing devices that are biodegradable or more environmentally friendly.Leveraging advances in artificial intelligence and machine learning, including deep learning and neural networks, could improve the accuracy and efficiency of behavioral classification.Exploring unsupervised or semi-supervised learning models may also reveal new insights into cow behavioral patterns.Developing systems that not only monitor and analyze cow behavior but also provide real-time alerts and recommendations for intervention could significantly improve farm management practices and animal welfare.Collaboration between technical experts, veterinarians, animal behaviorists, and
Figure 1 .
Figure 1.A block diagram of the system's structure.
Figure 2 .
Figure 2. A cow wearing the proposed equipment.
Figure 3 .
Figure 3.The cow nose ring shell.
Figure 4 .
Figure 4.A diagram of the circuit block hardware.
Table 3 .
The training environment and equipment description.
Table 4 .
Accuracy of cow behavior recognition at different time interception lengths.
Table 5 .
Model classification results and evaluation indicators. | 2024-04-17T15:31:08.568Z | 2024-04-01T00:00:00.000 | {
"year": 2024,
"sha1": "0c341df0ecb5fc61ea3b74ecb70c86a38fe0f82a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-2615/14/8/1187/pdf?version=1713176668",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "01ba93a04247595a5c3d57def778a5ad8fe625a1",
"s2fieldsofstudy": [
"Agricultural and Food Sciences",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
241985774 | pes2o/s2orc | v3-fos-license | Quality Assessment of 3D Synthesized Images Based on Textural and Structural Distortion Estimation
Emerging 3D-related technologies such as augmented reality, virtual reality, mixed reality, and stereoscopy have gained remarkable growth due to their numerous applications in the entertainment, gaming, and electromedical industries. In particular, the 3D television (3DTV) and free-viewpoint television (FTV) enhance viewers’ television experience by providing immersion. They need an infinite number of views to provide a full parallax to the viewer, which is not practical due to various financial and technological constraints. Therefore, novel 3D views are generated from a set of available views and their depth maps using depth-image-based rendering (DIBR) techniques. The quality of a DIBR-synthesized image may be compromised for several reasons, e.g., inaccurate depth estimation. Since depth is important in this application, inaccuracies in depth maps lead to different textural and structural distortions that degrade the quality of the generated image and result in a poor quality of experience (QoE). Therefore, quality assessment DIBR-generated images are essential to guarantee an appreciative QoE. This paper aims at estimating the quality of DIBR-synthesized images and proposes a novel 3D objective image quality metric. The proposed algorithm aims to measure both textural and structural distortions in the DIBR image by exploiting the contrast sensitivity and the Hausdorff distance, respectively. The two measures are combined to estimate an overall quality score. The experimental evaluations performed on the benchmark MCL-3D dataset show that the proposed metric is reliable and accurate, and performs better than existing 2D and 3D quality assessment metrics.
Introduction
Three-dimensional (3D) technologies, e.g., augmented reality, virtual reality, mixed reality, and stereoscopy, have lately enjoyed remarkable growth due to their numerous applications in the entertainment industry, gaming industry, for electro-medical equipment, etc. 3D television (3DTV) and the recent free-viewpoint television (FTV) [1] have enhanced users' television experience by providing immersion. 3DTV projects two views of the same scene from slightly different viewpoints to provide the depth sensation. The FTV, in addition to the immersive experience, enables the viewer to enjoy the scene from different viewpoints by changing his/her position in front of the television. To provide a full parallax, FTV needs dozens of views, ideally an infinite number of views. Capturing, coding, and transmitting such a large number of views is not practical due to various financial and technological constraints, such as limited available bandwidth. Therefore, novel 3D video (3DV) formats and representations have been explored to design compression-friendly and cost-efficient solutions. The multiview video plus depth (MVD) format is considered to be the most suitable for 3D televisions. In addition to color images, MVD also provides the corresponding depth maps, which represent the geometry of the 3D scene.
The additional dimension of the depth in MVD provides the ability to generate novel views from a set of available views using the depth-image-based rendering (DIBR) technique [2], thus enabling the stereoscopy. The quality of the synthesized views is important for a pleasant user experience. Since the depth maps are usually generated using stereomatching algorithms [3], they are not accurate. The inaccuracies in depth maps, when used in DIBR, might introduce various distortions in the synthesized images degrading their quality and resulting in a poor quality of experience (QoE). Thus, assessing the quality of the DIBR-synthesized views is necessary to ensure a satisfactory user experience.
Inaccuracies in depth maps cause textural and structural distortions such as ghost artifacts and inconsistent object shifts in the synthesized views [4][5][6][7][8]. Texture and depth compression also introduce artifacts in the virtual images [9,10]. Another factor that causes degradation in virtual image quality is occluded areas in the original view that become visible in the virtual view, which are called holes. These holes are usually estimated using image inpainting techniques that do not always produce a pleasant reconstruction. Figure 1a shows the artifacts introduced in a synthesized view due to visible occluded regions. Note the distorted face of a spectator in Figure 1b because of erroneous depth in DIBR. The various structural and textural distortions introduced in DIBR images may affect the picture quality, the depth sensation, and the visual comfort, which are considered three main factors of user quality-of-experience (QoE) [6]. Besides viewing experience, studies show that the distortion in 3D images can affect the performance of various applications designed for the 3D environment, such as image saliency detection, video target tracking, face detection, and event detection [11][12][13]. This means that the image quality is very important not only for viewer satisfaction in a stereoscopic environment but also for various 3D applications built for this environment. Therefore, 3D image quality assessment (3D-IQA) is an essential part of the 3D video processing chain.
In this paper, we propose a 3D-IQA metric to estimate the quality of DIBR-synthesized images. The proposed metric aims to measure the structural and textural distortions introduced in the synthesized image due to depth-image-based rendering and combines them to predict the overall quality of the image. The structural details in an image are considered important for their quality as the human visual system (HVS) is more sensitive to them [14,15]. It is the difference between luminance or color that makes the representation of an object or the main features of an image distinguishable. The distortion in these features, referred to as textural distortion, is also important for a true image quality estimation. The textural and structural metric scores are combined to obtain an overall quality score.
The rest of the paper is organized in the following way. Section 2 reviews the related literature, Section 3 presents the proposed 3D-IQA technique. The experimental evaluation of the proposed metric is carried out in Section 4 and we conclude the research in Section 5.
Related Work
The quality of an image can be either assessed through subjective tests or by using an automated objective metric [16]. As human eyes are the ultimate receiver of the image, a subjective test is certainly the best and the most reliable way to assess the visual image quality. In such tests, a set of human observers assigns quality scores to the image, which are averaged to get one score. This method, however, is a time-consuming and expensive approach. Therefore, it was felt necessary to introduce an automatic and fast way to assess the quality of an image. This provides the opportunity for researchers to introduce objective metrics for quantitative image quality evaluation, which proves to be a significant improvement in the field of image quality assessment.
Objective image and video quality metrics can be grouped into three classes based on the availability of the original reference images: full-reference (FR), no-reference (NR), and reduced-reference (RR) [17]. The IQA metric that requires the original reference image to evaluate the quality of its distorted version is referred to as a full-reference metric. The IQA approach that assesses the quality of an image in the absence of a corresponding reference image is classified as the no-reference metric. The reduced-reference metrics lie between the two categories, they do not require the reference images but some of their features must be available for comparison.
In the literature, several 2D and 3D objective quality assessment metrics have been proposed to assess visual image quality. Initially, 2D metrics were used to assess the quality of 3D content, however, the use of conventional 2D metrics was found inappropriate to assess the true quality of 3D images due to several additional factors of 3D videos that were not considered by 2D-IQA algorithms [18][19][20]. Therefore, novel IQA algorithms were needed to evaluate the quality of 3D videos. Such algorithms, in addition to 2D-related artifacts, must also consider artifacts introduced due to the additional dimension of depth in the videos.
In recent years, several algorithms have been proposed to evaluate the quality of 3D images. Many of them utilize the existing 2D quality metrics for this purpose, e.g., [21][22][23][24]. Since these algorithms rely on metrics especially designed for 2D images, they do not consider the most important factor of 3D images, i.e., depth, and therefore they are not accurate and reliable.
Many 3D-IQA techniques consider depth/disparity information while assessing the quality of 3D images, e.g., [25][26][27][28]. You et al. [19] adopted a belief-propagation-based method to estimate the disparity and combined the quality maps of distorted image and distorted disparity computed using conventional 2D metrics. The method proposed in [25] exploits the disparity as well as binocular rivalry to determine the quality. It uses the Multi-scale Structural Similarity Index Measure (MSSIM) [29] metric to evaluate the quality of disparity of stereo images. Zhan et al. [26] presented a machine-learning-based method that works by learning the features from 2D-IQA metrics and specially designed 3D features using the Scale Invariant Feature Transform (SIFT) flow algorithm [30], and was used to obtain the depth information. The different features of disparity and three types of distortions (blur, noise, and compression) were used by [28] in evaluating the quality of 3D images. These features were used to train a quality prediction model by using the random forest regression algorithm. The method proposed in [18] addressed the issue of structural distortion in a synthesized view due to DIBR, but the method is limited to structural distortions so it cannot be used to evaluate the overall quality of the image.
The 3D-IQA method presented in [31] identifies the disocclusion edges in the synthesized image and inversely maps them to the original image, and the corresponding regions are then compared to assess the quality. The algorithm in [32] uses feature matching points in the synthesized and reference images to compute the quality degradation. The Just Notice Difference (JND) model is exploited in [33] to compute the global sharpness and distortion in holes in the DIBR image to assess its quality. The quality metric proposed in [34] identifies the critical blocks in the DIBR synthesized image and the reference image. The texture and color contrast similarities between these blocks are compared to estimate the quality of the synthesized image. The method in [35] works by extracting the features of energy-weighted spatial and temporal information and entropy. Then, support vector regression uses these features for depth estimation. Gorley et al. proposed a stereo-bandlimited contrast method in [36] that considers contrast sensitivity and luminance changes as important factors for the assessment of image quality. The method presented in [37] extracts the natural scene features from a discrete cosine transform (DCT) domain, and a deep belief network (DBN) model was trained to get the deep features. These generated deep features and DMOS values were used to train a support vector regression (SVR) model to predict the image quality. The learning framework proposed in [38] also uses a regression model to learn the features and besides assessing the quality, it also improves the quality of stereo images. The method proposed in [39] considers the global visual characteristics by using structural similarities and the local quality was evaluated by computing the local magnitude and local phase. The global and local quality scores were combined to get the final score.
Binocular perception or binocular rivalry is an important factor in 3D image quality assessment [40,41]. Humans perceive images with both eyes and it is obvious that there is a difference between the perceptions of the left and the right eye in relation to an image. Indeed, binocular rivalry is the visual perception phenomenon in which there exists a difference in the perception of an image when it is seen from the left eye and the right eye. This difference is called the binocular parallax or binocular disparity. The binocular disparity can be divided into horizontal and vertical parallax. The horizontal parallax affects depth perception and the vertical parallax affects visual comfort [37]. This binocular perception was taken into account in [42] and a binocular fusion process was proposed for quality assessment of stereoscopic images. The 3D-IQA metric proposed in [41] is also based on binocular visual characteristics. A learning-based metric [43] uses binocular receptive field properties for assessing the quality of stereo images. Shao et al. [44] proposed a metric that simplifies the process of binocular quality prediction by dividing the problem into monocular feature encoding and binocular feature combination.
Lin et al. combine binocular integration behaviors such as binocular combination and binocular frequency integration with conventional 2D metrics in [45] to evaluate the quality of stereo images. Binocular spatial sensitivity influenced by binocular fusion and binocular rivalry properties was taken into consideration in [46]. The method proposed in [47] uses binocular responses, e.g., binocular energy response (BER), binocular rivalry response (BRR), and local structure distribution, for 3D-IQA. Quality assessment of asymmetrically distorted stereoscopic images was targeted in [48]. The method is inspired by binocular rivalry and it uses estimated disparity and Gabor filter responses to create an intermediate synthesized view whose quality is estimated using 2D-IQA algorithms. A multi-scale model using binocular rivalry is presented in [49] for quality assessment of 3D images. Numerous other 3D-IQA algorithms use binocular cues for evaluating the quality of 3D images, e.g., [50][51][52].
The Proposed Technique
In multiview video-plus-depth (MVD) format, depth-image-based rendering (DIBR) is used to generate virtual views at novel viewpoints to support 3D vision in stereoscopic and autostereoscopic displays. The DIBR obtains the virtual view by warping the original left and right views to a virtual viewpoint with the help of the corresponding depth maps. As discussed earlier, when the virtual view is generated its quality may degrade due to several structural or textural distortions introduced during synthesis. The major cause of these distortions is the inaccurate depth. This inaccuracy in the depth estimates and other compression-related artifacts can cause several distortions in the synthesized image, such as ghost artifacts, holes, and blurry regions, as shown in Figure 1. These distortions degrade the image quality and eventually result in poor overall user quality of experience (QoE). Estimating the quality of the synthesized image is therefore important to ensure better QoE. We propose a 3D-IQA metric that attempts to estimate the distortions introduced in synthesized images. Specifically, the proposed metric is a combination of two measures: one estimates the variations in the texture and the other calculates the deterioration in the structures in the image.
Estimating the Textural Distortion
Textures are complex visual patterns, composed of spatially organized entities that have characteristic brightness, color, shape, and size. The texture is an important discriminant characteristic of an image region [53] and can be used for various purposes such as segmentation, classification, and synthesis [54]. Image texture gives us information about the spatial arrangement of color or intensities in an image or a selected region of an image. During the process of DIBR, the texture of the synthesized image can be adversely affected due to object shifting, incorrect rendering of textured areas, and blurry regions [55]. Object shifting may cause translation or changes in the size of the region in the synthesized view. Due to the translation of objects, the occluded areas in the original view may become visible in the synthesized view, and these are known as holes. These holes are usually estimated using image inpainting techniques that do not always produce accurate reconstruction and result in the incorrect rendering of texture areas and blurry regions in the synthesized view. Given a DIBR-synthesized image and its corresponding reference, the proposed metric estimates the texture distortion by computing the local variations in their contrasts.
Image contrast is an important feature of texture, a basic perceptual attribute and also an important characteristic of the human visual system (HVS) [56,57]. Contrast sensitivity is one of the dominating factors in the research of visual perception [58]. It can be defined as the difference between luminance or color that makes the representation of an object distinguishable. The most famous contrast computation methods are the Michelson and Weber contrast formulas [58]. There are a few methods that use some form of contrast to assess the quality of images [36,[59][60][61].
The proposed metric captures the local variation in contrast of the synthesized image and its reference image. The two images are low-pass filtered to smooth their high spatial frequencies. This is achieved with a small Gaussian filter w of size 3 × 3.
where α g is a normalization term that ensures ∑ w i,j = 1 and σ g is Gaussian variance, which controls the weight distribution and the filter size. Let I and R be the filtered synthesized image and its reference image of size M × N. Let x ij represent a block of size m × n in image I centered at pixel (i, j), and y ij be its corresponding block in reference image R centered at pixel location (i, j). Let x i and y i represent the i-th corresponding blocks of I and R. The mean µ, variance σ 2 , and standard deviations σ of a block x ij are computed.
These statistics for y ij are computed analogously. The variation in contrast ψ ij between the blocks x ij and y ij is then computed.
where c is a small constant used to stabilize the equation. The ψ scores of all pixels in I are computed and averaged to obtain the texture distortion score T of the synthesized image.
Estimating the Structural Distortion
The study presented in [14] shows that the human visual system (HVS) is highly adapted for extracting structural information from the image. The inaccuracies and compression artifacts in the depth map adversely affect the structural details of the image during the process of DIBR, generally distorting the edges and gradients in the images [55,60]. The depth compression may cause the pixels to be lost or wrongly projected in the synthesized view. Similarly, the estimation inaccuracies in the depth cause ghost artifacts, inconsistent object shift, and distortion of edges in the synthesized view. These distortions in the image affect both the texture and the structure of the image. Therefore it is equally important to compute the structural dissimilarities in the image to assess its quality. Several methods are proposed to compute the structural similarity in 2D images, e.g., [29,60,[62][63][64][65].
We used the Hausdorff distance [66] to compute the structural similarity score. The Hausdorff distance measures the degree of mismatch between two sets [66,67]. Similar to a texture distortion metric, this mismatch is also computed locally. The Hausdorff distance can be computed for grayscale images, e.g., [68], and for binary images [66,67]. In the proposed metric, since we want to estimate the distortion in the structural details in the warped image compared to the reference image, the edges in the two images are detected and these edge images are used to estimate the degree of mismatch. Any edge detector can be used for this purpose, however, similar to [66], in our study we used the Canny edge detector [69] to compute the edge maps. The Hausdorff distance between two image blocks x ij and y ij of size m × n centered at location (i, j) in image I and R, respectively, as defined in the preceding section, is computed as follows: HD(x ij , y ij ) = max(hd(x ij , y ij ), hd(y ij , x ij )) The function hd(x ij , y ij ) is called the directed Hausdorff distance from x ij to y ij and it can be defined as Equation (7) identifies the point a in x ij that is the farthest from any point in y ij and measures its distance from the nearest neighboring point in y ij . The function hD(x ij , y ij ) then ranks each point of x ij according to its distance from the nearest neighbor in y ij and picks the largest distant point from these ranked distances because it is the most mismatched point between the reference and distorted image blocks. Similarly, the directed Hausdorff distance from y ij to x ij is computed. In hd(x ij , y ij ) and hd(y ij , x ij ), the former represents the degree of mismatch between the synthesized and original image block and the latter represents the degree of mismatch between the original and the synthesized image blocks. Then the largest of the two is chosen as the mismatch score (Equation (6)). The obtained value is normalized.
The value of HD ij falls in the interval [0, 1]. Recall that HD ij is the degree of mismatch, and the structural similarity score is computed by subtracting this normalized Hausdorff score (HD ij ) from 1.
The structural scores for all k blocks are computed and averaged to obtain a single score S.
Final Quality Score
The textural and structural scores of the synthesized image computed using Equations (5) and (10), respectively, are combined to compute the overall quality score Q.
The parameter α is used to adjust the relative importance of textural and structural scores. Its value is empirically set to 0.7. Figure 2 shows the results of the proposed metric on a few sample images from the testing dataset. In a stereoscopic environment, the quality scores obtained by the proposed metric for both views are averaged to get a single quality score. Figure 2b-e are images obtained by DIBR synthesis from the two source color and depth images, the source view images were artificially degraded by introducing additive white noise (AWN) at four different levels, with noise control parameters 5, 17, 33, and 53, respectively. Figure 2a shows the corresponding ground truth image. Below each image, the scores estimated by the proposed metric and the respective subjective scores are reported. It can be noted that the visual quality of the synthesized images degrades as the noise in the source color and depth images increases and our metric effectively captures this quality degradation.
Experiments and Results
The performance of the proposed 3D video quality assessment metric was evaluated on the benchmark stereoscopic Media Communications Lab -MCL-3D dataset [70] and compared with other 2D and 3D-IQA metrics. We conducted multiple experiments of different types to evaluate the performance and statistical significance of our proposed method. The results were also compared with existing 2D-and 3D-IQA algorithms.
Dataset
The MCL-3D dataset was used to evaluate the performance of the proposed quality metric. The dataset analyzes the impact of different distortions on the quality of depthimage-based rendering (DIBR) synthesized images. MCL-3D dataset was created by Media Communication Lab, University of Southern California, and is publicly available [70]. The dataset was created from 9 multiview-video-plus-depth (MVD) sequences. The resolution of 3 MVD sequences is 1024 × 768 whereas the remaining 6 sequences have a resolution of 1920 × 1080. The dataset reports 648 mean opinion scores (MOSs) of stereo image pairs generated using DIBR from distorted texture and/or depth images. Six different types of distortions (Gaussian blur (Gauss), additive white noise (AWN), down-sampling blur (Sample), JPEG compression (JPEG), JPEG 2000 compression (JP2K), and transmission loss (transloss)) with four different levels are applied either on texture images, depth images, or both. The distorted texture images and depth maps are used to generate the intermediate middle virtual images by using view synthesis reference software (VSRS) [71], a benchmark DIBR technique for generating synthesized views. Sample reference and distorted DIBR images are shown in Figure 3.
Performance Evaluation Parameters
To evaluate the performance of the proposed method we used different statistical tools, including the Pearson linear correlation coefficient (PLCC), Spearman rank-order correlation coefficient (SROCC), Kendall rank-order correlation coefficient (KROCC), rootmean-square error (RMSE), and mean absolute error (MAE). Before computing these parameters, the scores obtained by objective quality metrics were mapped to subjective deferential mean opinion score (DMOS) values using the nonlinear logistic regression described in [72]: where o is the score obtained by the objective quality metric, DMOS p is the mapped score, and β 1 , . . . , β 5 are the regression parameters. The Pearson linear correlation coefficient (PLCC) is used to determine the linear correlation between two continuous variables. Since this method is based on covariance computation, it is considered the best method for measuring statistical relationships. The method was used in the prediction accuracy test. Let x represent the MOS values, y represent the mapped scores, andx andȳ represent the mean values of x and y, respectively. PLCC is computed as The Pearson correlation coefficient describes how strong the relationship between subjective MOS and evaluated objective scores is. The value lies between −1 and 1. Values closer to 1 represent a strong relationship.
The Spearman rank-order correlation coefficient (SROCC) is a nonparametric measure of rank correlation. It assesses how well the relationship between two variables can be described using a monotonic function. The difference between PLCC and SROCC is that the former only assesses linear relationships whereas the latter assesses monotonic relationships that may or may not be linear. For n observations, the SROCC can be computed as The Kendall rank correlation coefficient (KROCC) is another nonparametric measure to determine the relationship between two continuous variables. Like SROCC, it assesses associations based on the ranks of data. It is used to test the similarities in data when they are ranked by quantities.
where n is the sample size, n c is the number of concordant pairs and n d is the number of discordant pairs. Root-mean-square error (RMSE) is the most widely used performance evaluation measure and it computes the prediction error [73]. Since the method takes the square of the error before computing the average, it gives a relatively high weight to a large error and that is why it is considered an important method for performance evaluation.
Mean absolute error (MAE) is another method to compute the difference between two continuous variables. MAE is a linear score that means that all the individual differences are weighted equally in average.
Since PLCC, SROCC, and KROCC are the correlations and MAE and RMSE are the errors, large values of correlations and small values of errors indicate a better performance of the quality metric.
Performance Comparison with 2D and 3D-IQA Metrics
To evaluate the effectiveness of the proposed method, we compared its performance with various existing 2D and 3D-IQA metrics. We compared the performance of the proposed metric with widely used 2D quality assessment metrics: PSNR, SSIM [60], VSNR [74], IFC [75], MSSIM [29], VIF [76], and UQI [77]. Before computing the performance parameters, the objective scores computed by these metrics were also mapped to MOS values by using the same logistic function mentioned in Equation (12). In all experiments, the implementation of the method provided by the authors or other parties was used. The comparison in terms of all five performance parameters is presented in Table 1. The best results are highlighted in bold for convenience. The results reveal that the proposed algorithm outperforms all the compared 2D-IQA algorithms in all performance parameters. Specifically, the proposed method achieves PLCC 0.8909, SROCC 0.8979, and KROCC 0.7095 with minimum RMSE 1.1816. We also evaluated the performance the proposed metric with thirteen well-known and recent 3D image quality assessment metrics. They include 3DSwIM [55], StSD [78], Chen [25], Benoit [79], Campisi [21], Ryu [50], PQM [80], Gorley [36], You l , You g [19], SIQM [73], NIQSV [81], and ST-SIAQ [82]. The evaluation results presented in Table 2 show that the proposed method outperforms all the compared methods in each performance parameter and achieves the best PLCC (0.8909) with the minimum RMSE (1.1816). The other measures, SROCC, KROCC, and MAE, also reveal that the proposed method performs better than other 3D-IQA metrics. To further investigate the effectiveness of the proposed method, its performance with the compared 2D and 3D-IQA metrics was also evaluated for individual distortion types, i.e., AWN, Gauss, Sample, Transloss, JPEG, and JP2K. Recall that the stereopair images in the dataset were generated through DIBR from the polluted depth and/or the color images with these types of noise. The results of the comparison with 2D and 3D quality metrics in terms of PLCC are reported in Tables 3 and 4. These results show that the proposed metric performs better than the compared method for most individual distortion types. Similar observations were made when evaluated using other performance parameters, i.e., SROCC, KROCC, RMSE, and MAE, which are not shown here to save space.
Variance of the Residual Analysis
Variance is the squared deviation of a measure from its mean. It is generally used in evaluating the efficiency of an image quality assessment metric by finding how much the scores computed by an objective IQA metric are closer to the subjective scores. This is achieved by computing the difference between the predicted scores and actual scores. A small difference indicates that the results computed by the metric are reliable and close to the actual scores. To compute this variance σ 2 , first the residual difference between the DMOS and predicted scores after non-linear mapping (DMOS p ) is computed: The variance σ 2 of each compared 2D and 3D quality metric was computed from its residuals R and the statistics are presented in Table 5. The results show that our method achieves the smallest variance among all compared methods, which means the scores estimated by the proposed method are highly correlated with the subjective ratings. Table 5. Variance of the residuals of subjective ratings and the mapped objective scores of the proposed and the compared 2D and 3D quality metrics. The best results are highlighted in bold.
Statistical Significance Test
The statistical significance test [16,72] helps to determine whether a quality metric is statistically better than the other metric. We conducted this test to statistically verify the performance of the proposed metric. In this experiment, we considered only the 3D-IQA metrics as the previous evaluations have shown that the 2D-IQA algorithms perform rather poorly compared to 3D-IQA approaches in assessing the quality of DIBRsynthesized images. The F-test procedure was used to test the significance of the difference between two quality assessment metrics. In the F-test, we compared the variances of residuals (Equation (18)) of two metrics i and j with the F-ratio threshold to find the statistical significance. The F-ratio threshold was obtained from the F-distribution look-up table with If the σ 2 j σ 2 i ratio is greater than the F-ratio then the metric i is said to be significantly superior to the metric j. Similarly, the metric i is said to be significantly inferior to the metric j if this ratio is less than the p-value. The two metrics are said to be statistically indistinguishable if this ratio lies between the p-value and the F-ratio threshold. The F-ratio is called the right-tail critical value and the p-value is called the left-tail critical value. Both of these values were obtained from the F-distribution look-up table at a 95% significance level. The results of the test are presented in Table 6. Each entry in the table is a codeword of 6 characters corresponding to symbols A, G, J, j, S, and T, which represent the distortions AWN, Gauss, JP2K, JPEG, Sample, and Transloss, respectively. In the codeword, the value '1' means that the performance of metric in the row is significantly superior to the metric in the column, the value '0' means that the metric in the row is significantly inferior to the metric in the column, and −' means that the performance of metric in the row and column is equivalent or statistically indistinguishable. The results demonstrate that except AWN and Transloss, the performance of the proposed metric is significantly superior or equivalent to all the compared 3D-IQA metrics in all distortion types. The experimental evaluations performed on the benchmark DIBR synthesized image dataset showed that the performance of the proposed 3D-IQA metric is appreciably better than the compared 2D and 3D-IQA algorithms. Moreover, the variance and the statistical significance tests also showed that our metric is significantly superior or equivalent to most of the compared 3D-IQA metrics. All these performance analyses reveal that the proposed metric is reliable and accurate in estimating the quality of the DIBR-synthesized views. Table 6. Statistical significance tests of proposed and other 3D-IQA metrics on MCL-3D dataset. Value '1' means the metric in the row is significantly superior to that of the column. Value '0' means the metric in the row is significantly inferior to that of the column and '-' means both the metrics are significantly equivalent. The symbols A, G, J, j, S, and T represent the distortions for AWN, Gauss, JP2K, JPEG, Sample, and Transloss, respectively.
Conclusions
In this paper, a novel 3D-IQA metric was proposed to assess the quality of DIBRsynthesized images. The proposed method merges two metrics, one computing the deterioration in the texture of the synthesized image and the other computing the structural distortions introduced in the synthesized image due to DIBR and various other types of noise. The two measures are weighted-averaged to obtain the overall quality indicator. Experimental evaluations were performed on the MCL-3D dataset, which contains DIBRsynthesized images generated from color and depth images that were subject to different types of noise. The experimental results and comparisons with existing 2D and 3D-IQA metrics demonstrate that the proposed metric is accurate and reliable in assessing the quality of DIBR-synthesized 3D images. | 2021-08-23T20:42:10.011Z | 2021-01-01T00:00:00.000 | {
"year": 2021,
"sha1": "a66bf9bfb39aa5c3256410893bce82a85f68c63a",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/11/6/2666/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "4fe7be654f2aee074ab97878686269bd04d29d5f",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": []
} |
56339946 | pes2o/s2orc | v3-fos-license | Effective patient safety education for novice RNs: A systematic review
Background and objective: The need is great for identifying effective evidence-based strategies that focus on increasing novice RN confidence for the application of skills used to care for patients safely. The purpose of this systematic review is to explore effective continuing education strategies that target novice RNs’ professional development, enhance clinical confidence, and focus on patient safety. Methods: The EBSCOhost database search was set to find recently published papers within the last ten years, sorted by relevance from January 2007 through August 2017. This search yielded twelve studies deemed eligible for inclusion by the databases CINAHL, Communication & Mass Media Complete, Education Full Text (H.W. Wilson), Health & Wellness Resource Center, and Science Direct. Commonalities and distinguishing features among the strategies are examined. Results: This systematic review identified 12 articles that describe effective training strategies aimed at improving novice RNs’ clinical practice confidence and skill. A thematic analysis of the data was used to systematically gain knowledge about strategies used to educate novice RNs working in the hospital setting. The majority of strategies employed a number of different types of simulation and reported varying degrees of success for improving novice RN ability to care for patients safely. Simulation, virtual reality, preceptored clinical experiences, and interdisciplinary experiences were found to be effective education strategies enhancing novice RN’s skill for providing safe care. Didactic instruction had positive results, but was not as effective as simulation for novice RNs learning safe patient care. Finally, written instruction was not as effective as simulation, and hard copy supplements provided no added value to novice RNs learning safe patient care. Conclusions: Findings from this review are foundational to address calls from the Institute of Medicine (IOM) and the National League for Nursing (NLN) to reform and support post-graduate nursing education. The development of novel education and training targeting novice RNs in the hospital setting is essential, but more research is needed to enhance safe patient care.
INTRODUCTION
In the healthcare setting, an adverse event refers to an injury resulting from medical intervention. [1] Over 33 million individuals are hospitalized annually in the United States (U.S.). [2] Of those hospitalized, one third experience at least one adverse medical event. Among hospitalized individuals that experience an adverse medical event, half will experience more than one event. [1] Each year approximately 210,000-440,000, or 44%, of hospitalized patients experience one or more preventable medical errors resulting in harm that contributed to their death. [3] Patient safety is the absence of preventable harm during the process of health care. [4] Yet despite deliberate attempts to improve quality of patient care, patient safety is still at the forefront of concerns for healthcare delivery. [5] The Institute of Medicine (IOM) reported on the dire problem of patient safety in healthcare nearly two decades ago, [6] nonetheless patient harm remains a significant concern for hospitals. [3] Nurses are at the forefront of health care and essential for excellence in patient care. [7][8][9] More than 10% of the acute care nursing workforce are newly licensed, novice registered nurses (RN) who have a notably higher risk of patient harm during their practice. [10] Novice RNs, those with less than 2 years of practice experience have formal education but limited practical experience. Evidence suggests a relationship between patient safety and number of years of practice. [11][12][13] Patient mortality is highest among RNs with 2 or fewer years of experience. [14] Background Confidence is one of the most influential motivators of behavior a novice RN can possess. Albert Bandura, social cognitive psychologist, believed that self-confidence and performance were inextricably related, and that experiences play a big part in confidence development. [15] Evidence supporting Bandura's research suggests that novice RNs' perception of ability, or self-confidence, is necessary for improving clinical performance. Likewise, the risk of preventable adverse events decreases the closer a novice RN moves towards competent clinical performance. [16,17] Nursing is a practice-based profession with clinical training a critical part of nursing education. Nursing education programs each contain their own set of curricular requirements. RNs are prepared for a wide range of roles and responsibilities caring for patients in a variety of institutional settings (Ballard, 2003). Novice RNs have this formal education but limited practical experience [11][12][13] often with insufficient exposure to a diverse set of clinical situations. [18][19][20][21][22][23] Novice RNs often lack confidence in their skills and find the transition from the role of student to working professional RN particularly challenging. [24] Confidence is a common theme in studies that have examined novice RNs' ability to learn skills required for safe patient care. [25][26][27][28] However, few studies focus on novice RN confidence and training strategies in the hospital setting. Investigations of hospital training strategies are needed to build the knowledge base for improving novice RN confidence for the application of clinical skills used towards safe patient care.
In the hospital setting, novice RNs have increased error rates and severity of error for the first six years of their practice. The risk of error falls by 10.9% and serious error decreases by 18.5% with each additional on-the-job year of experience up to six years at which time the risk is diminished to that of experienced RNs with six or more years of hospital practice. [29] Investigators report descriptions of novice RNs as individuals requiring extra advice and guidance for clinical procedure and technical skill deficits. [18,29] Furthermore, the skill mix of novice vs. experienced RNs during patient care can be a concern. [14,30] In the hospital setting, a staffing mix of 50% novice RNs can significantly degrade quality of care as compared to hospital units that have 20% novice RNs. [30] Hickey & Conner (2013) reported a much lower cut-off point for unsafe staffing mix where 20% novice RN staffing significantly increases the risk of harm to patients. Therefore, in today's fast-paced, complex clinical environment, training strategies in acute care are needed to facilitate the transition of newly graduated, novice RNs into practice to minimize error and patient harm. [11][12][13] The Joint Commission supports the use of planned, comprehensive training periods for newly graduated, novice RNs so that sufficient knowledge and skill may be acquired to deliver of safe, quality care that meet professional standards of practice. [31] The Robert Wood Johnson Foundation at the Institute of Medicine published recommendations for transforming healthcare that include achieving higher levels of RN continuing education training. [32] Numerous sets of training competencies designed to help bridge the gap from novice to practicing RN are available from a variety of sources. [33][34][35][36][37][38][39] However, continuing education is often fragmented and underdeveloped. [32] To complicate matters, research suggests that novice RNs solve ill-structured problems differently than experienced nurses. The design and support for training novice RNs is therefore more challenging than training experienced RNs. [40] Strategies that are meant to help transition novice RNs into real world practice need to be grounded in evidencebased education where training is integrated with best practice techniques. [41] Although patient safety in healthcare is often hampered by a variety of cultures and organizational changes, efforts to embed patient safety into continuing education must continue. [34,42] The identification of effective evidence-based strategies is central for supporting novice RNs in the profession, for example efforts are underway to replace passive learning experiences with experiential approaches. [43][44][45][46] The safety of patients depends on research driven, dedicated patient safety content being integrated into health professional curricula and training programs. [34,42] Previous systematic reviews addressing nurse education strategies with a focus on patient safety and confidence have discussed undergraduate RN education, [47] specific training strategies for experienced RNs, [48] and training programs designed for novice RNs such as nurse residency, internships, or orientation programs. [25] The purpose of this ystematic review was to synthesize findings from qualitative, quantitative, and mixed method investigations that examined effective continuing education and training strategies for improving patient safety in hospitals while enhancing confidence among novice RNs.
METHODS
A Boolean search using EBSCOhost search engine was applied for this review. The search was set to show recently published papers within the last ten years, sorted by relevance from January 2007 through August 2017, while searching for the most recent patient-safety focused research-based novice RN training. The author of this manuscript independently conducted the search and selection process. The articles of the initial search were critically reviewed for relevant data by the title and abstract. These articles were indexed as eligible, potentially eligible, and not eligible. Prospective eligible and potentially eligible full-text reports were reviewed for inclusion through predefined criteria and study quality indicators. Data were extracted, synthesized, summarized, and reported based on the reporting approach, the 'Preferred Reporting Items for Systematic Reviews and Meta-Analyses' (PRISMA). [49] Figure 1 summarizes the study selection process, including identification, screening, eligibility, and inclusion criteria. [50] 2.1 Eligibility criteria For this review, post-graduation education development programs refer to education and training programs in a hospital setting, and are synonymous. The terms novice nurse, new nurse, graduate nurse (GN) and newly licensed registered nurse (NLRN) are characterized in the literature as a newly graduated, novice RNs with up to 3 years of experience after graduation. To be included in this review, novice RNs must have experienced some type of continuing education or learning experiences and have hospital based employment. Reports on methods or strategies of education or training provided to undergraduate nursing students or primarily with other allied health learners are excluded. Education must have been delivered, facilitated, or monitored by an experienced educator or clinical team. This review includes peerreviewed, primary literature of original research published in English.
Search process
The following databases were used in this search: CINAHL, Communication & Mass Media Complete, Education Full Text (H.W. Wilson), Health & Wellness Resource Center, and Science Direct. Key terms included, ("teach method" OR "education strateg*" or "education method" OR "teach strateg*") AND ("self-efficacy" OR "confidence") AND ("novice nurs*" OR "new graduate nurs*" OR "newly licensed registered nurs*") AND ("patient safety" or "safe patient care").
Information sources and study selection
The initial EBSCOhost search yielded a total of 141 articles. Seven articles were removed as being duplicates. The titles of these remaining 134 potentially relevant publications were screened for eligibility. If eligibility could not be determined by the title, the article was further assessed by reading the abstract. If the eligibility could not be established by the content of the abstract, the entire article was accessed and reviewed for fit. One-hundred and seven records were excluded for not matching the purpose of the study. The remaining full text articles were assessed for eligibility. Fifteen articles were at this point excluded with six studies being undergraduate studies, one with indeterminate documentation of nursing experience, and seven not primary studies. Thus, twelve studies were deemed eligible for inclusion. Seven studies were conducted in the U.S. and five in other countries (see Figure 1). To ensure quality of primary studies with diverse designs within this mixed method systematic review, the evaluation tool "Mixed Methods Appraisal Tool" (MMAT) -Version 2011 was used to assess the eligible qualitative, quantitative, and mixed methods studies for study quality indicators. [51] 3. RESULTS
Education strategies
The Results section presents an analysis of education strategies, study design, theoretical framework and key outcomes (confidence, competence).
According to the literature there are multiple strategies available to design education programs for novice RNs focused on patient safety. The main education strategies, in order of high to lower frequency of use, were: (1) simulation-based learning, [26, 52-54, 56-58, 60-62] (2) didactic instruction and preceptored clinical experiences, [26,59] and (3) use of multi-media electronic technologies. [60,61] Case scenarios were reported as adjunct strategies and were used in a majority of studies. [26,[52][53][54][55][56][57][58][60][61][62] Student feedback was reported in most of the strategies as being provided through human debriefing experiences, [52-54, 56-59, 62] through real time computerized feedback, [60] and through real time live human feedback. [26,59] It is notable that when several strategies were applied together, simulation-based learning was most frequently used as the base strategy. For example, simulation in this review frequently included case study and debriefing. [52-54, 56-58, 62] In addition, simulation was found to have been combined with other strategies such as lecture, [26,58] skills stations, [26] clinical practice with preceptors [26] and self-directed learning packages. [58] Of the simulation strategies, multiple simulator tools were used. The most common simulator tool was the manikin, [26, 52-54, 56-58, 62] including both high and low fidelity simulator capabilities. The second most frequently used simulator tool was multi-media electronic technologies. [60,61] These can be separated into different types of electronic technologies: a virtual reality task trainer [60] and an audio visual case presentation. [61] High-fidelity simulator manikins are programmed to have a large degree of precision for replicating human clinical reaction. [26, 52-54, 56, 57] Low-fidelity manikins copy or reproduce physical findings but do not interact with the learner. [58,62] Simulations without manikins included multi-media technologies where simulation recreates a partial environment for education training where one or more targeted tasks are performed. [60,61] Physical task training tools combined with computer technology replaced real clinical procedure experiences with guided, direct participation. [60,61] Education without any type of simulation included investigation for effectiveness of collaborative interdisciplinary practice experiences, [55] and the effectiveness of classes and clinical experiences. [59] Overall, most studies had small sample sizes.
Study design
Research designs varied within the twelve studies reviewed. Study sample sizes ranged from 10-514 individual novice RN participants. Of these, three were experimental designs, [57,58,60] including one prospective experimental design, [57] one interventional study, [58] and one experimental pretest posttest, random assignment. [60] There were two mixed methods designs, including one explanatory se-quential mixed methods design [55] and one experimental, retrospective design. [62] Four investigations were quasiexperimental, including one prospective study design using pre-test, intervention, and post-test with a non-synchronized, non-equivalent control group, [61] two pretest-posttest cohort designs, [53,56] one pretest-posttest single group design, [52] and one longitudinal, non-randomized study. [26] There were two qualitative designs, including one exploratory, semistructured individual interview design, [54] and one individual, unstructured, open-ended interview design. [59] 3.3 Theoretical framework Theoretical frameworks and conceptual models serve as foundations for investigations and are used to describe the phenomenon under investigation. Theoretical foundations provide a systematic method for articulating an idea (theory) and the method by which that idea is turned into action (practice). Theory is therefore an important element of the systematic review as it assists the reader with understanding the interpretation of investigational outcomes. In essence, the inclusion of theory in the design of an investigation works as a guide when implementing interventions for clinical practice. The absence of a theoretical framework results in a lack of awareness of the underlying concepts and hinders data extraction and the methodological criteria used to interpret findings. Theory is foundational to scientific study; reports should clearly describe the logic of how the theory operates in the study. [63] About one-half of the studies reported methods guided by a theoretical framework. Beyea et al. (2010), [26] Jung, et al. (2017), [53] Kaddoura (2010), [54] Roche et al. (2013), [57] Spiva et al. (2013), [59] and Tsai et al. (2008) [60] cited no specific theory to describe the underlying concepts in their studies. Pfaff et al. (2014) [55] described the conceptual basis for the interventional strategy as being interprofessional collaboration. Yoo & Park (2014) [61] reported the intervention to be based on a constructivist framework. Fadale et al. (2014) [52] and Shepherd et al. (2007) [58] reported using Bandura's Self-efficacy theory. Fadale et al. (2014) [52] measured changes in self-efficacy and Shepherd used the theory to describe the impact of learning interventions. Rhodes et al. (2016) [56] found Dewey's experiential learning a good fit and used this theory as a foundation for examining nurse multidisciplinary simulation. Young & Burke (2010) [62] used Rogers's theoretical framework to guide the investigation for the exploration of students' selfactualization experiences with simulation. Rhodes, et al. (2016) [56] was the only investigator to have described a theoretical model for debriefing. In this study, Rudolph's advocacy/inquiry provided guidance for creating a psychologi-cally safe yet constructive learning environment.
Key outcomes
Outcomes in all studies were investigated by examining the perceptions of novice RNs. Findings were determined using a wide range of data collection and analysis techniques. For the most part, instruments were documented as valid and reliable, with the exception of the SSCS tool [26] and the ACES validation form [62] where the design of the instruments was described but presented without clear documentation of validation or reliability. In some studies, confidence was measured and discussed in terms of self-efficacy. Others discussed the improved confidence of novice RNs for conducting specific clinically-important skills. Competence improved in novice RNs across all studies.
Confidence
Thematic analysis of the data resulted in finding conceptual patterns among the sources. Although training strategies varied, confidence was found to be a strong theme in ten of the twelve studies. Notably, the investigation reported by Pfaff et al. (2014) [55] on interprofessional collaboration had particularly interesting findings that suggest certain services may help facilitate novice RN confidence development. Here, acute care RNs experiencing interprofessional collaboration developed higher confidence levels compared to those in community care and long term care employment.
Other factors related to enhanced confidence were found to be the novice RNs' proximity to the educator, accessibility to the educator, proximity to manager, accessibility of manager, number of team strategies, number of different disciplines worked with daily, and satisfaction with the team. [55] Interview data corroborated the participants' reported increases of confidence in supportive relationships, respect, knowledge and interprofessional collaborative experience. [55] Confidence in the novice RNs' ability to think critically in terms of priority setting, decision making, communication, and reporting improved during simulated experiences. [53] Elsewhere, confidence levels improved after high priority education was delivered in classes and though clinical rotations. Confidence also improved over time and within different themes of learning such as having experience, learning to manage time, and learning to communicate. [59] Several simulation studies also examined confidence. [26,53,56,[58][59][60] Novice RN confidence improved over time in simulation experiences [26,59] and in the clinical environment up to 18 months after the simulation training. [56] Analysis demonstrated positive feedback in confidence with mastering skills, [60] and gains in knowledge were also associated with the improvement of confidence. [58] Others found simulation experiences to boost confidence for using staffing resources. [62] Confidence improved with practice in solving clinical problems. In case-based learning (CBL), confidence levels improved for novice RNs actively engaged in problem-solving during the viewing of video re-enactment simulated casebased scenarios. [61] This study included a non-equivalent control group. The education was delivered by two different specialty groups of professionals during two different periods of time. The traditional lecture was delivered by the Quality Management Department to novice RNs in 2009 whereas the professional video case reenactment was created and delivered in 2010 by the investigators and a case-based learning education consultant. [61]
Competence
Confidence was a second thematic pattern found within the data. Of the twelve studies included in this review, all twelve found an improvement in competence levels among novice RNs. [26,[52][53][54][55][56][57][58][59][60][61][62] Competency in novice RNs was described as the ability to meet entry-level expectations of the nursing profession. [26] More frequently, however, competency was defined in terms of specific clinical skill or skills, such as in communication, [53,55,57,59,62] assessment skills [57][58][59] critical thinking skills, [53,54,61,62] prioritization skills, [62] advanced nursing skills, [52,60] modest, steady increases in knowledge, [56] and in overall clinical performance. [26,54] As noted, competence was measured in multiple skill types. For example, Pfaff et al. (2014) found that communication skills improved for participants who engaged in interprofessional educational opportunities, with qualitative data supporting the quantitative findings for improved communication skills. [55] Communication also improved during novice RNs orientation experiences. Professional growth was found to improve with time as novice RNs enhance their communication skills. [59] While exploring novice RN experiences of simulation, research findings suggest that both clinical and simulation experiences improve novice RN communication while fostering critical thinking skills, [62] and increasing knowledge. [56] Interestingly, communication performance data demonstrated no statistical significance between the simulation intervention group and the written case studies control group; [57] however, the intervention group (simulation) performed better on safety behaviors than the control group (written case studies). This suggests that practicing scenarios with hands-on experiences is more effective than discussion of the scenarios without hands-on practice.
In an investigation by Shepherd et al. (2007), [58] three learning learning interventions were analyzed for clinical reactions of novice RNs: a self-directed learning package, a self-directed learning package with two scenario-based didactic lectures using PowerPoint workshops, and a self-directed learning package with two low-fidelity manikin simulation sessions. Novice RNs' patient assessment skills improved significantly during simulation interventions as compared to control groups using scenario-based didactic lecture with PowerPoint. Novice RNs' patient assessment skills also improved significantly during simulation interventions as compared to the control groups using the self-directed learning package. [58] Critical thinking as a competency was also measured. Beyea (2010) found critical thinking proficiency to improve during manikin simulations as novice RNs learned to "think on the fly". [26] Young & Burke (2010) also found critical thinking to be enhanced using simulation during the Advanced Clinical Education and Simulation (ACES) course. In a mixed method study, novice RNs were surveyed on their course experiences. Novice RN participants reported that the manikin simulation course fostered their critical thinking competency during their transition into skilled and safe practicing RNs. [62] Other investigations reporting improved competency discussed enhanced prioritization skills, [62] advanced nursing skills, [52,60] and overall clinical performance. [26,54] Although novice RNs perceived the simulation portion of the Advanced Clinical Education and Simulation (ACES) course to have improved their prioritization skills, a larger majority thought that the post-simulation debriefing sessions were more valuable for learning to prioritize. In an investigation by Fadale et al. (2014), advanced nursing skills were found to improve during port-a-cath simulations using electronic multi-media. Self-directed learning during virtual reality simulations was found to be an effective process for improving advanced skills with knowledge gains and improved clinical procedure skills. [60] Simulation also proved to be helpful for learning vasopressor titration skills, an advanced nursing skill. [52] These are especially useful findings as they may aid in improving patient safety training as simulation experiences were found to effectively prepare novice RNs to care for very sick patients. [26,54] Figure 2 provides a representation of selected studies' education strategy with findings for confidence and competence. Improvements to novice RNs' confidence were found in seven out of eight investigations that included manikin simulation. The Discussion section herein presents key evidence from each of the education strategies, and discusses how confidence and competence play a role in each of the findings.
Figure 2. Illustration of commonalities and distinguishing features among education strategies reported to have improved confidence and competence in novice RNs
Novice RNs require specialized teaching strategies as they are at increased risk of medical error resulting in patient harm. Effective patient safety education for novice RNs working in hospitals is an understudied topic. The implications for the findings in this review could affect the direction of nurse training investments. Simulation in healthcare seems to be the overall favorite as an interactive technique replacing real experiences with guided experiences. [64] Depicting substantial aspects of patient care, simulated nursing experiences are one option among numerous strategies available to supplement education and training required to increase novice RN confidence and competence needed for safe patient care. [26, 52-54, 57, 58, 62] Simulation has been widely accepted and used in academia for the clinical education of undergraduate nursing students. [47,65] Few studies, however, have focused on novice RN hospital-based simulation training for continuing education. [53,56,62,66] Previous literature indicates that simulation could increase safety awareness by widening the scope of simulated experiences to include potential errors and strategies for resolution. [57,67] Moreover, a recent meta-analysis affirms the findings in this systematic review for simulation having a positive effect as a continuing education strategy with RNs, improving both knowledge and performance outcomes. [68] On a basic level, simulation is the interactive technique for enhancing the reaction of participants to a high risk skill. [64] It can be a substitution for the real thing, such as using a manikin or a human being for simulating a healthcare scenario. Although simulation does not require a manikin, training may be enhanced by its use. Simulation in any form must match learners' needs. [69] Scenarios using simulation methods can range from focused clinical education to mass trauma scenarios. Simulation, however, should be seen as an exercise not necessarily dependent how well the simulation matches the realism of the clinical situation. Instead, the objective should be for the trainee and the trainer to skillfully utilize simulation to gain knowledge and experience. [69] Simulated scenarios can be designed to be cost-effective strategies for providing continuing education. [69]
Low and high-fidelity manikin simulation
A key advantage to low fidelity simulation is that it is costeffective and portable. Low fidelity simulators can be mobilized to facilitate learning in a contextualized, real-world setting. Accessibility to training could improve training compliance and reduce time spent away from clinical care. Scenarios, when repeated frequently, may serve to reinforce training by decreasing the deterioration of learned concepts. Moreover, regularly scheduled simulations could help with the comfort level of managing critical patient events, improving novice RN ability for early recognition of patient deterioration and crisis situation interventions with increased Published by Sciedu Press frequency of training events. A disadvantage is that that low fidelity simulators do not respond to the actions of the novice RN because there is no life-like feedback. Thus, novice RNs learning in low fidelity situations must be given information verbally about the scenario with their patient during their care. [58,62] High fidelity simulation, on the other hand, allows for the realism of more complex patient care scenarios. High fidelity simulators provide immediate, life-like feedback to the learner. Confidence and competence were each found to be important results in simulation strategies. These improvements are likely due to the gains in experience novice RNs receive by role-playing and practicing critical clinical skills required for safe patient care. Through the fidelity of the simulation, just-in-time feedback assists with supporting health care concepts learned in the classroom. Moreover, high fidelity simulations seem to help with improving novice RN ability for early recognition of patient deterioration and management of crisis situation interventions to prevent failure to rescue situations.
However, high-fidelity simulation has its drawbacks. High fidelity simulators are expensive and less mobile than low fidelity simulators. Because of the realistic, life-like responses, high fidelity simulation can be intimidating for the novice RN learner. In addition, high fidelity simulators require more extensive educator training. Even with experienced nurse trainers, the complexity of a high fidelity simulator can be daunting as educators learn to use the computerized programing of the simulator. Feedback is essential to novice RN learning, and is best provided by an experienced and well trained simulation educator. [26, 52-54, 56, 57]
Multi-media electronic technologies
Virtual reality and audio-visual simulations provide costeffective user education that can be practiced over and over again without the need for an onsite trainer. This method can employ case scenarios for training purposes, or can demonstrate proper technique of a specific clinical skill. The downside is that the education must be created in advance using technology that may be challenging to learn, or costprohibitive as an initial purchase. Another limitation is that there is no learner feedback unless a trainer is onsite to interact with the students. [60,61] Simulation without learner feedback is likely a significant limitation, as necessary information required for effective learning is absent, potentially affecting learner outcomes.
Interdisciplinary experiences
Interdisciplinary experiences provide well rounded learning in the clinical setting. Connecting novice RNs with formal leaders and members of the interdisciplinary team can increase novice RN confidence in team communication. A limitation is that the experiences, when practiced on live hospitalized persons, produced safety risks. One way to minimize risk is to simulate interdisciplinary scenarios; however, simulated interdisciplinary scenarios require considerable planning and could remove some clinicians away from patient care. [55] 4.4 Didactic learning with preceptored experiences Preceptored experiences also served as non-traditional methods for training novice RNs. [59] Preceptored education strategies have similar outcomes to previous findings that "master apprenticeships" are valuable for the clinical training of novice RNs. [69] However, there are too few preceptors in the workplace, placing undue stress on the few that remain, further exacerbating risks to patient safety. Moreover, novice RNs can sometimes cause patient harm on patients by practicing on patients before they are safe care clinicians. Findings from this literature review, on the other hand, note that the use of simulated clinical experiences ensure that those initial novice RN high risk or safety focused experiences can be instead practiced in a way that cannot harm real patients.
For example, coupled with preceptored clinical experiences and individual training sessions, the effectiveness of scheduled classes for novice RN education can enhance overall clinical confidence and skill competence. [59] Interestingly, there is no added value with the addition of self-directed learning packets with didactic instruction for the improvement of competence and confidence while combined with simulated learning experiences. [58] Didactic learning is an efficient method for quickly dispersing small bits of information in short bursts of time such as for providing content in orientation. Preceptored experiences are an excellent resource for novice RNs, providing some protection from harm for novice RNs and the patient during the novice RNs' first weeks and months of patient care. Individual training is helpful for skills such as IV and port-a-cath insertions, pump programming, Foley catheterizations, or chest tube maintenance, where during school little to no practice was provided.
Implications for practice
Americans are older, sicker, and more expensive to care for than at any time in history. Almost 20 years after the Institute of Medicine reported on hospital safety, "To Err is Human: Building a Safer Health System", medical error is the third leading cause of death behind breast cancer, AIDS, an motor vehicle crashes. [70] Previous efforts to improved safety in hospitals has stalled. [71] In hospitals, novice RNs have the greatest risk for medical error and severity of error. Novice RNs learn differently than experienced RNs because they do not have experience to draw from. This makes their education that much more challenging. What remains unclear is how to best educate patient safety to the novice RN to minimize or prevent medical error.
Simulation strategies that work well for the transition from novice to practicing RN are an important goal for patient safety initiatives in novice RN education. Simulation has proven a positive alternative to traditional continuing education in novice RNs. The most successful combinations of education patterns, those that have not yet been recognized as proven strategies, might be uncovered as important features for professional development in the novice RN transition to competent RN. Moreover, novice RNs experiencing positive transitions to safe, practicing RN create happier novice RNs, potentially raising retention rates for this group. [72] In addition, overall improved quality of hospital care lowers the risk of medical error and has the potential for producing happier patients as evidenced with improved patient satisfaction scores. [73] Today's healthcare setting care is complex. Caring for patients safely is more challenging than ever before. In this review, multiple examples of learning strategies have helped describe effective training for post-graduate novice RNs, including manikin simulation, multi-media electronic technologies, interdisciplinary experiences, and didactic learning with preceptored experiences. Simulation as a non-traditional learning approach included low and high fidelity manikins, and multi-media electronic technology simulation. It appears that simulation is the most adaptable of the three dominant strategies reviewed. Simulation can be used independently or combined with other strategies; in addition, simulation seems to provide tremendous location flexibility and cost efficiency, and can be molded around objectives designed to fit the need of the novice RN.
In the hospital, both high and low fidelity simulation can help with improving novice RN ability for early recognition of patient deterioration and management of crisis situation interventions, and each strategy has its drawbacks. While high fidelity experiences provide immediate, life-like feedback to the learner, they are expensive and can require a laboratory setting with highly trained clinical educators. Low-fidelity simulation can be used to train novice RNs in a variety of settings for multiple patient safety scenarios. Simulation sessions where the novice RNs needs to have hands-on practical exercise is possible with both with lab and in-situ novice RN training exercises. For example, when a patient is transferred to a different level of care, multiple patient safety assessments need to be completed. Scenarios that include safety checks such as physical assessment, medication verification, IV assessment, IV pump and other equipment assessments can be completed by low-fidelity manikin simulation in either setting, lab or in-situ. Especially in the clinical setting, simulations are an effective strategy in preparing novice RNs for the unexpected clinical scenario. [58] Simulation was found to be the overwhelming predictor for novice RN gains in confidence and practice competence. Based on the articles analyzed for this systematic review, it seems clear that simulation is an effective choice for training novice RNs as they transition into confident, competent RNs who practice safely. Nurse educators with experience in acute care and simulation training techniques can add to novice RNs' patient care experiences by using simulation in the acute care setting. Simulation strategies in healthcare are a useful tool for addressing error and improving teamwork and communication. Novice RNs are able to improve their confidence and competency when provided simulated training, and view simulated scenarios as training experiences capable of producing a change in their behavior for the acquisition of new skills.
The use of effective teaching strategies has been found to be especially important for studying complex concepts such as patient safety in clinical education. Simulation for training novice RNs was demonstrated as a powerful and effective strategy, and each of the nine investigations that used simulation in some form experienced an improvement in competence, confidence, or both during novice RN training. Simulation has shown to be beneficial to novice RNs learning to care for patient safely, and simulation research with novice RNs is one unique course of action that might help prevent future medical error tragedies among novice RNs.
Recommendations for further research
Diverse forms of data sources and analysis methods are important in research design because multiple, varied data sources and perspectives help strengthen the validity and credibility of the systematic analysis. Therefore, suggestions for further research include a more comprehensive synthesis of data in the form of a mixed method systematic review using a team approach. Moreover, increasingly complex hospital environments, greater numbers of patients with multiple, chronic health care problems, limited preparation time, and scarce numbers of clinical sites result in novice RNs receiving limited hands-on opportunities-the very experiences required to prepare them to function as confident, competent, safe RNs in today's hospital setting. The findings from this review support Benner's understanding of novice RN professional development from novice to expert nurse. [74] Nurses may be better able to conceptualize and therefore identify appropriate courses of actions through repeated practice, including simulated experiences. Thus, education that is designed fill gaps in the preparation and readiness of novice RNs will most likely be fulfilled through the development of simulation interventions to be used in the hospital setting. Future research should additionally focus on the prevention of "failure to rescue" events by novice RNs. By gaining confidence and competence particularly in caring for lower volume emergent situations such as a patient experiencing clinical deterioration, novice RNs will greatly reduce their risk of medical error and hospitals will achieve improved safety for their patient population.
Strengths and limitations
This review examines strategies for teaching novice RNs learning to care for patients safely. A strength of the review is the structured search of available literature within peerreviewed primary studies for strategies used to train novice RNs. The review outlines strategies for supporting novice RNs, and describes the less successful strategies. Educators are able to take these results and use them for improving the development of unique, effective post-graduate novice RN training. The limitation of the ten-year search range and by limiting the sample to novice RNs in the hospital setting many have reduced evidence found on patient safety continuing education strategies. Also, the data field search strategy using EBSCOhost resulted in capturing a limited number of databases. In the future a broader set of keywords might help increase the return on the number of databases, therefore increasing the number and variety of potentially eligible articles. In addition, the inclusion of a second search engine for exploring biomedical literature such as PubMed, [75] and searching databases individually, could yield an increase in search results. [76] Another limitation is that approximately half of the studies were completed outside of the United States increasing the divergence of clinical practice and settings. This variability of locations may have contributed to potential differences in learner perspectives. Across studies, education strategy was also diverse and varied widely both in the base and adjunct education strategies thereby limiting generalizability of the results. In addition, the assessment of novice RNs clinical ability differed and in some cases was reported as challenging. A higher quality assessment of novice RN clinical ability could help support methods for evaluating novice RN learning achievements and serve as a starting point for a greater focus on post-graduate education development. Moreover, outcomes for each strategy were measured using different instruments and direct observation was described as not always accurate due to the techniques used in assessment and potential inter-rater differences. Novice RNs rating their outcome perceptions each have their own human history of clinical and life experiences and so perceptions when rating learning experiences are likely to vary as well. When data collection methods vary, outcomes comparisons across studies are challenging.
To reduce publication selection bias this systematic review encompasses qualitative, quantitative, and mixed methods research. A qualitative synthesis, this report does not include meta-analysis. The omission of a meta-analysis might be considered a limitation by some readers. However, the outcomes of the selected studies were derived by both quantitative and qualitative data. Qualitative methods are based on observation and do not use figures and numbers. In addition, the strategies reviewed covered multiple educational modalities. Thus, a mathematic summary meta-analysis of the outcomes using the available statistical results would provide little insight for readers. [77]
CONCLUSION
Continued evidence collection and the assessment surrounding safe patient care education facilitates a better understanding of how novice RNs can improve their practice confidence and effectively learn to keep patients safe. This literature review investigated strategies used to train novice RNs for safe patient care and spotlights the paucity of evidence focused on confidence and safety in hospital-based continuing education. A highlight of this review reveals several interesting findings. Simulation, far outnumbering other reported strategies, seems to be gaining acceptance as an effective option that can be used to increase novice RN confidence for the application of clinical skills used to care for patients safely. The pathway to get there, according to the literature, is to ensure simulated experiences include potential errors and strategies for resolving those errors. Combined with an increased focus on patient safety, hands-on simulated clinical experiences appear to positively affect novice RNs' abilities to obtain relevant clinical experience and, by making connections through repeated practice, develop higher levels of thinking. Serving as a guide for future research in the exploration for novel methods of delivering simulation training, this review supports the call by the Institute of Medicine, [32] the Agency for Healthcare Research and Quality, [78] and the National League for Nurses [39] to develop, test, and evaluate strategies with a focus on simulation for improving the safe delivery of health care. The overarching goal is to create a foundational knowledge base from which to draw from in planning the investigation of simulation for behavioral changes of novice RN at the patient bedside, and later for the impact of learning on patient outcomes. | 2019-03-17T13:08:46.799Z | 2017-11-20T00:00:00.000 | {
"year": 2017,
"sha1": "88685eec95821a0dcdf9edd239c38111e82c17ed",
"oa_license": null,
"oa_url": "http://www.sciedupress.com/journal/index.php/jnep/article/download/12141/7750",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "a2417dee50ae4965ac077a3c1f2cc9d300755971",
"s2fieldsofstudy": [
"Education",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212720370 | pes2o/s2orc | v3-fos-license | Pattern of Presentation and Socioeconomic Distribution of Patients Presenting With Impacted Third Molar at Lagos State University Teaching Hospital Nigeria
Tooth impaction occurs when a tooth is prevented from erupting into the oral cavity which is their functional position within an expected time. The mandibular third molars are the most frequently impacted tooth in the oral cavity. Some studies have reported a higher frequency in females than males. A search through the literature shows that no study has been done to ascertain the socioeconomic status of patients presenting with third molar impaction in Nigeria. This study is therefore aimed at determining the pattern of presentation and socioeconomic distribution of patients presenting at the Lagos state University Teaching Hospital for extraction of impacted third molar. 337 patients with impacted third molar who had indication for extraction and had met the inclusion criteria were included in the study after obtaining a signed consent. The positions of impacted third molar teeth on the panoramic radiographs were documented. Associated disorders and every other indication for surgical extraction was documented. Patients were stratified into three socioeconomic classes using the modified version of questionnaire by chukwuonye et al and Balogun et al. Data were analyzed using a Pearson chi-square test, performed using the Statistical Package for the Social Sciences (version 20; SPSS, Inc, Chicago, IL). A total of 337 patients met the inclusion criteria within the two year study period. The sample consisted of 129(40.9%) male and 208(59.1%) female. The difference between male and female was statistically significant (P= 0.04). Most third molar extraction that was recorded in the third decade of life where 172 (51.0%) cases were recorded. The most prevalent type of impaction recorded was the mesio-angular position (49.9%). The most common pathology associated with impacted third molars was pericoronitis which was recorded in 135cases out of total of 337 patients. Majority of the patients who had third molar extraction belong to the middle socioeconomic class (42.7%) followed by patients in high socio-economic class (40.9%).Third molar impaction was commoner in females than males with mesioangular impaction being the most common type. It was most commonly found in the third decade of life and pericoronitis was the most common associated pathology. Majority of the studied subjects belong to the middle socioeconomic class.
INTRODUCTION
Tooth impaction occurs when a tooth is prevented from erupting into the oral cavity which is their functional position within an expected time [1]. The mandibular third molars are the most frequently impacted tooth in the oral cavity [2]. Third molars eruption is variable, ranging from age 18 to 24 years [3]. Lack of space or physical barrier among other factors may be the cause of impaction [1]. Furthermore racial variation in facial growth, jaw and teeth size, nature of diet, extent of generalized tooth attrition, degree of use of masticatory apparatus and genetic inheritance are the crucial factors which determines the eruption pattern, impaction status and the incidence of agenesis of third molars [4]. Most studies have reported no gender differences in third molar impaction [5,6]. However, some studies have reported a higher frequency in females than males [7]. Mandibular third molar impaction has been classified, based on the level of impaction, the angulations of the third molars, and the relationship to the anterior border of the ramus of the mandible [8]. Winters and Pell [9] and Gregory [10] classifications are most commonly used to classify impacted mandibular third molars.
Third molar impaction has been directly or indirectly associated with numerous disorders in the mouth, jaw and facial regions. Therefore, their extraction is one of the most common surgical procedures for Oral and Maxillofacial surgeons [11]. This may be a simple forceps extraction or more complex surgical procedure. Surgical methods vary among surgeons depending upon their training and experience [12].
A search through the literature shows that no study has been done to ascertain the socioeconomic status of patients presenting with third molar impaction in Nigeria. Third molar impaction and other forms of malocclusion are common disorders in countries with a high standard of living [13,14]. However, there is no consensus on various socioeconomic classifications in Nigeria, because of the unstructured nature of the society [15]. This study is therefore aimed at determining the presentation pattern and socioeconomic distribution of patients presenting at the Lagos state University Teaching Hospital for extraction of impacted third molar.
METHODOLOGY
A prospective study conducted in the Department of Oral and maxillofacial Surgery. Lagos state university teaching hospital from june 2017 to june 2019 on patients who had indication for surgical removal of impacted third molars. This study is an extract of a larger study which was conducted in the same department after securing clearance from the ethical committee.
A signed consent was obtained after giving necessary information to the patients regarding the study and the surgical procedure. The age, sex, number of impacted third molar was obtained through history taking, clinical examination and radiographic study. Third molar was considered impacted if it did not have functional occlusion and at the same time, its roots were fully formed.
The positions of impacted third molar teeth on the panoramic radiographs were documented. The angulation of impacted third molar was documented based on winter's classification with reference to the angle formed between the intersected longitudinal axes of the second and third molars.
The presence of related symptoms including pain, pericoronitis, lymphadenopathy and trismus was noted for every patient. Associated disorders and every other indication for surgical extraction were documented.
Exclusion criteria-Patients with any systemic diseases like diabetes, or any craniofacial anomaly or syndrome, pathological dento-alveolar condition, absence of mandibular second molar A modified version of the socioeconomic status (SES) questionnaire used by chukwuonye et al. [15] and Balogun et al. [16] was used to collect information on the subjects' highest educational attainment and level of income. This was used to classify the subjects into the 3 different socioeconomic groups. Income included all possible sources of income available to the individual. For the unemployed patients the total per capita income of the head of the family was used. Therefore, respondents were categorized into three classes, according to their reported income. Low income earners received 18,500 Naira (₦) or less per monththe minimum wage in Nigeria. The middleincome class earned ₦85,000 or less per monthabout the salary level of a newly employed Nigerian graduate. The upper income class earned more than ₦85,000 per month). Educational level was defined as the highest level of individual education completed and was categorized into four groups: No formal education; primary (1-6 years); secondary (7-12 years); and, tertiary (≤13 years). The Incomes are scored 1 to 3, with 1 denoting low income, 2 denoting Middle income and 3 denoting High income. For educational level, 0 denotes no educaton, 1 denotes primary, 2 denotes secondary and 3 denotes tertiary education.
Based on the summative score, the participants were categorised into lower, middle, or upper socioeconomic class. Score1 to 2 was for low socioeconomic group, middle (3)(4) and upper socioeconomic group (5-6).
Data were analyzed using a Pearson chi-square test, performed using the Statistical Package for the Social Sciences (version 20; SPSS, Inc, Chicago, IL). The age, gender, number of impacted third molars and classification of impaction were displayed by frequency and percentage. The level of significance was 5% (p < 0.05) and data were presented with 95% confidence intervals where applicable. All assessment was done by a single examiner to eliminate the inter-examiner errors. All data regarding patient identification and medical conditions were kept confidential.
RESULT
A total of 337 patients met the inclusion criteria within the two year study period. The sample consisted of 129(40.9%) male and 208(59.1%) female, with age ranging from 18 to 47years and mean age of 25.95±6.6years. Most third molar extraction that was recorded in the third decade of life where 172 (51.0%) cases were recorded followed by the fourth decade 100(29.7%). Females had more third molar extraction done than male with female to male ratio of 1.6:1 .The difference between male and female was statistically significant (P= 0.04) Table 1.
The distributions of the third molar extractions done on the left and the right sides do not differ significantly, The most prevalent type of impaction recorded was the mesio-angular position (49.9%) (Figure 2), followed by vertical (25.2%), horizontal (14.8%) and distoangular impactions (9.5%) Table 2.
The most common pathology associated with impacted third molars was pericoronitis which was recorded in 135cases out of total of 337 patients seen within the study period, followed by dental caries 89 cases and 41 cases of apical pathologies were also recorded. The least indication for extraction was recorded in 9 cases of orthodontic treatment. Third molars with Mesio-angular impaction (49.6%) and vertical impactions (22.2%) were more distinctly involved in pericoronitis than the other types of impaction . Table 3.
Majority of the patients who had third molar extraction belong to the middle socioeconomic class(42.7%) followed by patients in high socioeconomic class (40.9%).Only 23(16.3%)patients who were in the low socio-economic group had third molar extraction Figure 1.
DISCUSSION
The third molar is the most commonly impacted tooth in the oral cavity accounting for 98% of all impactions [17] and mandibular third molars are the most frequently impacted [18,19]. There is variation in the frequency of third molar impaction amongst different populations; and these range between 18% and 70% [20,21]. This is attributed to racial variation in facial growth, jaw and tooth size [17]. Majority of patients who had third molar extraction in this study were in the third decade. This finding is in agreement with other studies in the literature [22,23]. There was very less number of patients above the age of 40 which is in contrast to the study results reported by Khan et al. This may be due to the removal of impacted mandibular third molar at an earlier age [24].
The result of this study showed a higher frequency of third molar impaction in females than males, this difference was statistically significant .This conforms to previous reports [3,7]. The higher frequency reported in females is due to the consequence of difference between the growth of males and females. Females usually stop growing when the third molars just begin to erupt, whereas in males, the growth of the jaws continues during the time of eruption of the third molars, creating more space for third molar eruption [25].
The commonest type of impaction in this study is mesioangular. This is similar to the previous reports from Nigeria Pakistan, USA, China, Thailand and Spain [3,22,26]. However, a study among Jordanians found that vertical impaction was the most common type (61.4%) and mesioangular type was only 18.1% [27]. The reason for the difference is not clear.
The present study, like most of the similar previous works on associated pathologies with impacted third molar reported higher frequencies in mesioangular and vertical impactions respectively. This may invariably presuppose that some angulations are more prone than the others However; Polat et al. suggested that this may just be because such positions have a higher frequency [28].
Similar to the works of Jamileh and Pedlar [29] and Khawaja [30] pericoronitis was the most common indication for removal of impacted mandibular third molars. We also observed that this pathological condition was more frequent in mesioangular impaction as also been reported by Güngörmüs [31] andKay [32] Conversely Leone et al. had reported vertical and slightly distoangular teeth to be the cause of pericoronitis [33]. However Prajapati et al. [34] in their study, recorded caries (especially of the adjacent tooth) and its sequelae as the major reason (63.2%) for the mandibular third molar extraction, followed by recurrent pericoronitis (26.3%) and periodontitis (9.2%). Diet may be a major reason for this difference.
Socioeconomic status (SES) is commonly measured as a combination of income, education, and occupation. However, in most cases, it is measured by the income and level of education [35]. Our study revealed a large number of patients in the middle and high socioeconomic class .Roslind et al. [36] in their study in Australia observed the young adult in a higher socioeconomic class to be four times more likely to have the impacted tooth removed in a hospital than a person from a lower socioeconomic group. Third molar impaction and other forms of malocclusion are common disorders in countries with a high standard of living [13,14].
We therefore inferred with this socioeconomic trend in our study that these patients who had extraction were those who could afford the cost of treatment since funding for dental services is predominantly out-of 154 pocket expense. Diet may also be another reason since people in these groups are those who may likely feed on diets that could predispose to impaction CONCLUSION This study revealed that majority of the third molar impaction was characterized by mesio-angular impaction and was more common in females than males. There was no significant difference between right and left impaction. Third molar impaction was found to be more common in subjects in the third decade of life and pericoronitis was the most common associated pathology. Majority of the studied subjects belong to the middle socioeconomic group. | 2020-03-16T00:42:27.267Z | 2020-03-25T00:00:00.000 | {
"year": 2020,
"sha1": "b3c471abc1263c660b10f24f34bd2c8596d35252",
"oa_license": null,
"oa_url": "https://doi.org/10.36348/sjodr.2020.v05i03.003",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "b3c471abc1263c660b10f24f34bd2c8596d35252",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
15354977 | pes2o/s2orc | v3-fos-license | A New Kind of Graded Lie Algebra and Parastatistical Supersymmetry
In this paper the usual $Z_2$ graded Lie algebra is generalized to a new form, which may be called $Z_{2,2}$ graded Lie algebra. It is shown that there exists close connections between the $Z_{2,2}$ graded Lie algebra and parastatistics, so the $Z_{2,2}$ can be used to study and analyse various symmetries and supersymmetries of the paraparticle systems.
Introduction
It is well known that the symmetry plays a fundamental role in theoretical physics research. When a new symmetry of investigating system is revealed, it will lead not only to better understanding of the system but sometimes give rise to establishing unexpected and important relations between different theories and lines of research. Parastatistics first was introduced as an exotic possibility extending the Bose and Fermi statistics [1], and for the long period of time the interest to it was rather academic. Recently it finds applications in the physics of the quantum Hall effect and it probably is relevant to high temperature superconductivity [2], so it draws more and more attentions from both theoretical and experimental physicists. Supersymmetry, established in the early 70's, unifies Bose and Fermi symmetry and leads to deeply developments of field and string theories which become the cornerstones of modern theoretical physics. A natural question is, can we unify parabosons and parafermions into some kind of supersymmetric theories? Though some recent researches indicated that the so-called parasupersymmetry (paraSUSY) maybe unifies these paraparticles [3], nevertheless, the two concepts, i.e., the parastatistics and the supersymmetry, seem to be still independent from the point of view of algebraic structure. In fact, the mathematical basis of usual supersymmetry is Z 2 graded Lie algebra [4], in which not only commutators but also anticommutators are involved. However, the characteristic of algebraic relations of parastatistics is trilinear commutation relations, or double commutation relations. Thus two paraparticles neither commute nor anticommute with each other, and in place of this, three paraparticles satisfy some complicated commutation relations, which certainly bring some intrinsic difficulties into the research of paraSUSY.
A new kind of graded Lie algebra (we call it Z 2,2 graded Lie algebra) is introduced in this paper to provide a suitable mathematical basis for the paraSUSY theories. This Z 2,2 graded Lie algebra is a natural extension or generalization of the usual Z 2 one. Especially, the trilinear commutation relations of parastatistics can automatically appear in the structure of Z 2,2 graded Lie algebra, therefore, supersymmetric theories based on the Z 2,2 graded Lie algebra may unify the concepts of parastatistics and supersymmetry. It can be shown that the algebraic structure of paraSUSY systems indeed is the Z 2,2 graded Lie algebra. So we can classify and analyse the possible symmetries and supersymmetries of paraparticle systems more systematically and more effectively from the point of view of Z 2,2 graded Lie algebra.
The paper is arranged as follows. In section 2 the mathematical definition of the Z 2,2 is introduced with some examples showing how to produce a Z 2,2 graded Lie algebra from a usual Lie algebra, especially how to derive the characteristic trilinear commutation relations of parastatistics. In section 3 we demonstrate that the algebraic structure of systems consisting of parabosons and parafermions is nothing but the Z 2,2 graded Lie algebra. We analyse the various symmetries and supersymmetries of paraparticle systems in section 4, and give some summary and discussion in the last section.
where mod2 means cutting 2 when i + m = 2. For instance, L 00 • L 00 → L 00 , L 10 • L 11 → L 01 , L 11 • L 11 → L 00 , · · ·. (iv) Supersymmetrization: For ∀u ∈ L ij , v ∈ L mn , we have here we assign to any u ∈ L ij a degree g(u) = (i, j) which satisfies where v ∈ L mn . Obviously, the g(u) looks like a two-dimensional vector and the above two expressions are exactly dot product and additive operations of the two-dimensional vectors.
(v) Generalized Jacobi identities: For ∀u ∈ L ij , v ∈ L kl , w ∈ L mn , (i, j, k, l, m, n = 0, 1), we have It is easily to know that there are totally 20 different possibilities for constructing the generalized Jacobi identities from 4 subspaces of L (C 1 4 + C 1 4 C 2 3 + C 3 4 = 20). Definition: A linear space satisfying the above conditions (i)-(v) is called the Z 2,2 graded Lie algebra.
For instance, one can define the product rule on L as for ∀u, v ∈ L, where the elements u, v can be treated as operators in some Hilbert space and the expression uv can be understood as a product of the two operators, it is straightforward to check this product rule satisfying the above conditions (ii), (iv) and (v). Furthermore, if one impose the closure and the grading on the product rule (7), the all elements in L will form a Z 2,2 graded Lie algebra according to commutation or anticommutation relations, and the generalized Jacobi identities will take the usual trilinear (or double brackets) form.
Thus we give the full definition of the Z 2,2 graded Lie algebra. It is easy to see that the Z 2,2 graded Lie algebra is a directly generalization or extension of the usual Z 2 graded Lie algebra in the following several aspects: their product rules are the same, and both of them have the three basic characters of graded Lie algebra, i.e., grading, supersymmetrization and generalized Jacobi identities. The only difference between them is, for Z 2 case, which is a direct sum of two subspaces, the grading is one-dimensional, and for Z 2,2 case, which is a direct sum of four subspaces, the grading is two-dimensional. Correspondingly, the degree of elements in Z 2 is only a number, and that in Z 2,2 is a two-dimensional vector.
It is worth mentioning that there are only four possibilities to construct the generalized Jacobi identities in terms of trilinear or double brackets mathematically, i.e., Here we want to point out that only the first three expressions of (8) appear in the generalized Jacobi identities of the usual Z 2 graded Lie algebra [4], however, all the four expressions of (8) will appear in the generalized Jacobi identities of Z 2,2 . This fact indicates that the Z 2,2 is more complete in the algebraic structure, and has higher symmetries than Z 2 . Their further connection will become clearer later in the symmetry analysis. Now we take the Lie algebra su(1, 1) as an example to show how to produce the Z 2,2 extension of su(1, 1). We take su(1, 1) as the L 00 subspace which is three-dimensional, however, the dimensions of subspaces L 01 , L 10 and L 11 are restricted by the generalized Jacobi identities. Simple calculations show that in a nontrivial maximum-dimensional extension, the above three subspaces can only contain 2, 2 and 1 elements respectively. Using the notations of elememts in different subspaces showing in the Table 1, we have where i = 1, 2, 3, m, n = 1, 2 and α, β = 1, 2. The first line in (9) is the Lie algebraic relations in L 00 subspace, and the others on (9) are the algebraic relations of the Z 2,2 extension. All the structure constants such as t i , c i , h i , d i are determined by the generalized Jacobi identities as where σ i and I are the Pauli matrices and (2 × 2) unit matrix respectively. It needs to be explained that the three undetermined constants λ i may be absorbed into redifinitions of the elements in L 01 , L 10 and L 11 subspaces. If we consider a unitary representation of the Z 2,2 extension of the su (1, 1), and take the constants as λ 1 = −i, λ 2 = 0 and λ 3 = 2, then under mappings −2iτ 3 → M (a), 2(−τ 1 + iτ 2 ) → B † , 2(τ 1 + iτ 2 ) → B, a 1 → a † and a 2 → a, we can derive from (9) the following relations Thus we see that the parastatistical algebraic relations can be automatically derived from the Z 2,2 algebraic relations.
Algebraic structure of parastatistics
Let us consider a paraparticle system consisting of M-mode parabosons (whose creation and annihilation operators are denoted by a † k and a k respectively) and N-mode parafermions (denoted by f † α and f α ), whose algebraic structure can be expressed in terms of the following 12 independent relations (the Latin indices take values from 1 to M, and the Greek ones from 1 to N) Other parastatistical relations of this system can be derived from the above 12 basic relations by taking Hermitian conjugate or by using the generalized Jacobi identities (8). Now we define the following 6 new operators Furthermore, by virtue of (13), (14) and (8) with their Hermitian conjugate ones. After observing these relations carfully, one can see that the 10 new operators form the usual Z 2 graded Lie algebra, whose Bose subspace includes the operators M kl (a), B kl (a), B † kl (a), M αβ (f ), B αβ (f ), B † αβ (f ), and whose Fermi subspace includes the operators F kα , F † kα , Q kα , Q † kα . Moreover, if considering the whole algebraic relations (13-19) together, one can find that actually the following 14 operators form a Z 2,2 graded Lie algebraic system according to the product rule (7) (we may call it para-Lie superalgebraic system), the oprators of whose four subspaces are showing in the Table 2. Obviously, from the point of view of the algebraic structure, this para-Lie superalgebraic system is exactly equivalent to the parasystem consisting of the four kinds of operators a k , a † k , f α , f † α according to the trilinear or double commutaion relations (12). Thus we arrive at a conclusion: The parastatistical algebraic relations (12) are equivalent to the para-Lie superalgebraic relations (13-19) on the algebraic structure, and the latter is the algebraic relations of z 2,2 graded Lie algebra mathematically. This conclusion can be written as following theorem Theorem: The algebraic structure of parastatistics is a Z 2,2 graded Lie algebra.
4 Symmetries and supersymmetries of a para-Lie superalgebraic system ¿From the point of view of the para-Lie superalgebra or the Z 2,2 graded Lie algebra, it is easily to analyse the symmetries and supersymmetries of a paraparticle system as follows: (1) From (15) and (16) it is clear that the two subset (M kl (a), B kl (a), B † kl (a)) and (M αβ (f ), B αβ (f, )B † αβ (f )) form the dynamical symmetry algebras of a pure parabose and a pure parafermi subsystems (sp(2M,R) and so(2N,R)) respectively. So the whole subspace L bose (see the Table 2) forms a Lie algebra sp(2M, R) ⊕ so(2N, R).
(2) From (15-19) it is clear that the whole subspace L Bose ⊕ L F ermi form a Z 2 greded Lie algebra, in which (M kl (a), M αβ (f ), F kα , F † kα ) and (M kl (a), M αβ (f ), Q kα , Q † kα ) form two dynamical supersymmetric algebras of the paraparticle system respectively. It should be pointed out that for the former there is no way to construct a dynamical model with a positive definite Hamiltonian, so it is a unaccepted supersymmetry in physics, however, for the latter it is indeed possible to realize dynamical sypersymmetric models with definite physical meanings.
Furthermore, besides the parabosons and parafermions (with p > 1 where p is the paraststistcs order), one can include ordinary bosons and fermions (p = 1) in the considering system, or more detailed, put creation and annhilation operators of the bosons (fermions) into the subspace L Bose (L F ermi ). Since according to the product rule of Z 2,2 , either the commutators or anticommutators between para-and non-para-particles will be zero, the whole algebraic structure of the enlarging system is not changed. In other words, for the system consisting of bosons, fermions, parabosons and parafermions, the algebraic structure of its statistical relations is still the Z 2,2 graded Lie algebra. Therefore, we can analyse the various potential supersymmetries of such a system unifiedly within the framework of Z 2,2 . Futher research shows that only the following three kinds of supersymmetries (including six different cases) are possible mathematically between the four kinds of particles: (i) The supersymmetries between boson and fermion or paraboson and parafermion, which are realized by some fermi-like supercharges Q F ; (ii) The supersymmetries between boson and parafermion or paraboson and fermion, which are realized by some parafermi-like supercharges Q P f ; (iii) The supersymmetries between boson and paraboson or fermion and parafermion, which are realized by some parabose-like supercharges Q P b .
After detailed analysis we find that it is not possible to write out a positive definite Hamiltonian in the case (iii), so we do not consider the case (iii) further. To our knowledge, so far only part of the above six possible supersymmetric cases has been studied in the literature. For instance, the sypersymmetry of boson and fermion in the case (i) is studied by ordinary supersymmetric quantum mechanics [5], and the sypersymmetry of boson and parafermion in the case (ii) is studied in the paraSUSY quantum mechanics [6]. Other supersymetric cases need to be studied further. Comparing with the ordinary statistics in which there is only one kind of supersymmetry, i.e., boson-fermion sypersymmetry, the parastatistics allows existance of more supersymmetries. This also indicates that the paraSUSY can be analysed and studied more conveniently within the framework of Z 2,2 graded Lie algebra.
Conclusion
The concept of Z 2,2 graded Lie algebra is introduced in this paper, which has intrinsic connection with the parastatistics and the paraSUSY. The Z 2,2 graded Lie algebra can unify not only paraboson and parafermion, but also boson, fermion, paraboson and parafermion within one algebraic structure. It is well-known that when the order p of parastatistics goes to 1, the parastatistics will reduce to the ordinary statistics, as well as the parabose and the parafermi subspaces will reduce to the ordinary bose and fermi subspaces, so the Z 2,2 graded Lie algebra will reduce to the ordinary Z 2 one. Therefore for the ordinary statistics only the Z 2 graded Lie algebra is needed. However, when p > 1, except original bose and fermi subspaces, one has to introduce two extra parabose and parafermi subspaces into the algebraic structure. In this sense we say that the Z 2,2 graded Lie algebra is more complete in the structure and with higher symmetry than the Z 2 one. Considering the discussion in section 2, we can also say that the Z 2 graded Lie algebra (or orsinary statistics) is an one-dimensional reduction of the Z 2,2 one (or parastatistics), and the Z 2,2 (or parastatistics) is a two-dimensional generalization of the Z 2 (or ordinary statistics). A high-dimensional system has higher symmetry and is more complete constructionally than a low-dimensional system, this is a common fact. Also in this sense, we may call the Z 2,2 graded Lie algebra ( para-Lie superalgebra) as "two-dimensional" Z 2 graded Lie algebra (ordinary Lie superalgebra). Of course, searching Fock space representations of the Z 2,2 graded Lie algebra and constructing concrete paraSUSY dynamical models are very important and urgent problems. We will present the Fock representation for a system with one-mode parabose and one-mode parafermi degrees of freedom and discuss relevant paraSUSY dynamical model in a separate paper. | 2014-10-01T00:00:00.000Z | 2002-12-02T00:00:00.000 | {
"year": 2002,
"sha1": "de7852e62687175f0a54542300af0e533bef9dc7",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/math-ph/0212004",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "69e5fe7ea3e2e0b930563ba5913aaa3eaf9fdedf",
"s2fieldsofstudy": [
"Mathematics",
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
244637938 | pes2o/s2orc | v3-fos-license | Nasal therapy—The missing link in optimising strategies to improve prevention and treatment of COVID-19
Norman A. RatcliffeID *, Helena C. Castro, Izabel C. PaixãoID, Victor G. O. EvangelhoID , Patricia AzambujaID , Cicero B. Mello 1 Programa de Pós-Graduação em Ciências e Biotecnologia, IB, Universidade Federal Fluminense, Niterói, Brazil, 2 Department of Biosciences, Swansea University, Swansea, United Kingdom, 3 Laboratório de Virologia Molecular e Biotecnologia Marinha, IB, Universidade Federal Fluminense, Niterói, Brazil, 4 Laboratório de Bioquimica e Biotecnologia, IB, Universidade Federal Fluminense, Niterói, Brazil
COVID-19 entry portal
The main entry of SARS-CoV-2 occurs in the ciliated epithelium lining the nose [9][10][11]. The importance of the nasal epithelium in host invasion, involving the specific attachment of influenza and other viruses to the ciliated cells, was reported over 50 years ago [12]. These ciliated cells have the highest expression levels, in the airways of the body, of the SARS-CoV-2 entry receptors, angiotensin converting enzyme 2 (AAU : PleasenotethatACE2hasbeendefinedasangiotensinc CE2), and the viral entry-associated protease, transmembrane serine protease 2 (TAU : PleasenotethatTMPRSS2hasbeendefinedastransmembraneseri MPRSS2) [9-11]. SARS-CoV-2 binds to these using the receptor-binding domain (RBD) of the virus spike protein [13]. Following attachment and entry into the nasal epithelium, the virus multiplies, spreading around the body [11]. To emphasise, the ciliated cells of the nasal mucosa are the main host entry targets for the virus, so that denying access of SARS-CoV-2 to the entry receptors by intranasal drug prophylaxis needs prioritising.
New opportunities-Nasal therapy
Most SARS-CoV-2 vaccines are injected and mainly induce serum immunoglobulin G1 (IgG1), which enters and protects the lungs, leaving the nasal epithelia and upper respiratory tract largely unprotected. Any serum immunoglobulin A1 (IgA1) produced by vaccination is not effectively transported to the secretions of the upper respiratory tract including those of the nasal mucosa [14]. The dynamics of the mucosal immune response to COVID-19 is largely neglected, although the IgA secreted is 7 times more potent than IgG at neutralising SARS-CoV-2 [13][14][15]. Only natural infections induce both IgG1to protect the lungs as well as IgA1 to protect the upper respiratory tract, including the nasal passages [16]. Thus, injected vaccines fail to fully address the main portal of virus entry into the body through the nose, and, yet, few, if any, drugs have been developed to kill the virus in this early stage.
The nose is therefore likely to remain a source of infective virus transmission even after parenteral vaccination, which fails to completely eliminate the virus in the nose [1,17]. A single intranasal vaccination in rhesus macaques prevented SARS-CoV-2 infection in both the upper and lower respiratory tracts [18]. Parenteral vaccination and nasal therapy combined could realise the ultimate goal of completely eliminating these viral pathogens and sterilising the nose.
Intranasal drug candidates
Drugs for nasal pharmacological prophylaxis against COVID-19 are under development and include (1) those blocking virus attachment to the host entry receptors without involving host immunity; and (2) intranasal vaccines or immune stimulants eliciting antiviral antibodies and memory cells at the mucosal surface.
• Category 1: Include povidone-iodine [19], nitric oxide [20], ethyl lauroyl arginate hydrochloride [21], astodrimer sodium (SPL7013) [22,23], iota-carrageenan [24][25][26], and many others. These utilise nasal sprays and are at different stages of development globally. One very significant study for prevention of the early phase of SARS-CoV-2 entry into the body utilises poly(lactic-co-glycolic acid) nanoparticles to deliver and confine drugs specifically to treat the nasal sinuses with slow release over one week [27]. Stringent published clinical trials of these drugs are needed to satisfy the regulatory bodies as these may become available for sale to the public. Once approved, however, they could have enormous impacts on COVID-19 prophylaxis and therapy, particularly in deprived countries, as they are cheap and convenient and could also deal with breakthrough virus to sterilise the nose. They might be more acceptable too to those refusing injected vaccines.
• Category 2: Intranasal vaccines are also being developed, inducing IgA since dimeric forms of these antibodies are particularly potent and found at the mucosal surfaces where SARS-CoV-2 targets the cells [14].
Previous studies to develop nasal therapy for respiratory viruses have met with variable success. For example, a live attenuated flu nasal spray vaccine, called Flu Mist, has been approved by the US Food and Drug Administration (FDA), although the results of clinical trials have been discordant [28]. Developing nasal sprays with some respiratory viruses can be problematic, epitomised by the common cold and the work of David Tyrell [29] who showed that more than 100 different viruses may be involved. SARS-CoV-2, however, is more promising since few variants dominate the pandemic and parenteral vaccines have already been produced. Preclinical and clinical trials with a variety of drugs for nasal therapy against COVID-19 are also underway. For example, the nasal delivery of IgG monoclonal antibodies against SARS-CoV-2 engineered into immunoglobulin M (IgM) antibodies protect against virus variants in rats [30], while intranasal vaccination with the AstraZeneca vaccine, AZD1222, reduces virus concentrations in nasal swabs in 2 different SARS-CoV-2 animal models [31]. Furthermore, transgenic mice receiving one intranasal dose of an adenovirus-vectored vaccine, ChAd-SARS-CoV-2-S, also conferred superior immunity to SARS-CoV-2 than 2 intramuscular injections and evidenced sterilisation immunity in the upper respiratory tract [32]. Additional progress has been made in India with the approval of a human Phase II clinical trial of a COVID-19 nasal vaccine [33]. There will inevitably be delays and setbacks due to our lack of understanding of the dynamics of intranasal vaccination for COVID-19 so that additional research is urgently required [14,34,35]. Meanwhile, some Category 1 drugs may be approved more rapidly and available to prevent viral shedding following full vaccination against Delta and other variants [23][24][25].
In conclusion, nasal therapy has great potential to prevent and treat a variety of respiratory viruses. As patients present at different stages of COVID-19 or with other viral infections, we will need a selection of therapeutic strategies from vaccines to broad-spectrum antiviral drugs, delivered in different ways from injection, sprays/inhalations, and tablets alone or in combinations, to counter these threats. | 2021-11-26T05:18:39.386Z | 2021-11-01T00:00:00.000 | {
"year": 2021,
"sha1": "3e4e3504ebb33381700d5a46fb457abc6ec42581",
"oa_license": "CCBY",
"oa_url": "https://journals.plos.org/plospathogens/article/file?id=10.1371/journal.ppat.1010079&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "3e4e3504ebb33381700d5a46fb457abc6ec42581",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
237803814 | pes2o/s2orc | v3-fos-license | Bridgman Growth and Physical Properties Anisotropy of CeF 3 Single Crystals
: Bulk c -oriented CeF 3 single crystals (sp. gr. P 3 c 1) were grown successfully by the vertical Bridgman technique in a fluorinating atmosphere. A description of the crystal growth procedure and the solution of the difficulties during the growth process are presented in detail. The anisotropy of the mechanical, thermal and electrophysical properties were studied for the first time. The maximum values of the thermal conductivity coefficient ( α = 2.51 ± 0.12 W · m − 1 · K − 1 ) and the ionic conductivity ( σ dc = 2.7 × 10 − 6 S/cm) at room temperature are observed in the [0001] direction for the CeF 3 crystals. The Vickers ( H V ) and Berkovich ( H B ) microhardnesses for the ( 0001 ) , ( 1010 ) and ( 1120 ) crystallographic planes were investigated. The H B values were higher than the H V ones and decreased from 3.8 to 2.9 GPa with an increase in the load in the range of 0.5–0.98 N for the hardest ( 0001 ) plane. The (cid:8) 1120 (cid:9) , (cid:8) 1010 (cid:9) and { 0001 } cleavage planes were observed during the indentation process of the CeF 3 crystals. The variability of Young’s, the shear modules and Poisson’s ratio were analyzed. A significant correlation between the shapes of the Vickers indentation patterns with Young’s modulus anisotropy was found. The relationship between the anisotropy of the studied properties and the features of the CeF 3 trigonal crystal structure is discussed.
Introduction
Among bulk single crystals of the rare-earth trifluorides [1], cerium fluoride attracts special attention due to a wide range of promising applications. The undoped and rareearth doped CeF 3 single crystals (and solid solutions based on it) are a multifunctional material for various fields of science and technology.
In recent years, a huge potential for the utilization of CeF 3 crystals as one of the promising magneto-optical materials for the development of high-power Faraday optical isolators for a broad variety of laser wavelengths was demonstrated [23][24][25][26][27].
The temperature gradient in the growth zone was 95-100 K/cm, and the crucible pulling rate was~3 mm/h. After growth, the crystals were cooled down to room temperature (RT) at a rate of 100 K/h. The c-axes oriented ([0001] direction) seeds were utilized for CeF 3 crystal growth. In the case of the spontaneous seeding of these crystals, it was noted that the optical c-axis was located at an acute angle to the growth axis of the crystalline cylindrical boules. The (0001) plane had a characteristic metallic luster. The losses of the substance due to evaporation during the crystallization process were 2 wt. %. The c-oriented CeF 3 single crystals of optical quality with a diameter of 30 mm and a length of up to 60 mm were successfully grown under the above process parameters. An analysis of the grown crystals for their oxygen contents was carried out with vacuum induction fusion in graphite capsules (The ONH836 Analyzer, LECO Corp., St. Joseph, MI, USA).
The single-crystal orientation was determined by the back-reflection Laue method. Crystal samples with a thickness of about 2 mm were cut and polished along the main crystallographic (0001), 10 10 and (1120) planes (orientation accuracy up to 1 • ). The CeF3 crystal structure was described within the hexagonal coordinate system in this article ( Figure 1).
X-ray Diffraction (XRD) Analysis
XRD patterns of the CeF 3 crystal samples were carried out on an X-ray powder diffractometer MiniFlex 600 (Rigaku, Japan) with CuKα radiation. The diffraction peaks were recorded within the angle range 2θ from 10 • to 140 • . Phases were identified using the ICDD PDF-2 (2014). The unit cell parameters were calculated by the Le Bail full-profile fitting (Jana2006 software).
Microhardness of Crystal
The microhardness was measured with a PMT-3 hardness tester (Russia) at RT under applied loads P in the range of 50-100 gf (1 gf = 9.807 × 10 −3 N) using both square-based Vickers and triangular Berkovich pyramid diamond indenters.
The Vickers microhardness value H V was calculated by the formula: where P [H] is the applied indenter load, and d [µm] is the diagonal length of the indentation imprint.
The Berkovich microhardness value H B was calculated using the following equation: where L (µm) is the indentation side length. The time for the initial load application is 10-15 s, and the test force is maintained for 5 s. The measurement errors for the microhardness did not exceed 10% and 5% for the H V and H B values, respectively. The indentation patterns for the different planes were studied by optical microscopy methods.
The Thermal Conductivity
The thermal conductivity k(T) of undoped CeF 3 crystals was measured by an absolute stationary technique of longitudinal thermal flux in the temperature range 50-300 K. A description of the equipment and measurement techniques are given in reference [42]. The samples were parallelepipeds with dimensions of 8 × 6 × 23 mm 3 and 6 × 6 × 22 mm 3 for samples with cand a-orientations (relative to the long axis), respectively. Thermal conductivity coefficient values were calculated by the Fourier equation within a ± 5% error interval.
The Electric Conductivity
The CeF 3 single crystals conductivity was measured by means of impedance spectroscopy (in dry Ar atmosphere) in the temperature range of 300-600 K and the frequency range of 1-1.4 × 10 7 Hz (a Novoterm-1200 installation was employed with an impedance analyser Alpha-AN, Novocontrol, Germany). The impedance Z*(ω) frequency dependencies were measured with step heating under conditions of temperature stabilization. Pt paint were used as current-conducting electrodes. Static total bulk conductivity σ dc was calculated from the impedance hodographs using the equivalent electrical circuit method by ZView software (Scribner Associates, Inc.). The relative error in σ dc did not exceed 0.5%.
Crystal Growth and Characterization
The as-grown c-oriented CeF 3 crystals were colorless and transparent, and no lightscattering inclusions were observed in the bulk of the crystals (Figure 2a). A milky foreign substance was detected at the top of the crystals. The reason for its appearance was associated with the low isomorphic capacity of the structure of these crystals in relation to the residual amount of oxygen-containing impurities, which were pushed into the melt during the growth process and accumulated in the upper parts of the boules. Note that a similar effect may be due to the presence of monovalent cation (such as Li + and Na + ) traces. The oxygen contents in the bottom and center parts of the grown CeF 3 crystals measured by the vacuum melting technique were at the level of 30-35 ppm. The concentration of oxygen-containing impurities increased strongly up to 200 ppm in the top milky crystal parts. Thus, the effective segregation of oxygen impurities was observed in the process of direction crystallization of the CeF 3 melt.
The assignment of crystals to the tysonite-type structure (P3c1 space group) was confirmed by XRD analysis (Figure 2b). The CeF 3 crystals were single-phase with lattice parameters coinciding with the crystallochemical data [43]. The crystal density ρ = 6.080(6) g/cm 3 (measured by hydrostatic weighing in distilled water) was insignificantly lower than the theoretical density value. The ordinary refractive index n o = 1.6179 (2), measured by the refractometric method for the wavelength λ = 0.589 µm at RT, was close to the data published in reference [4]. Absorption spectra, especially in the short wavelength range, reflect the optical quality of crystals in general. The absorption spectrum of the as-grown CeF 3 single crystals is shown in Figure 3. Excellent optical characteristics, close to the theoretical value [19], indicate the "oxygen-free" nature of the grown CeF 3 crystals. The optical transparency limit, determined by the onset of interconfigurational 4f -5d transitions in the Ce 3+ ion, was about 0.285 µm for samples with a thickness of about 10 mm. No additional absorption bands associated with oxygen-containing or other impurities were detected. These crystals exhibited a characteristic transparency window from the UV spectral edge to 3.0 µm. The long-wavelength edge of this region was assigned to the electric dipole intraconfigurational 4f ( 2 F 5/2 -2 F 7/2 ) transition in the Ce 3+ ion [5,20]. The IR cut-off edge, determined by the phonon lattice vibrations, was located at about 12 µm [2,4,5].
Elastic Properties of the CeF 3 Crystals
Below, the elastic characteristics of the CeF 3 crystal will be analyzed, which will allow to relate the degree of anisotropy of the elastic characteristics to the shape of the indentation. The engineering elastic coefficients (Young's modulus E, Poisson's ratio ν and shear modulus G) depend on the orientation of the deformed crystal. In the case of the hexagonal crystal family, the expressions for Young's modulus E, Poisson's ratio ν and the shear modulus G have the forms [44]: − ν s 13 E = 1 + Π 2 sin 2 ψ + Π 02 cos 2 θ cos 2 ψ sin 2 θ, 1 s 44 G = 1 + Π 3 sin 2 ψ + 4Π 03 cos 2 θ cos 2 ψ sin 2 θ, where s ij are matrix compliance coefficients. Dimensionless parameters Π 1 , Π 2 , Π 3 , Π 01 , Π 02 , Π 03 and dimensional coefficient δ are the characteristics of the degree of crystal anisotropy.
The elastic properties of the selected rare-earth trifluorides crystals with tysonite-type structures (namely, LaF 3 , CeF 3 , PrF 3 and NdF 3 ) described in hexagonal symmetry were investigated in references [37,38]. It is noted that the elastic constants of these crystals increase with an increase in the atomic number of the rare-earth element. The CeF 3 crystals are described by five independent compliance coefficients: s 11 = 7.64, s 33 = 5.14, s 44 = 29.20, s 12 = −3.30 and s 13 = −1.22 TPa −1 [38] and an additional condition, s 66 = 2(s 11 − s 12 ).
Young's modulus of the CeF 3 crystals depends only on one Euler angle θ, which is measured from the main crystallographic third-order axis ([0001] direction) in the (1010) plane. These crystals can have three stationary values of the Young's modulus: which correspond to the stretching in the directions [0001] and 0110 , respectively, and [45].
The third value can be achieved under an additional condition: A numerical analysis of Young's modulus variability shows that For CeF 3 crystal, Young's modules E 1 = E [0001] and E 3 are the global maximum and minimum, respectively, and E 2 = E [0110] is the local maximum. The orientational dependences of Young's modulus in the planes (0001) and (1010) for the CeF 3 crystal are shown in Figure 4. In the plane (0001), Young's modulus takes a constant value of E 2 = 131 GPa, since this plane is an isotropic one (Figure 4a). A significant anisotropy of Young's modulus E max /E min = E 1 /E 3 with an anisotropy coefficient close to 2 was observed for the CeF 3 crystals (Figure 4b). Poisson's ratio ν(θ, ψ) depends on the two Euler angles. In this case, a sequence of two rotations of the crystallographic coordinate system was used, which can be written as a matrix product: The Euler angle θ is calculated from the main crystallographic third-order axis ([0001] direction) in the plane (1010). The same sequence of rotations will be used to analyze the variability of the shear modulus.
In reference [45], it was demonstrated that hexagonal crystals can generally have eight stationary values of Poisson's ratio. The surface of Poisson's ratio as a function of the two Euler angles (θ, ψ) is shown in Figure 5a. Only six stationary values were detected for the CeF 3 crystal. A numerical analysis of the variability of Poisson's ratio showed that it varied from 0.16 to 0.48. The three stationary values have a simple analytical form: In these formulas for the stationary values of Poisson's ratio, the last four digits in square brackets indicate the direction of tension (compression), and the first four digits indicate the direction of transverse deformation. As can be seen from Figure 5a, the value ν 2 is the global minimum.
For the shear modulus, the number of stationary values in the case of the hexagonal crystals can be four [44]. Four stationary values can be identified for the CeF 3 . crystal. A numerical analysis of the variability of the shear modulus shows that it varies in the range from 34.2 to 65.7 GPa. The surface of the shear modulus as a function of the two Euler angles (θ, ψ) is shown in Figure 5b.
Two stationary values of the shear modulus have the form: In this case, the four digits in the second square bracket indicate the direction of the normal to the sliding plane, and the first four digits indicate the sliding direction. The other two stationary values take the following values: Comparing these values shows that G 3 is the global maximum and G 1 is the global minimum. The ratio of the extremes G max /G min is close to 2, which shows a high anisotropy for the CeF 3 single crystals.
Microhardness Testing of the CeF 3 Crystals
Hardness is a mechanical parameter that is strongly related to the crystal structure and composition of the solids and is defined as the resistance to deformations or damage under an applied load. Hardness testing provides significant information on the strength and deformation characteristics of the material. Microhardness investigation is one of the best methods for understanding the mechanical properties of materials, such as the plasticity, hardness anisotropy, fracture behavior, etc.
There has been a lack of data on the mechanical properties of CeF 3 (with the exception of some preliminary data [46]) and other isostructural tysonite rare-earth fluoride crystals. The averaged microhardness values of La 1-x Sr x F 3-x (0 < x < 0.15) crystals (without taking into account the orientation of the crystal samples) were reported in reference [41]. For undoped LaF 3 crystal, the value of H V = 2.43(1) GPa was obtained under the load of P = 35 gf. A natural analog of the investigated crystal is the fluocerite-(Ce) (tysonite) mineral, which is a solid solution of rare-earth trifluorides with a predominance of CeF 3 . This mineral is brittle; it is characterized by a Mohs hardness of 4.5-5 and has an average cleavage along the {0001} and 1120 planes [47]. As we noted, under an impact, the CeF 3 crystals exhibit a pronounced cleavage along the 1120 at RT. According to the data of reference [48], La x R 1−x F 3 (R = Ce, Pr, Nd) solid-solution crystals are also easily cleaved upon cooling in liquid nitrogen along the {0001} and 1120 planes. The positions of these planes in the unit cell of the CeF 3 crystal are shown in Figure 6. It is obvious that the characteristic cracking of tysonite crystals under thermal or mechanical stress is observed along the planes consisting only of fluorine ions and having a minimum number of different chemical bonds in the perpendicular directions. Table 1. (Figure 7a). Table 1). The second-order anisotropy coefficient
The Vickers Microhardness of the (0001) Crystallographic Plane
The indentation patterns of the (0001) basic planes for the CeF 3 crystals are presented in Figure 8. The recovered Vickers indentation is undistorted and has a shape of a regular square at any indenter diagonal orientation, which points out the absence of first-order hardness anisotropy, i.e., the absence of the dependence of microhardness on the indenter position relative to the crystallographic directions, and this fact is consistent with the Young's modulus isotropy in the (0001) plane (See Figure 4a).
The indentations on the (0001) plane are always precise. Characteristic cracks arise along the < 1010 > directions in which the corresponding 1210 primary cleavage planes (Figure 8a) are located [46]. Thus, fracture upon indentation occurs only along the cleavage planes, and no random cracks are observed for the (0001) base plane of the CeF 3 crystals. Sometimes, along with these cracks, cracks on the 1010 planes in the < 1210 > directions arise, which suggests the existence of a secondary cleavage on these planes in these crystals, when the fracture along the primary cleavage plane is hindered probably by sample defects (Figure 8b). To verify this assumption, a Berkovich indenter was used to indent the (0001) plane of the CeF 3 crystals, because its geometry corresponds to the symmetry of this plane.
The Berkovich Microhardness of the (0001) Plane of the CeF 3 Crystal
The Berkovich indentation patterns of the (0001) plane for the CeF 3 crystals at different indentor orientations under the load P = 70 gf are presented in Figure 9. The fracture occurs clearly along the 1120 cleavage planes (< 1100 > directions) of the CeF 3 crystal in the case when the indenter sharp edges coincide with the < 1100 > crystallographic directions. In the case when the indenter sharp corners coincide with the < 1010 > directions and the maximum load acts in these directions, some cracks are located in the cleavage (1010) planes, but cracks along the 1120 planes are also observed, despite the fact the fact that the indenter geometry in this case facilitates crystal destruction predominantly along the 1010 planes. Thus, even when the geometry indenter facilitates the formation of cracks along the secondary cleavage 1010 planes, the 1120 planes, which are the primary cleavage planes in the CeF 3 crystal, participate in the fracture process during indenter penetration.
In the range of loads P from 0.5 to 0.98 N, the Berkovich microhardness H B value changes from 3.8 to 2.9 GPa for the (0001) plane of the CeF 3 crystal (see Figure 9). The H B value is higher than H V one (see Table 1). This fact agrees with the data of reference [49] on the study of the surface relief during the Berkovich and Vickers indentation, where it was shown that, at the same load during the Berkovich indentation, the degree of deformation is greater than with the Vickers indentation. This phenomenon is determined by the spiky shape of the Berkovich indenter. The H B value decreases with the increasing load P. Such a variation in H B (P) dependance is described by the indentation size effect (ISE). The model of indentation with the cracks formation does not provide load-independent microhardness. The ISE onset is assumed to be related to the crack formation upon indentation, which obeys the fundamentals of destruction mechanics and elastic recovery of an indentation impression after removal of the indenter [50]. The relative value of the elastic recovery increases in the region of low loads, and the elastic recovery of the indentation, depending on the direction, agrees with the character of the anisotropy of Young's modulus [51,52]. The measurement of linear dimensions of the cracks C around the indenter imprint (see Figure 9) as a function of the indenter load P provides a quantitative characterization of the fracture toughness-coefficient K C . The resistance to fracture indicates the toughness of a material, and the fracture toughness K C determines how much fracture stress is applied under uniform loading. The value of K C is determined not only by the values of C but, also, by the indenter geometry and the material properties. In accordance with reference [53], the fracture toughness K C is given by a relation: where E is the Young's modulus (E = 131 GPa for the (0001) plane of the CeF 3 crystal), φ = 76 • 54 is the half-angle at the vertex of the Berkovich indenter and C is the crack length under the load (C = 36 µm under P = 70 gf)-thus, for the plane (0001) of the CeF 3 crystal K C = 0.25 MPa m 1/2 for cracks in the < 1100 > direction (i.e., along the primary cleavage 1120 planes). Direct measurements of the crack dimensions around the point of action of a concentrated load make it possible to estimate the value of the surface energy of fracture γ [54]: where ν = 0.16 is Poisson's ratio for (0001) plane, and thus, the surface energy of the fracture γ = 0.23 J/m 2 . A theoretical estimate of the value of the surface fracture energy: was proposed in reference [55] based on the values of the relaxation distance of the repulsive forces between the ions λ (λ is taken as the radius of the larger ion), the interplanar distance between the spreading planes y 0 and Young's modulus E. For the CeF 3 crystal: λ = 1.246 Å (estimated fluorine ionic radius [56]), y 0 = c = 7.283 Å and E = E [0110] = 131 GPa. Accord-ingly, γ = 0.28 J/m 2 . This value γ agrees with the direct calculations of the surface energy of the fracture for the cracks in the < 1100 > direction (i.e., along the primary cleavage 1120 planes) and indicates the reality of the value of the surface fracture energy obtained by the method of measuring the linear dimensions of the cracks during indentation.
Due to the pointed shape of the Berkovich indenter, the degree of deformation in the Berkovich indentation was greater than in the Vickers one. The coincidence of the geometric symmetry of the Berkovich indenter with the crystallographic symmetry of the investigated (0001) plane of the CeF 3 crystal makes it possible to increase the measurement accuracy and more fully determine the nature of the fracture and mechanical anisotropy of this crystal. As a result, it was possible to confirm the data on the presence of the primary 1120 and secondary 1010 cleavage planes in the CeF 3 crystals and quantify the fracture toughness coefficient K C and the surface fracture energy.
Ionic Conductivity Measurements
The effect of anisotropy of anionic conductivity in rare-earth fluorides with a tysonitetype structure (type LaF 3 , space group P3c1) and, in particular, in grown undoped CeF 3 single crystals is of great interest for the study of the mechanism of ionic transfer in fluoride superionic conductors and makes it possible to reveal the structural paths of ionic transport and determine the relationship of ionic conductivity and features of the crystal structure.
From the point of view of trigonal symmetry of the CeF 3 crystals, the temperature dependences of the conductivity, studied only along the aand c-axes, are shown in Figure 10. The preferable conduction path is determined, and the highest electrical conductivity is observed along the c-axis. The temperature dependences of conductivity σ dc are divided into three sections: 300-355 (low temperatures LT), 355-545 (middle temperatures) and 545-600 K (high temperatures HT). The LT and HT regions of the σ(T) dependences are described by the Arrhenius equation: where A is the preexponential factor, E is the activation energy of the ion conductivity process (further, we will denote E a and E c as the activation energy along the aand c-axes, respectively), k B is the Boltzmann constant and T is the temperature. The parameters calculated by this equation in the LT and HT regions are given in Table 2. The obtained values of the activation energies of conductivity E a and E c for the CeF 3 are in good agreement with the corresponding data for the closest crystal chemical isostructural analog LaF 3 (E a = 0.46 and 0.27 eV, E c = 0.44 and 0.27 eV and the LT and HT regions, respectively) [57].
In general, ionic conductivity in the tysonite-type rare-earth trifluorides (LaF 3 , CeF 3 , PrF 3 and NdF 3 ) is due to the translational hopping of Fions in the crystal lattice by the vacancy mechanism [58]. The high value of fluorine-ionic conductivity of the tysonite-type crystals in comparison with the other fluorides is associated with the high coordination of the cations with Fions (Figure 10b), which leads to a weakening of the chemical bonds "metal-fluorine" and increases the anions' mobility.
The crystal structure of CeF 3 is characterized by three crystallographic positions of the fluorine atoms: F1:F2:F3 = 12:4:2 ( Figure 10b) [34,35]. In the trigonal motive of CeF 3 along the c-axis, purely anionic layers formed by F1 atoms and cation-anionic layers containing F2 and F3 atoms with similar dynamic properties (F2 ≈ F3) alternate. With the temperature increasing, anionic transfer occurs first in the fluorine sublattice F1; then, there is an exchange of fluorine vacancies between subsystems F1 and (F2 + F3) [59]. In individual rare-earth tysonite fluorides, thermally stimulated defects are formed according to the Schottky mechanism (cationic and fluorine vacancies). The LT region corresponds to the region of the impurity association, and the HT region corresponds to the region of impurity conductivity [60,61]. In the middle-temperature region, a change in the mechanism of conductivity occurs, accompanied by a monotonic decrease in the activation energy of ion transport with the increasing temperature.
The anisotropy of σ dc is determined by the nonequivalence of the fluorine positions and the hopping frequencies of the F-ions in these positions. The transfer of the Fions along the c-axis occurs more rapidly, since it is carried out by fluorine vacancies with the participation of F1 positions, for which the hopping frequency is higher and shorter structural segments are involved in the conduction paths. The insignificant (max value along the c-axis) anisotropy of σ dc in the CeF 3 crystal indicates that there are no distinguished conduction channels in its structure, and it has quasi-three-dimensional (3D) electrical conductivity (crystallographic analysis fluorine motion in the tysonite structure was given in reference [62]). The ratios of the conductivity values measured along the cand a-axes are σ ||c /σ ⊥c = 3.4, 2.4 and 2.1 at 300, 500 and 600 K, respectively-that is, it decreases with the increasing temperature. Experimental data on the anisotropy of the conductivity of a number of tysonite crystals are given in Table 3. The anisotropic effect of ion transfer in undoped tysonite-type rare-earth trifluorides is weak and practically disappears upon heavy doping with alkaline earth impurities.
Anisotropy of the Thermal Conductivity of the CeF 3 Crystals
The results of the measurements of the temperature dependence of the thermal conductivity coefficient are shown in Figure 11a and, for some selected temperatures, are given in numerical terms in Table 4. Note that the values of the CeF 3 thermal conductivity obtained in reference [10] are dramatically lower. The reason for this phenomenon can be assumed to be qualitative differences in the impurity composition of the grown crystal samples-for example, in the oxygen content.
The CeF 3 crystals have a rather low thermal conductivity. It can be seen that, in comparison with many other optical materials, these crystals are a weak thermal conductor. This circumstance corresponds to the high values of the fluoride-ion conductivity in the CeF 3 crystal given in this work (see Section 3.3. above). Inverse correlations between the thermal conductivity and ionic conductivity were established for many fluoride crystals [66,67] and are associated with partial disordering of the anion sublattice and the inelastic interaction of phonons with Fions migrating over the crystal. This phonon-defect scattering is added to the phonon-phonon one associated with the thermal vibrations of ions in the crystal. As a result, the value of thermal conductivity k decreases, and its temperature dependence k(T) becomes weaker. Indeed, the revealed dependence k(T) for the CeF 3 crystal is weak, and the thermal conductivity value changes by only half an order of magnitude in the studied temperature range.
The anisotropy of the thermal conductivity of the CeF 3 crystals is significant. The temperature dependence of the anisotropy coefficient K, defined as the ratio of the thermal conductivity coefficients K = k ||c / k ⊥c along two main orthogonal directions, is shown in Figure 11b. This coefficient K changes insignificantly with the temperature, except for the region of the lowest temperatures, where the thermal conductivity is especially sensitive to various kinds of structure defects. Taking into account the strong temperature dependence of the average phonons mean free path, it can be assumed that the main reason for the revealed thermal conductivity anisotropy is the difference in the velocity of the elastic wave propagation in these two main directions of the CeF 3 crystal.
In terms of the absolute values of the thermal conductivity coefficient and its temperature behavior, CeF 3 is close to its isostructural analog, the LaF 3 crystals [65] (see Figure 11a). This compound is also characterized by high fluorine ionic conductivity [68]. Compared with the LaF 3 crystal, the thermal conductivity of CeF 3 is slightly lower up to T = 250 K. However, a monotonic decrease in thermal conductivity was observed for both crystals in the studied temperature range.
The thermal activation model of two-level systems was used earlier to describe the monotonic increase k(T) for superionic conductors, including the LaF 3 and (presumably) CeF 3 crystals [69]. According to our data, the antibate behavior of the k(T) dependences was observed in comparison with reference [69], and no special signs of the manifestation of the thermal activation two-level systems are detected. Counterarguments on the applicability of such models to the heat conduction phenomenon were given in reference [68]. The temperature dependence of the phonons mean free path l(T) along the c-axis of the CeF 3 crystals is shown in Figure 12a. The data for the LaF 3 crystals [65] are shown for comparison. The calculation was carried out from the well-known Debye expression: where C V is the heat capacity per unit volume of a crystal, and ν is the average propagation speed of the phonons (sound). The value ν = 2.66 km/s, obtained by taking into account the results of measurements of the longitudinal and transverse ultrasonic waves in various crystallographic directions, was utilized as the average speed of phonons propagation [69]. Calorimetric data for the calculation were taken from reference [70]. As in the case of tysonite LaF 3 crystals [65], the revealed temperature dependence l(T) was significantly weaker than the l(T) dependences observed for other fluorides with a fluorite-type structure [71]. Extrapolation of the dependence l(T) to the melting temperature region of CeF 3 crystals (T m = 1716 K) gave a value that did not exceed its unit cell parameters and was comparable to the interstitial distances in this crystal. The reason for the different behaviors of the dependences k(T) and l(T) for isostructural CeF 3 and LaF 3 single crystals in the temperature range T > 200 K was probably related to the different temperature behaviors of their specific heats C p (T) (see Figure 12b) [72][73][74][75]. Despite the fact that literature data are somewhat different, it was seen that the increase in C p (T) was significant, and this indicated an additional contribution to the lattice heat capacity. The heat capacity of CeF 3 crystal above room temperature increased much more steeply than that of LaF 3 , and this fact can determine different degrees of the temperature dependence of thermal conductivity for studied crystals in a low-temperature region.
Conclusions
Bulk CeF 3 single crystals (sp. gr. P3c1) were grown by the vertical Bridgman technique. The study of the mechanical, thermal and electrophysical properties demonstrated their pronounced anisotropy due to the crystal structural features. This must be taken into account when developing the design of the active or passive optical construction elements based on CeF 3 single crystal. The utilization of the c-oriented crystal samples to achieve the maximum values of hardness, ionic conductivity and thermal conductivity is preferred. | 2021-09-28T01:09:49.101Z | 2021-07-07T00:00:00.000 | {
"year": 2021,
"sha1": "43c3ec76c6a4147d2ac006a51f16faab2848a731",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2073-4352/11/7/793/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "6193c12a74fc07f22346e7e32359ee1207dfe222",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
237349354 | pes2o/s2orc | v3-fos-license | Wnt/β-catenin signaling in cancers and targeted therapies
Wnt/β-catenin signaling has been broadly implicated in human cancers and experimental cancer models of animals. Aberrant activation of Wnt/β-catenin signaling is tightly linked with the increment of prevalence, advancement of malignant progression, development of poor prognostics, and even ascendence of the cancer-associated mortality. Early experimental investigations have proposed the theoretical potential that efficient repression of this signaling might provide promising therapeutic choices in managing various types of cancers. Up to date, many therapies targeting Wnt/β-catenin signaling in cancers have been developed, which is assumed to endow clinicians with new opportunities of developing more satisfactory and precise remedies for cancer patients with aberrant Wnt/β-catenin signaling. However, current facts indicate that the clinical translations of Wnt/β-catenin signaling-dependent targeted therapies have faced un-neglectable crises and challenges. Therefore, in this study, we systematically reviewed the most updated knowledge of Wnt/β-catenin signaling in cancers and relatively targeted therapies to generate a clearer and more accurate awareness of both the developmental stage and underlying limitations of Wnt/β-catenin-targeted therapies in cancers. Insights of this study will help readers better understand the roles of Wnt/β-catenin signaling in cancers and provide insights to acknowledge the current opportunities and challenges of targeting this signaling in cancers.
INTRODUCTION
As an evolutionarily conserved signaling that governs numerously vital embryonic and somatic processes, such as cell fate determination, organogenesis, tissue homeostasis, and a variety of pathological conditions, Wnt/β-catenin signaling also plays crucial roles in cancers. 1 Aberrant Wnt/β-catenin signaling has been uncovered to be tightly woven with many aspects of cancers, including the onset, progression, malignant transformation, and so on. 2,3 Evidence-based medicine has further proved that the abnormal activation of this signaling showed nonnegligible effects on cancer-associated mortality. [4][5][6] Despite these broad acknowledgments of the significant impact of Wnt/ β-catenin signaling on cancers, advances in targeted therapies remain largely incipient. Considering the extremely heavy clinical burden of Wnt/β-catenin-associated cancers globally, it is urgent to comprehensively summarize the up-to-date knowledge of Wnt/ β-catenin signaling in cancers and to clarify the current status and challenges of developing Wnt/β-catenin-associated targeted therapies.
Collectively, it is necessary to systematically review the experimental and clinical knowledge of Wnt/β-catenin signaling in cancers and present the status quo of Wnt/β-catenin-targeted therapies in cancers. With the rapid development of modern pharmacology and evidence-based medicine, certain approaches of targeting Wnt/β-catenin signaling in cancers have already achieved the path of clinical trials. These promising advances will endow researchers and clinicians with more choices and fundaments to better manage aberrant Wnt/β-catenin-associated cancers. In this article, we have systematically reviewed the most updated knowledge of Wnt/β-catenin signaling in caners and targeted therapies in accordance. Therefore, we sought to provide readers with the latest progress of Wnt/β-catenin signaling in cancers and demonstrate both opportunities and challenges of Wnt/β-catenin signaling-dependent targeted therapies in cancers.
BRIEF INTRO OF WNT/Β-CATENIN
The nomination of Wnt is after wingless in drosophila and int1 in mammalians. 7,8 As the prerequisite of understanding the roles of Wnt/β-catenin in cancer, in this part, we will briefly introduce the signaling transduction. Numerous researches have already depicted an integral scene of the core components of β-catenindependent Wnt signaling. It consists of extracellular ligands, agonists; trans-membraned receptors/co-receptors; intracellular compounds including disheveled (Dsh in drosophila and Dvl in mammals), degradation complex comprising glycogen synthase kinase 3 β (Gsk3β), casein kinase1α (CK1α), Axin/conductin, and adenomatous polyposis coli (Apc), β-catenin, and transcription factors. 9 Extracellular Wnt ligands are inevitable to switch-on downstream signaling cascade. The extracellular secretion and functions of Wnt ligands depend on post-transcriptional modifications. 10 The post-transcriptional modifications of Wnt ligands mainly include glycosylation, palmitoylation, and acylation. [11][12][13] Especially, the acylation is necessary for extracellular transport and receptor/co-receptors recognition and bounding. 14,15 With respect to the receptor/co-receptors of Wnt/β-catenin signaling, including frizzled (Fzd) receptors and Lrp co-receptors. For Fzds family, it owns at least 10 members of G protein-coupled receptor (GPCR). 16 The highly conserved cysteine-rich domain (CRD) of Fzds manipulates the ligand recognition and binding. 15,17 In addition, as co-receptors to Wnt ligands, Lrps consist of Lrp5 and 6, whose extracellular domains interact with Fzds and then its intracellular domains trigger further signaling transduction. 18,19 In addition to receptor/coreceptors, there also exist certain extracellular regulators that can influence the ligand-receptor/co-receptors interaction. For instance, the R-spondins (Rspo), member 1/2/3/4, 20 which coordinates with leucine-rich repeat-containing GPCR (Lgr) 4/ 5/6 to enhance Wnt/β-catenin signaling. [21][22][23][24] Specifically, the Rspo-Lgr complex increases Lrp5/6 phosphorylation and inactivates Wnt repressors Rnf43 and Znrf3. 23,25,26 Rnf43 and Znrf3 are E3 ubiquitin ligases that mediate the degradation of Fzds. 26,27 Furthermore, certain intracellular regulators can also abolish the signaling cascade downstream of ligand-receptor interaction. Dsh, a cytoplasmic protein with three highly conserved sections, 28 directly interacts with the C-terminal of Fzds via its PDZ region. 29,30 Finally, after receiving the upstream activation signals, β-catenin functions as the ultimate effector. 31,32 Without canonical Wnt ligands, β-catenin binds to cadherin of cytoplasmic sides rather than being transported into the nucleus, further phosphorylated and eliminated by degradation complex. [32][33][34] Phosphorylated β-catenin is degraded by the ubiquitinproteasome system to keep the low level of free β-catenin in the cytoplasm. 32,33 Conversely, once being activated, intracellular β-catenin is rapidly enriched, and then trans-localized into the nucleus to regulate gene expressions. 32,33 Apart from directly manipulating gene transcription as a TF, β-catenin can also form a transcriptional complex with Lef/Tcf via its armadillo repeats regions. 32,35 Intriguingly, the degradation complex of β-catenin is concise and complicated. As soon as the degradation complex converts into the active form, it phosphorylates β-catenin for ubiquitination, which sends it to the proteasome. When the degradation complex is deactivated, β-catenin accumulates and influxes into the nucleus to initiate the transcription of downstream genes.
WNT/Β-CATENIN SIGNALING IN CANCERS
Basing on the subtle summary of Wnt/β-catenin signaling, in this part, we aimed to further demonstrate the roles of Wnt/β-catenin signaling in cancers. Specifically, we will follow the progressively organized structure going behind the signal conduction cascade of this pathway: extracellular, membrane-linked, and intracellular (cytoplasmatic and nuclear) compositions.
EXTRACELLULAR COMPOSITIONS Porcupine (Porc)
Porc is a membrane-bound O-acyltransferase (MBOAT), by whose palmitoylation function Wnt ligands can be subsequently secreted and recognized. 36 After being palmitoylated, Wnt ligands will bind to wntless and be transported into cell membrane from Golgi apparatus 37 (Fig. 1). Thereout repressing Porc is a candidate way against tumors with aberrant Wnt/β-catenin activation. Nowadays, inhibitors targeting Porc have been uncovered to be underlyingly beneficial for diverse types of cancers. 38 Ligands The Fzds are responsible for recognizing and receiving canonical Wnt ligands, which are defined as β-catenin-dependent ligands. In this review, we focus on the canonical Wnt ligands, including Wnt1, 2, 3, and 3a.
Wnt1. Previous studies have shown that knockout of wg, the homolog gene of mammalian Wnt1, caused the wingless phenotype in Drosophila. 7,8 Experimental and clinical analysis further revealed the frequent abnormal upregulations of Wnt1 in massive cancers. [39][40][41][42] Abnormal Wnt1 expressing patients comprised the majority of cancer patients of non-small cell lung cancer (NSCLC). 43 Nevertheless, a Wnt1-dependent gene, Wnt1-inducible signaling pathway protein 2 (WISP2), was reported to effectively undermine the immunologic evasion of cancer cells. 39,44 In addition to the direct involvements of Wnt1 in cancers, certain upstream modulators of Wnt1 can repress the viability of a few sorts of cancer cells, containing RU484, MicroRNA-140-5p, and SJ26. [45][46][47] Encouragingly, inhibiting Wnt1 relieved the growth and progression of breast cancer in a transgenic murine model. Conversely, overexpression of Wnt1 promoted the growth of cancers. 48,49 Therapeutically, Blocking Wnt1 has been found to strengthen the apoptosis of colorectal cancer (CRC) cells via the utilization of Wif1, Wnt1-specific siRNAs, and neutralizing antibodies. 40 Wnt2. Similarly, the overexpression of Wnt2 has also been detected in human fibroadenomas, breast cancer, pancreatic cancer, and CRC. [50][51][52][53] Despite the concerns that in CRC (CRC) Wnt2 perhaps accelerated its migration and invasion, 52,54 on the contrary, in other kinds of cancers such as gastric, pancreatic, and NSCLC, Wnt2 aggregates the cancer progression. [55][56][57] With respect to its therapeutic potentials, direct silence of Wnt2 alleviated the xenograft breast cancer growth and rescued the malignance and the chemo-drugs resistance in breast cancers. 58,59 Wnt3. Wnt3 is a homologous gene of Wnt3a, and the similarity is up to 84.2% total amino-acid identity in humans. 60 The level of Wnt3 mRNA was obviously upregulated in primary breast and rectal cancer. 60 Wnt3 plays a cacoethic role in multiple cancers, such as CRC, 61 breast cancer, 62,63 NSCLC, 64,65 and prostate cancer. 66 Specifically, the excitation of Wnt3 speeds up the tumorigenesis of CRC. 61 Furthermore, it has been implied to elevate the epithelium-mesenchyme transit (EMT) of breast cancers through Wnt/β-catenin. 62,63 In terms of therapeutic choices, the downregulation relieved the progression of CRC by reducing cancer cell proliferation and migration. 61 For NSCLC, Fig. 1 The extracellular components and signaling transduction of Wnt/β-catenin signaling. In this figure, we do not distinguish the autocrine or paracrine patterns of Wnt ligands knockdown of Wnt3 could increase drug sensitivity. 64,65 In prostate cancer, inhibiting Wnt3 signaling by the deficiency of trophinin-associated protein attenuated the cancer growth. 66 The demethylation of non-coding RNA miR-1247-5p provided an optional remedy for human hepatocellular carcinoma (HCC) via inhibiting Wnt3. 67 Wnt3a. As the strongest Wnt/β-catenin stimulator, Wnt3a has been proposed to participate in numerous cancers. For instance, in most solid tumors, Wnt3a promoted the tumorigenesis and progression of CRC, prostate, liver, and lung cancers. [68][69][70][71] Mechanistically, Wnt3a enhanced cancer cells proliferation, differentiation, migration, and self-renewal, [72][73][74] and conversely inhibits cell apoptosis depending on activating Wnt/β-catenin. 68 In leukemia, a study indicated that Wnt3a, activating Wnt/β-catenin, suppressed the proliferation of cancer cells. 75 Therapeutically, in prostate cancers, targeting Wnt3a through Traf6 either Tmem64 restrained tumor development. 76,77 For liver cancers, targeting Wnt3a by miRNA-195 and miRNA-214 presented the possibility of cancer management. 78,79
MEMBRANE-LINKED COMPOSITIONS
Receptors and co-receptors Fzds. Fzds, a subset of seven-transmembrane protein, are the principal receptors of canonical Wnt ligands. 80 Fzds, mainly including subfamilies Fzd1/2/7, Fzd3/6, Fzd4, Fzd5/8, and Fzd9/ 10, are ubiquitously expressed in most animal species but not in plants and single-cell eukaryotes. 80 The N-terminal CRD domain of Fzds spontaneously binds to Wnt ligands and Lrp5/6 coreceptors. 15,18 The C-terminus of Fzds is localized in the cytosol which recruits and binds to Dsh to intracellularly trigger subsequent signal cascades. 81 Fzds and Lrp5/6 are indispensable for Wnt/β-catenin activation, and non-eligibly, Fzds receptors, and Lrp5/6 co-receptors are both oncogenic under certain conditions. [82][83][84][85] Except for main canonical Wnt ligands compromising Wnt1, 3, and 3a, there still exists several non-canonical Wnt ligands including Wnt5a, 7a, and 7b known to interact with Fzds under specialized circumstances. [86][87][88] Fzd3 could be promotive for the development of Ewing sarcoma and breast cancer. [89][90][91] Also, Fzd4 and Fzd5 were reported to be significantly increased in prostate cancer. 92,93 Fzd6 highly emerged in CRC, breast cancer, and bladder cancer. [94][95][96] Most importantly, Fzd7 is vitally essential in numerous cancers like HCC, breast cancer, gastric cancer, CRC and so on. 97 Detailedly, Wnt3a-Fzd7 dependent Wnt/β-catenin signaling exaggerated the tumorigenesis and advancement of HCC. 98 Prostate cancer metastasis is related to ERG-induced Fzd8 up-regulation. 99 Besides, Fzd10 plays an important positive role in various cancers, at least in CRC development, but the level of Fzd10 is down in metastatic cancers. 100 The Fzd10 expression may be a prognostic marker in CRC, 100 and the same is true in synovial sarcoma. 101 Next, the following part is about to concisely summarize the potential therapeutic values of targeting Fzds in cancers. Fzd1 is related to drug resistance in cancers, subsequently, inhibiting Fzd1 attenuating the resistance. [102][103][104] Moreover, repressing Fzd1 via knocking down Fzd1, using rosiglitazone or miR-135b decreased the metastasis of breast cancer. [102][103][104][105] For targeting Fzd3, by which some non-coding RNAs, like miRNA-505, miRNA-493, and HOXD cluster antisense RNA 1 relieve cancer progression. [106][107][108] And Fzd7 deficiency is enough to combat Apc carcinogenesising. 84 Fzd3 has a certain promotion function on Ewing sarcoma, breast cancer. [89][90][91] Research shows that Fzd4 expression is extremely high in human prostate cancer cells and some non-coding RNA, like microRNA-505, microRNA-493, and HOXD cluster antisense RNA 1 (HOXD-AS1), relieves cancer progression by Fzd3. 92,[106][107][108] Fzd5 functions in prostate and gastric cancer, 93,94 and Fzd6 in colorectal, breast, and bladder cancer. [94][95][96] Fzd7 overexpression is detected in numerous cancers, like hepatocellular, breast, and CRC. 97 WNT3A-Fzd7 interaction activates Wnt/β-catenin signaling in human hepatocellular carcinoma cells and promotes tumorigenesis. 98 Fzd7 is vitally essential for the development of gastric cancer and it is the major receptor of Wnt ligands. 84 Inhibitor targeting Fzd7 is effective for treating gastric cancer, no matter with Apc mutations or not. 84 Fzd7 may be an extraordinarily effective therapy for gastric and CRCs. 84,109 ERG is overexpression in most prostate cancer and Wnt/β-catenin signaling is activated. 99 Prostate cancer metastasis is related to ERG inducing Fzd8 upregulation. 99 Besides, Fzd10 plays an important positive role in various cancers, at least in CRC development, but the level of Fzd10 is down in metastatic cancers. 100 The Fzd10 expression may be a prognostic marker in CRC, 100 and the same is true in synovial sarcoma. 101 Lrps. As co-receptors of Wnt ligands Lrp5/6 are tightly associated with the growth of Wnt-hypersensitive tumors. The single-domain antibody fragments of Lrp5/6 can effectively relieve the development of intestinal cancers. 110 Generally, the truncated LRP5 amplified Wnt/β-catenin signaling to severely promote the growth of parathyroid tumors. 111 Herein, stabilizing of Lrp5 by Hsp90ab1 enhanced gastric cancer progression. 112 Moreover, LRP6 also plays a positive role in various cancers, such as CRC, breast cancer, prostate cancer, and so on. 83,[113][114][115] Basing on these facts, LRP6 neutralizing antibodies were demonstrated to repress tumorigenesis. 116 It is of note that contrarily targeting Lrp5/6 may be not eligible for managing metastasis of breast cancer.
EXTRACELLULAR REPRESSORS
The abolishers of receptor/co-receptors Znrf3 and Rnf43, as transmembrane E3 ubiquitin ligases on cell surface, are negative regulators of Wnt/β-catenin signaling. 26,27 Researches indicated that Znrf3/Rnf43 attenuated Wnt signaling by selectively ubiquitinating receptors/co-receptors of Wnts (Fzds and Lrps) to advance proteins degradation. 26,27 Rspos, binding with Lgrs, can activate Wnt/β-catenin signaling by dually enhancing Lrp5/6 activity and removing Znrf3/Rnf43 from cell membrane 26,27 (Fig. 2). Rspos could as well activate Wnt/β-catenin signaling via interacting with HSPGs independent of Lgrs. 117 Interestingly, Rspo2 mutations were unraveled to associate with tetra-amelia syndrome, contributing to the destruction of Rspo2-Lgr binding. 118 Current evidence depicted the crucial roles of somatic mutations of Rspos in cancers. In detail, Rspo fusions are regarded as important for tumorigenesis in CRC, by activating Wnt/β-catenin signaling. 119 Rspo2/3 chromosome rearrangements can initiate and maintain tumor development absolutely through Wnt signals. 120 Apart from Rspos, the mutations of Znrf3/Rnf43 were also proposed to participate in CRC. The most common mutations of Znrf3/Rnf43 were missense and truncating mutations, respectively. 121 Znrf3 mutations were frequently detected in adrenocortical carcinoma, uterine corpus endometrial carcinoma, and skin cutaneous melanoma. Similarly, Rnf43 mutations were overall found in uterine corpus endometrial carcinoma, stomach adenocarcinoma, colorectal adenocarcinoma, ovarian cancer, and pancreatic ductal adenocarcinoma. 121,122 Znrf3 and Rnf43 were not essential in the intestine but dysregulation of Znrf3/Rnf43 was important for the growth of CRC. 26,27,123 Mutations of Rnf43, resulting in E3 ubiquitin ligases function loss, promoted CRC development and poor prognosis. 38 Inactivating mutations of Rnf43 were related to Wnt dependency. LGK974, targeting on Wnt/β-catenin signaling, alleviated Rnf43 mutation-associated pancreatic cancer cell proliferation, however, which did not affect the non-mutant cancers. 124 For CRC, Rspo/Rnf43 dysregulation plays a positive role in development and dominates over Znrf3. 125,126 There still needs further researches to explore the relation between Wnt/β-catenin and Znrf3/Rnf43, as well as them in cancers.
The antagonists of receptor/co-receptors. Endogenous repressors of Wnt/β-catenin signaling can be divided into two groups: the reversible and the irreversible ones. The former group including competitively binds to the receptor/co-receptor to block the ligand-receptors interaction, like Dkks, Sost, sfrp, and so on. The latter group functions through different mechanisms. Notum deacetylates Wnts and permanently invalidates the recognition. Tikis hydrolyze Wnts to polymerize ligands and finally abolish ligand function. Considering that there are bare therapeutic analyses of targeting these irreversible repressors in cancer until now, in this part we will not further discuss relative aspects. (for more details, please find in Table 1)
INTRACELLULAR COMPOSITIONS
Adenomatous polyposis coli (Apc) Apc gene locates on chromosome 5q21-q22, containing 8535 amino acids and encoding a cytoplasmic protein around 310 kDa. More than 1500 mutations of Apc were detected over multiple tumors, and majorly in CRC. 127 Meanwhile, the most mutant sites were located on exon 15. Over 700 somatic mutations of Apc resulted in various types of cancers by truncating Apc protein that was dependent on nonsense (34%) or frameshift mutation (62%). 127,128 Mutations of Apc can cause familial adenomatous polyposis (FAP), which is also the major hereditary carcinogenic factor in CRC progression. 129,130 A study showed that around 72% of Apc mutations were detected spreading throughout the Apc gene in early-onset CRC. 131 Apc gene has been identified as a vital suppressor in CRC genesis by inactivating Wnt/β-catenin signaling and stabilizing chromosomes. And mutations of Apc directly or indirectly lead to tumorigenesis as well. 132 Mechanistically, Apc regulates β-catenin phosphorylation and restrains the nuclear trans-localization of β-catenin 95,133 (Fig. 3). The three-hit hypothesis suggests that the abnormal mutation of Apc results in β-catenin abundance and abnormal activation of Wnt/β-catenin signaling in CRC. 134 In contrast, the abnormal expression or truncation of Apc weakened the ability of tumor inhibition. 135 Studies indicated that R2 and B motifs of Apc were the binding sites of Apc-Gsk3β/Axin, by which complex the diversity and structural stability of Axin were promoted. 60 The loss-of-function (LoF) of Apc may be an essential contributor to various cancers, especially CRC. 136 At present, experimental rodent models of CRC were initiated by knocking out of Apc. [137][138][139] Apc also suppresses other cancers, such as lung, breast, gastric, and prostate cancers, excepting for CRC. [140][141][142][143][144] The DNA methylation of Apc promoter is closely associated with various cancers, like lung cancer and prostatic cancer. 145,146 The methylation reduces the normal expression of Apc in cancers, resulting in abnormal activation of the Wnt/β-catenin signaling pathway. Overall, this evidence indicated the repressive role of Apc in cancers, thus making it a promising way to remedy cancers via enhancing Apc function or restore the normal function of Apc. Fig. 3 The intracellular components and signaling transduction of Wnt/β-catenin signaling Axin. Axin proteins, including Axin1 and Axin2, maintain β-catenin phosphorylation, thereby inhibiting signaling pathways by assembling the degradation complex with Gsk3β, Apc, and Ck1 ( Fig. 3). Intriguingly, even though the Apc has been blocked, therapies targeting Axin could still be effective. 147,148 Axin was identified as a suppressor in various cancers through majorly inhibiting Wnt/β-catenin signaling. Axins consist of three functional domains, the RGS domain of amino-terminal, the DIX domain of carboxyl-terminal, and the central region (AxinCR). 149 Separately, the RGS domain is responsible for interacting with Apc to phosphorylate β-catenin. 150 The DIX domain directly affects Dsh protein, 149,151 and AxinCR binds to β-catenin and Gsk3β to regulate Wnt/β-catenin signaling. 33,149,152 Some researches indicated that a single point mutation of Axin destroyed the stabilization of the RGS domain, resulting in Axin polymerization. The function of the Apc complex is affected by Axin self-polymerization. Inhibition of Axin and Apc complex together promoted tumor genesis and progression by enhancing Wnt/β-catenin signaling. 153,154 Controlling Axin polymerization may be a potential therapeutic choice to suppress cancer development. 154,155 Mutations of AxinCR were reported to accelerate the tumorigenesis in the following cancers: HCC, colorectal adenomas, ovarian carcinomas, lung carcinomas, and sporadic medulloblastomas. Therefore, erasing or eliminating the mutations of Axin could a promising method to combat diverse cancers.
Axin1/2 mutation in cancers. Axin1 mutations were detected widely in HCC. 156,157 This mutation could phenocopy various tumors in animal models. 158,159 Axin2 can partially compensate for the functional impairment caused by Axin1 mutation. 155,158,160 Axin2 dysfunction was associated with a variety of tumors, including endometrial cancer, 161 CRC, 162-165 lung cancer, 166,167 and breast cancer. [168][169][170] In these cancers, Axin2 performed suppressive roles by mainly constraining the level of β-catenin.
CK1.
First encountered for phosphorylating casein in vitro, the CK1s are serine/threonine kinases that serve as pathway signal transductor in most eukaryotic cells. They are ubiquitously expressed in human tissues all through the developmental and adult period, and the Wnt/β-catenin signaling pathway is just one of the pathways that CK1s impact. 189 The mechanisms of CK1 isoforms on regulating Wnt/β-catenin pathway are complicated. Depending on the substrates and subcellular location, CK1s play distinguished roles. To begin with, CK1α is the typical isoform for down-regulates Wnt/β-catenin signaling. As a composition of the degradation complex, CK1α initiates the phosphorylation of β-catenin. 190 In fact, CK1δ/ε are also involved in this process by phosphorylating Apc (which strengthens the affinity of Apc to β-catenin), in collaboration with Gsk3. 34,191 Furthermore, CK1ε phosphorylates and activities Dvl; however, this action also provokes negative feedback to inhibit Wnt/β-catenin pathway (vide infra). 192 In terms of activation, on receiving Wnt ligands, firstly CK1ε phosphorylates Dvl, then CK1γ phosphorylates the cytoplasmatic domain of Lrp6, finally recruiting CK1α and Axin to bind with Lrp6 in the signalosomes, 193 thus delivering the signal to downstream. Other mobilization effect includes the phosphorylation of CK1ε and CK1α to Tcf3 and Pygo, respectively. 194,195 Dvl. The Dvls family is the transportation hub of Wnt/β-catenin signaling pathway. To adapt to this responsibility, the three highly evolutionally conserved domains of Dvls are the binding sites of various proteins. DIX, a highly conserved domain, is indispensable for the recognition of Axin. 196 Moreover, DIX mediated the polymerization of Dvl monomer, which assembles as the anchoring site for Axin and Gsk3β. 197 This process may have significant effects on the ligand-receptor/co-receptor internalization. CK1ε phosphorylating mediated DIX-E3 ligase interaction ubiquitinates Dvl, abolishing the polymerization of Dvls, ultimately inhibiting Wnt/β-catenin signaling. 192 Next, the PDZ domain lying in the central part of Dvls, essential for the signal conduct from Fzd to downstream molecules, is also the most druggable region in Dvl. CK1 also targets this site. 190 Near the C-terminal region is the DEP domain, whose role is still obscure in the canonical Wnt signaling pathway. Despite these three classic domains, Dvls also have a basic region and a proline-rich region, which may have effects in protein-protein interactions (PPI). 197 Newish founds also indicated that Dvls shuttled between the membrane and the nucleus, and chances were that two distinct Dvls pools may exist. 198,199 However, these discoveries were vague about the druggability of Dvl. So in this review, we suspended this topic aside.
β-catenin and Lef/Tcf in cancers. β-catenin is a multifunctional protein, which is versatile in various cellular events and human diseases. The very core of Wnt/β-catenin is the balance between the phosphorylation/dephosphorylation and degradation of β-catenin. In brief, the destruction complex phosphorylates β-catenin and to degradeβ-catenin by ubiquitin-proteasome system. There are some other kinases are associated with protein phosphorylation, including PP1, PP2A, and PP2C in Wnt/β-catenin signaling pathway. PP1 and PP2C play a positive role in Wnt/ β-catenin signaling by dephosphorylating Axin. 200,201 PP2A, a principal Ser/Thr phosphatase, involves multiple proteins phosphorylation, and importantly, the kinase malfunction could result in several cancers. [202][203][204] PR55α, as a regulatory subunit of PP2A, can enhance the activity of Wnt/β-catenin signaling by regulating PP2A to suppress β-catenin phosphorylation. 205 Hsp105, a PP2A regulator, is overexpressed in various tumors to reduce β-catenin degradation. 206 Researches showed that PP2A dephosphorylated β-catenin to increase β-catenin accumulation. [205][206][207] Therefore, it may be an alternative way to target PP2A in aberrant Wnt/ β-catenin signaling cancers.
The stabilization of β-catenin is heavily associated with various cancers. In detail, the mutations of β-catenin are of great significance in tumorigenesis, progression, and prognosis of cancers. 208,209 In blow, we will detailedly discuss the mutations of β-catenin in cancers. The constitutive activation form of β-catenin, the exon 3 mutations, is believed to regulate the genesis of hereditary non-polyposis CRC. 208 In CRC, there are several mutations of β-catenin, leading to abnormally activated Wnt/β-catenin signaling. 209 The mutations of β-catenin are commonly detected in HCC, uterine corpus endometrial carcinoma, adrenocortical carcinoma, and so on. [210][211][212][213][214] The mutations of β-catenin majorly are missense that block Gsk3β consensus sites to activate Wnt/β-catenin signaling. 215,216 Besides, β-catenin mutations may be a significant carcinogenic factor in endometrial carcinoma. 215,216 Apart from the direct regulatory functions on tumorigenesis, the characteristics of β-catenin can also provide an approach to estimate the stage of low-graded, early-staged endometrial cancer recurrence. 215,216 In addition to the direct transcription regulation of β-catenin as a transcription factor (TF), it can also form diverse types of transcription complex. β-catenin-Lef/Tcf includes Tcf/Lef, p300/ CBP, and other proteins to assist β-catenin in binding to specific DNA sequence [217][218][219] (Fig. 3). Tcf binds to Groucho/TLE, CtBP, and histone deacetylase proteins when β-catenin signaling is not activated. [217][218][219] Tcf/Lef separates from Groucho/TLE and then composites β-catenin-Lef/Tcf complex, depending on X-linked inhibitor of apoptosis (XIAP) monoubiquitylating Groucho (Gro)/ TLE. 220 CBP promotes the transactivation of β-catenin/Tcf cooperating with thymine DNA glycosylase. 221 In addition, the transcription complex recruits p300/CBP, 222,223 Bcl9, 224 Pon-tin522, 225 Reptin52, 225 Brg-1, 226,227 Mllt/Af10-Dot1, 228 SOX10, 229 p68/p72, 230 βTrCp1/Fbw1a, 231 FOXM1 232-234 , and yes-associated protein 1 (YAP1). 235,236 However, in CRC, HIFα can competitively bound to β-catenin to abolish the Tcf4/β-catenin complex, then enhancing the hypoxia tolerance of cancer cells to increase the survival in an anoxic environment. 237 Interestingly, HIF2α binds to the β-catenin-Tcf complex at a different site from HIF1α to recruit p300 and then enhances Wnt/β-catenin signaling. 210 The synergistic actions of HIF2α and β-catenin increase the proliferation of renal cell carcinoma cells. 210 Accordingly, utilizing Tanshinone IIA can inhibit CRC angiogenesis by means of interrupting HIF-1α/β-catenin/Tcf3/ Lef1 signaling. 238,239 Besides, HOXB13, SOX4, RUNX3, CDK8, TCTP, and Daxx participate in regulating Wnt/β-catenin signaling to target Tcf in numerous cancers. Specifically, HOXB13 expression was downregulated in colorectal and prostate cancer. 240,241 And it inhibited the growth of cancers by reducing Tcf and c-myc protein levels. 240,241 RUNX3 inhibited Wnt/β-catenin signaling by comprising of a ternary complex with β-catenin-Tcf and attenuates growth and progression of multiple cancers, especially in gastric cancer. 242,243 SOX4 can increase β-catenin-Lef/Tcf transcriptional activity through upregulating Tcf4. 244,245 CDK8, an oncogene in CRC, partly functions by co-activating β-catenin-Tcf complex. 246 Thus, CDK8 may be a promising target for β-catenin-associated cancers. 246 The translationally controlled tumor protein (TCTP) can enhance the transcription complex activity by increasing the ability of β-catenin-Tcf binding and then inducing the growth of glioma tumor. 247 In summary, HOXB13 and RUNX3 are identified as suppressors of Wnt/β-catenin signaling by obstructing Tcf4 activity to inhibit tumorigenesis and cancer progression. 248,249 Tnik. Traf2-and Nck-interacting kinase (Tnik) is one member of germinal center kinases (GCKs) that can activate the c-Jun Nterminal kinase pathway. 139 Tnik is an essential component for Wnt/β-catenin signaling to maintain physiological cell homeostasis. 250,251 Tnik can directly interact with β-catenin and Tcf to modulate Wnt/β-catenin signaling 250,251 (Fig. 3). Therapeutically, Tnik is a crucial target for the treatment of CRC. In CRC cells, Tnik activates the transcriptional capability of Tcf4 through phosphorylation. 250,252 It has been shown that the growth of CRC cells was strictly dependent on Tnik stimulation. Indeed, after knocking down Tnik, the growth of xenograft CRC cells was brought to stall. 251 And patients with overexpression of Tnik were manifested with poor postsurgical outcomes. 253 Over 80% of CRCs have mutations in Apc, 254 which makes the only molecule downstream of Apc a therapeutic target.
THE CROSSTALK OF WNT/Β-CATENIN SIGNALING IN CANCERS
Wnt/β-catenin and Notch signaling Notch signaling widely interacts with Wnt/β-catenin signaling in cell homeostasis and embryo development. 255 Notch signaling also involves in the wingless development with Wnt/β-catenin signaling. 256,257 Notch can directly inhibit Armadillo/β-catenin to enhance destruction complex activity. 256,257 Besides, LNX2, a regulator of Notch, enhances the cell vitality in Wnt/β-cateninassociated CRC. 258 In turn, β-catenin-Lef/Tcf can reciprocally activate Notch signaling by inducing the expression of Jagged1 and Dll1, which are the ligands of Notch signaling. 255,259 In addition, β-catenin reduces Notch1 ubiquitination and increases the expression of Hes1 to interfere with Notch signaling transduction. 255,259 The synergistic action of Wnt/β-catenin and Notch signalings promotes tumorigenesis and cancer progression. Apparently, the potential remedies of inhibiting this synergistic function can be beneficial in cancer therapies. [260][261][262][263] Wnt/β-catenin and Sonic hedgehog (Shh) signaling The Sonic hedgehog signaling is another important interactor of β-catenin-dependent Wnt signaling. 9,264 Smo, a receptor of Shh ligands, 265 once being triggered via Shh, can activate Gli activities. [264][265][266] Glis, including Gli1-3, are the ultimate effectors of Shh signaling as transcriptional regulators. 266 Intriguingly, Gsk3β and CK1α can both phosphorylate and active Glis to reciprocally enhance Wnt/β-catenin signaling activity. 267,268 The crosstalk of Shh-Wnt/β-catenin involves the relapse, invasion, and metastasis of certain cancers, so that the repression of this crosstalk may alleviate the progression, migration, and invasion of cancers. 269 For instance, cyclopamine, as an inhibitor of Shh signaling, postpones the invasion of CRC by suppressing β-catenin-Tcf transcriptional activity. 270
TARGETED THERAPIES OF WNT/Β-CATENIN IN CANCERS
Categories of pharmacology Drugs targeting canonical Wnt signaling can be divided into two major subtypes, which are small-molecule inhibitor (SMI) and monoclonal antibody (mAb). SMIs usually refer to chemically synthesized compounds with a molecular weight of less than 1 kDa. In recent years, taking advantage of the in-depth learning of molecular and biological mechanisms in cancers, targeted drugs with fewer side effects plus better specificity than traditional chemo drugs have been developed. Theirs strengthens include but are not limited to (1) pronounced permeability into tissues and cells owing to the small-molecule weight; (2) larger volume of distribution; (3) diversified methods of drug delivery; (4) generally better oral tolerance and oral bioavailability after chemical modification. In contrast, mAbs cannot be taken orally, and are often administered by injection, which results in poor patient compliance. Even being directly injected, the distribution of mAbs in vivo is relatively limited. They cannot often easily reach therapeutic concentrations in some specialized tissues (such as the brain) as SMIs do. And as the inherent antigenicity of mAbs, they are likely to cause immune responses. mAbs are generally dominantly distributed in the kidneys, followed by the liver and spleen. In addition, they appear with non-linear pharmacokinetic characteristics, longer half-life, and smaller volume of distribution. Though mAbs can only act on extracellular targets, they have still achieved remarkable antitumor performances in comparison to SMIs, which is attributing to their different pharmacological mechanisms.
SMIs can bind to receptors with stronger affinity than the original ligands. On some occasions, they modify the structures of the receptors so that the ligands cannot be recognized. Moreover, certain SMIs may act as ATP analogs and bind to kinase sites in the cytoplasm, making the receptors unrecognizable.
The advantage of mAbs is not only to directly act on the extracellular and membrane-linked targets but also to activate the intrinsic internal immune system to indirectly exert antitumor effects. With respect to the direct functions, mAbs bind to receptors or ligands to block signal recognition or mediate internalization to reduce the density of receptors on the cell membrane surface. 271 When acting indirectly, mAbs induce complement-dependent cytotoxicity (CDC), antibody-dependent cellular cytotoxicity (ADCC), or complement-dependent cellmediated cytotoxicity (CDCC). 272,273 Therefore, in in vivo experiments with a complete immune system, mAbs may show better antitumor effects than in in vitro experiments. 274 In addition, mAbs can serve as the vehicle to achieve precision medicine, such as targeting radiogen to tumors. 275 To sum up, in practical scenarios, mAbs are more competent for the treatment of hematological tumors than solid tumors as a consequence of their antigenicity and large particles. 274 SMIs can target proteins even in the nucleus and penetrate the blood-brain barrier easily, thus preferentially suitable for treating solid tumors. Even if being humanized, mAbs are inevitable to arouse unintended immune reactions. Meanwhile, SMIs have poor antigenicity, but low specificity, 276 which also brings a higher risk of side effects. Briefly, SMIs and mAbs exhibit diverse characteristics and various indications of cancers.
In addition to SMIs and mAbs, an under-developed category of Wnt/β-catenin-targeted therapy goes to peptides and peptideassociated modified drugs. Peptides have been put into use as drugs long ago, but obvious limitations delayed their clinical advancements. Peptidomimetic, a kind of peptide-associated modified drug, is much smaller than the parental molecule, yet having low antigenicity, good oral bioavailability, good permeability, and better diffusing to the target. Different from SMIs, their half-life is generally very short and has very rapid excretion. Although this feature reduces the risk of side effects, it contrarily causes difficulties to reach satisfactory concentration. 277 PROMISING PRECLINICAL TARGETED THERAPIES SMIs targeting Porc Porc inhibitors are well-acknowledged to block Wnt signaling with a low risk of off-target. Moreover, it was revealed that the exhaustion of canonical Wnt ligands prevented cancer cell proliferation but induced differentiation. Therefore, inhibiting Porc may be a mild therapy that could prevent tumor growth rather than directly causing lethal effects. It has been previously described that Znrf3/Rnf43 are important negative regulators of Wnt signaling. The LoF mutation of Znrf3/Rnf43 was reported to be oncogenic, and Porc inhibitors had shown a remarkable potency in this aspect. 27,124,278 Unfortunately, the absence of Apc is likely to induce a Wnt ligand-independent signaling, so tumors with Apc LoF mutations may be resistant to Porc inhibitors. 279 And Picco et al. reported that in a CRC cell line (VACO6) with RSPO3 fusion, long-term exposure (3 months) of a Porc inhibitor (LGK974) with incremental doses can induce drug resistance. This kind of drug resistance is accomplished by LoF of Axin1 in the VACO6. 160 Contemporarily, there are a growing number of reports showing the side effects of Porc inhibitors, among which bone loss is the most frequent outcome. 280,281 Two studies indicated that LGK974, WNT-C5, or ETC159 can lead to bone loss by diminished osteogenesis and increased osteolysis. These bone loss phenotypes were reported to even appear under the condition of effective dosages of Porc inhibitors. [282][283][284] Hopefully, this side effect can be relieved by co-administrating diphosphonate (alendronate, zoledronic acid for instance). 280 And other adverse effects may be solved by lower doses through the incorporation of other antitumor drugs.
LGK974. Also known as WNT974, LGK974 is a potent SMI targeting Porc. A preclinical study showed the effective performance of LGK974 among variable neoplasm models with good oral tolerance. 282 Even this study systematically examined many organs such as the intestine, stomach, and skin, but unfortunately giving an ignorance to bone tissue. Besides, it was discovered that all human head and neck squamous cell carcinoma (HNSCC) cell lines with Notch nonsense mutation were more sensitive to LGK974. 282 Inspired by this, a team signed up for a clinical trial attempting to use LGK974 to treat HNSCC patients with Notch LoF mutation 285 (Table 2). But this project was abandoned as found retrieval. Another study showed the incredible use of LGK974 in Rnf43 nonsense mutation cell lines in pancreatic ductal adenocarcinoma (PDAC). 124 Both of the two clinical studies illustrated a delayed effect of proliferation inhibition by LGK974. Specifically, after a single administration, there still remains some β-catenin in the cytoplasm to initiate the downstream transcriptional activities.
LGK974 also preliminarily demonstrated a reliable tumor-suppressive effect in vitro on neuroendocrine tumor and ovarian cancer. 286,287 A study using uncovered similar impairment of bone mass in two different doses of LGK974 in mice (3 and 6 mg/kg/d for 7 days). 281 Another study expanded the dose range (from 1to 30 mg/kg/d) and showed an all ranges covered bone loss after continuous treatment for 4 weeks. 280 What's worrying is that the dosage of 3 mg/kg/d for mice is necessary to reach a significant tumor repression. 282 And the exposure to a high dose of LGK974 (20 mg/kg/d) results in prominent intestinal toxicity. 282 Fortunately, as the sustaining treatment of LKG974 is not required for tumor treatment, the practical clinical dosage may not bring so many side effects. Notably, the clinical dosage of LGK974 is still in exploration without explicit results posted. 288,289 ETC159. ETC159, originally named ETC-1922159, is a potent orally available SMI for Porc. Madan et al. identified its high efficiency in CRC with Rspo translocations. This study also inspected intestinal tissues, but still neglected bones. 283 ETC159 can perform effectively synergistic effects with PI3k/mTor inhibitor GDC-0941 in PDAC with Rnf43 LoF mutation. 278 Based on the outstanding behavior of ETC159 in a preclinical experiment, another phase II clinical trial targeting advanced solid tumors has been carried out. Interestingly, part B of this trial evaluates the combination usage of ETC159 and pembrolizumab. But no results have been posted up to date 290 (Tabl. 2).
Bone loss has been observed after 4 weeks of administration in a dosage of 3-30 mg/kg/d. The research team adjusts administration frequency from qd to qod to validate the hypothesis of whether the bone loss could be attenuated after a full metabolic cycle of ETC159. Results showed that there were no significant differences in several estimated outcomes of bone between treated daily and treated every other day. 280 By the way, ETC131, a similar compound of ETC159, is only used for in vitro assays as a result of its inferior oral bioavailability. 283 CGX1321. CGX1321 is a novel Porc inhibitor, yet only a few studies have researched into this small molecule. For experimental analysis, it has been discovered for a good performance in CRC with fused Rspo. 291 Further in-depth studies respectively included the combined usage of CGX1321 and immune checkpoint blockade therapy to treat ovarian cancer (but the results were unsatisfactory), 292 and liposome encapsulation as a vehicle to deliver CGX1321 to treat cancer stem cells (with expected results). 293 As for the clinical trial of CGX1321, it was solely used in solid tumors including gastrointestinal tumors (Table 2). And the combination of CGX1321 with pembrolizumab in CRCs was constructed to evaluate the safety. 294 Yet no results have been posted currently.
WNT-C59. This drug is not well-studied when compared with LGK974 or ETC159, but it exclusively targets mammalian Porc. It owns good oral tolerance and efficacy in the MMTV-Wnt1 mouse mammary cancer model, 295 the Znrf3/Rnf43 −/− mouse CRC model. 284 And WNT-C59 satisfactorily prevented tumor growth in mice xenografted with SUNE1 or HNE1 (two nasopharyngeal carcinoma cell lines). 296 In addition, WNT-C59 can reverse the resistance of trichostatin A in human pancreatic cells. 297 However, when administrated at 10 mg/kg/d for 7 or 21 days, bone loss was observed in mice. 281 This should be taken into consideration as the potential side effect of WNT-C59 in cancer therapies.
GNF-1331/GNF-6231. GNF-1331 is a precursor drug, and the optimization of it brings on the discoveries of LGK974 and GNF-6231. 298 Currently, GNF-6231 is still a brand-new molecule without clinical studies. But it worked well in MMTV-Wnt1 patient-derived xenograft (PDX) mice model, by reducing Axin2 expression. 298 IWP. Relying on the screening approach of the phenotype of inhibition of Wnt production, this kind of molecule was named as "inhibitors of Wnt production (IWP)". In this study, the research group then verified the exclusive activity of inhibiting Wnt-related Porc function by using IWPs. 299 IWP has three family members, IWP-1, −2, and -L6, all of which oppress Wnt/β-catenin signaling by competing for the active sites of Porc with Wntless. 299 IWP-2 and IWP-L6 had poor metabolic stability in mice, but at least IWP-L6 showed relatively better stability in human. 300 IWPs may synergist with PRI-724 to induce apoptosis in HNSCC through inhibition of both Porc and CBP/β-catenin. 301 However, as IWPs have a similar structure to certain CK1 inhibitors, much research have proposed the effects on CK1 isoform suppression of IWPs. 302,303 IWP-2 and IWP-4 inhibited the proliferation of various cancer cells via antagonizing Porc and CK1α. 303 The growth of gastric cancer cells was restrained via the IWPs-mediated downregulation of Wnt/β-catenin signaling. 304 A similar situation occurred in HNSCC cells as well. 301 mAbs targeting Wnt receptors and co-receptors mAbs are remarkably targeted drugs especially for blocking extracellular or membrane-linked proteins. They are characterized by high affinity and low off-target accidents but have longer plasma half-life and lessened clearance rate. 305 OMP-18R5/OMP-54F28. Also known as vanticumab, OMP-18R5 is a monoclonal antibody that targets human Fzd1, 2, 5, 7, and 8. And it was reported to inhibit the growth of gastric adenomas in mice models, either with or without Apc LoF mutation. 84 OMP-54F28, namely ipafricept, is the chimera of truncated Fzd8 and IgG1 Fc region. 306 Ipafricept shares many features with vanticumab. A study has proved the high antitumor effect of ipafricept in MMTV-Wnt1 cancer models. Moreover, the synergistic utilization of ipafricept gemcitabine demonstrated inhibitory function in a pancreatic PDX model, and significantly reduced the cancer stem cells (CSC) frequency. This combined strategy showed superior capabilities of tumor arrest over the sole usage of ipafricept either or gemcitabine. 307 Encouragingly, either vanticumab or ipafricept exhibited impressive synergistic therapeutic effects with taxanes (paclitaxel, docetaxel, and cabazitaxel). Fischer et al. discovered that there existed a population of taxane-resistant cancer cells after using paclitaxel. And the successional administration of ipafricept/ vanticumab prior to paclitaxel can overcome the drug-resistant effect via a sole administration of paclitaxel. Mechanically, this combined usage of ipafricept/vanticumab and paclitaxel strengthened the mitotic catastrophe. 308 The adverse effects of OMPs are similar to Porc inhibitors, but OMPs showed worse impairment of bone mass. As for ipafricept, in a clinical trial in patients with advanced solid tumors, fragility fractures were reported in two participants at 20 mg/kg q3w. 309 Another trial recruited patients inflicted with recurrent ovarian cancer, in which a patient experienced a pelvic fracture at 5 mg/ kg q3w. 310 Studies pointed out that vanticumab had a more severe side effect on bone than ipafricept. In detail, in a clinical trial with metastatic pancreatic cancer, two participants experienced fragility fractures at 7 mg/kg q2w. 311 Additionally, in a trial attempted to treat advanced or metastatic HER2breast cancer, vanticumab led to fragility fractures in 3 patients, including three Grade 2 events and one Grade 3 events of fracture ranking, at the regimen of 7, 14 mg/kg q2w and 8 mg/ kg q4w. Surprisingly, this unintended outcome appeared under the critical supervision of bone metabolism outcomes and bone anabolic remedy of diphosphonate treatment. At last, a total of 6 patients experienced fragility fractures, leading to the abolishment of this trial. 312 All those fracture events mentioned above were not reported as dose-limited toxicity, because they took place after the 1st 28d-window-phase. Whereas, still no studies could reach the maximum administered dose (MAD) in consideration of bone safety. Overall, though preclinical experiments exhibited a good performance of OMPs, these trials revealed a crisis of applying ipafricept/vanticumab in clinical ( Table 2).
OMP-131R10. Also named as rosmantuzumab, OMP-131R10 is a humanized mAb targeting Rspo3. Some malignant hematopoiesis cancers have aberrant Wnt/β-catenin signaling independent to Wnt ligands, due to the redundancy of Rspos in triggering Wnt/ β-catenin signaling. Based on this, Salik et al. demonstrated that rosmantuzumab can impair the self-renewal and differentiation of acute myeloid leukemia cells in the PDX model, meanwhile free from influencing normal hematopoietic stem cells. 313 For the clinical trial, a phase 1a/b trial about rosmantuzumab in advanced solid tumors and previously treated metastatic CRC is still going on 314 (Table 2).
F2.A. F2.
A is a newly developed antibody that targets 6 of the 10 human Fzds (Fzd1/2/4/5/7/8). The developer synthesized this compound by firstly identification of anti-Fzd antibodies (F2) with a specific profile matching to OMP-18R5 and secondly using combinatorial antibody engineering to find a variant F2.A with specificity to bounding Fzd4. According to their study, F2.A could selectively bound to Fzd4 without competition with Norrin. Moreover, F2.A had a much better potency when treating Rnf43 mutated PDAC when compared to OMP-18R5 and OMP-54F28. 315 DKN-01. DKN-01 is a humanized IgG4 mAb that can bound and block the activity of Dkk1. Physiologically, Wnt/β-catenin signaling activates the transcription of Dkk1, which in turn bounds LRP5/6 and block the recognition of Wnt ligands, forming a negative feedback loop. However, some tumors featured with overwhelmed Wnt/β-catenin signaling are insensitive to Dkk1. In contrast, the superfluous Dkk1 can promote tumor cell proliferation. Though the specific mechanisms are unknown, it was hypothesized that Dkk1 helped tumors escape from immune supervision. 316 Indeed, it was reported that an intact immune system was needed for DKN-01 function in a murine model. 317 Most clinical trials are researching on DKN-01, most of which focused on the digestive system such as the gastroesophageal, intestine, liver, and biliary tract cancers, and the rest focused on NSCLC, gynecologic malignancies, and multiple myeloma. DKN-01 showed satisfactory tolerance in all clinical trials. Intriguingly, some trials reported a better effect of DKN-01 in Dkk1overexpressed patients 318 (Table 2).
UC-961. UC-961, also called cirmtuzumab, is a first-in-class mAb that targets ROR1. ROR1 is often highly expressed in chronic leukemia lymphoma (CLL) cells. As normal B lymphocytes do not express ROR1, UC-961 has precise and good effects in treating CLL. 319 OTSA-101. Fzd10 was found ubiquitously upregulated in synovial sarcoma (SS), but scarcely detectable in normal tissues except the placenta. 101 A group of radioimmunoconjugate humanized antibody OTSA-101 was designed to target Fzd10, whose derivations include 211 At-OTSA-101, 111 In-OTSA-101, and 90 Y-OTSA-101. Among them, 111 In-OTSA-101 is routinely used as a diagnostic tool. Whereas OTSA-101 only showed a weak antagonistic activity on the growth of SS cells. Afterall, OTSA-101 merely became a putative carrier for targeted SS radiotherapy. In preclinical experiments, a large number of cell death occurred in the PDX murine model on the first day after injection of 211 At-OTSA-101. All mice in this interventional group survived, and the tumor volume was greatly diminished. However, 211 At-OTSA-101 was easily accumulated in the stomach and its uptake rate of tumor cells was not as good as 111 In-OTSA-101. But the inhibitory effect of 211 At-OTSA-101 was much better than 90 Y-OTSA-101 281 . Furthermore, 90 Y-OTSA-101 showed obvious bone marrow suppression and significant hematotoxicity. 320 Peptide mimetics Foxy-5. Sponsored and developed by WntResearch (https:// www.wntresearch.com), foxy-5 is a mimic of WNT5A. Preclinical studies showed that a low level of WNT5A was correlated with a more metastatic or advanced outcome in breast and prostate cancer. In accordance, foxy-5 could prevent metastasis to some extent. 321,322 At present, WntResearch has supported five clinical trials registered on NIH and EUCTR, yet with no results posted ( Table 2).
CWP232291. CWP232291, abbreviated as CWP291, can decline the transcriptional effect of canonical Wnt signaling. CWP291 was reported to suppress the growth of castration-resistant prostate cancer through the degradation of β-catenin via apoptosisinduced ER stress. 323 In fact, CWP232204 is the active form of CWP291 in serum. Currently, CWP291 was put forward in clinical trials of treating acute myeloid leukemia (AML) and refractory myeloma without results updated (Table 2).
SMIs targeting cytoplasmic proteins Tnks inhibitors. Tankyrase is a poly-ADP polymerase (PARP), which can make Axin1/2 poly-ADP ribosylation, leading to the ubiquitination and degradation of the latter. Therefore, the silence of Tnks will result in the inhibition of the Wnt/β-catenin signaling. According to the binding sites, Tnks inhibitors can be divided into two categories: targeted nicotinamide subsites and adenosine subsites. The nicotinamide domain is ubiquitous in the PARP enzymes, and its specific site inhibitor is XAV939. For the adenosine domain, it is unique to Tnks, and many Tnks inhibitors (G007-LK, NVP-Tnks656, JW55/74, and IWR) target this site. 324 Having been developed for a long time, Tnks inhibitors have shown outstanding tumor-suppressive effects in preclinical experiments, but none of them have entered into clinical trials. Of note, they were reported to possibly cause bone loss. Inhibition of Tnks (using either XAV939, IWR-1, or G007-LK) in murine models led to the accumulation of a substrate, SH3 domain-binding protein 2 (SH3BP2), which subsequently enhanced the Ranklmediated osteoclast formation. 325,326 In the human genome, the gain of function (GoF) of SH3BP2 promoted osteoclastogenesis so which led to bone loss. 327 Although SH3BP2 can also promote the differentiation and maturation of osteoblasts, 328 Tnks inhibitors dominantly showed an osteoclastogenesis effect in vivo, because the concentration needed for stimulating osteoblast is about 10 times higher than that of osteoclasts. 326 Not only Tnks, but the family of PARP plays an important role in bone homeostasis. Though theoretically, pan-PARP inhibitors are presumed to have a greater influence on bone, there is currently no convincing evidence to prove their regulations of bone mass.
In addition, Tanaka et al. found that CRC cells (whether established cell lines or patient-derived cells) with Apc-truncated mutations responded well to Tnks inhibitors (XAV939, IWR-1, and G007-LK), especially the mutations with all the 20-amino-acid repeats (the β-catenin binding sites of Apc) obliterated. Conversely, the longer Apc mutation may lead to resistance to Tnks inhibitors. These results suggested an underlying therapeutic value of Tnks for dealing with truncated Apc mutant cancers. 329 XAV939 was identified as an inhibitory factor of Wnt/β-catenin signaling, originally in CRC cell lines. This SMI stabilized Axin to increase β-catenin degradation by suppressing Tnks1/2. 171 It has been investigated that beside from CRC, XAV939 can constrain certain cancers by inhibiting the Wnt/β-catenin signaling. The combined utilization of low-dose paclitaxel and XAV939 inhibited breast cancer metastasis and the growth of triple-negative breast cancer. This mechanism was contributed to inhibiting Wnt/ β-catenin signaling to enhance cancer cell apoptosis and attenuate EMT and angiogenesis. 330 In gastric cancer, XAV939 can inhibit the invasion and metastasis of cancer cells. 331 XAV939 and RNAi-Tnks1 inhibited the stemness and migration of cancer stem cells and accelerated cell apoptosis in neuroblastoma by attenuating the abnormal status of Wnt/β-catenin signaling. 332,333 XAV939 and IWR, the inhibitors of Tnks, repressed the growth of lung cancer and reduced tumorigenesis. Experimental data proved the inhibitory effect of XAV939 and IWR on cancer cell growth in murine lung cancer models. 334 XAV939 enhanced the ability of CD4 + lymphocytes in biochemically recurrent prostate cancer cell lines, LNCaP and PC-3. 335 JW67 and JW74, two compounds as inhibitory molecules of Wnt/β-catenin signaling, can inhibit the growth of CRC by Axin2 accumulation and β-catenin degradation. 336 JW74 and JW55 are Tnks inhibitors that bind to the lower part of donor NAD + cleft, instead of mimicking nicotinamide. JW55 also worked well in the murine PDX CRC model with Apc mutation. 337 G007-LK is an analog of JW74, and G244-LM is analogous to XAV939. 171,336 Compound G007-LK and G244-LM can damage the proliferation, colony formation, and growth of CRC cells via activating Axins to suppress Wnt/β-catenin signaling. 338 As an adjuvant, G007-LK can also enhance the sensitivity of glioma stem cells to a chemo drug temozolomide and CRC cells to PI3K/EGFR inhibitors. 339,340 As mentioned above, IWR compounds were defined based on their anti-Wnt pathway activities in a phenotypic screening assay. 299 IWRs, including five molecules, target Wnt-dependent cancers by enhancing Axin capability to suppress Wnt/β-catenin signaling. 299,341 The inhibition of IWRs has been identified in various cancers like osteosarcoma, colorectal, breast, lung, and hepatocellular cancers. 169,299,[342][343][344][345] IWR-1 served as a good adjuvant that reversed the resistance of osteosarcoma to doxorubicin, and in vivo inhibited the growth of subcutaneous PDX osteosarcoma. 342 Moreover, IWR-1 could inhibit the EMT of CRC by inhibiting Wnt/β-catenin transduction. 343 In the following part, we are going to discuss several agents, NVP-TNKS656, AZ1366, RK-287107, and HLY78, that are Tnks inhibitors with very limited preclinical reports. For instance, NVP-TNKS656 increased the sensitivity of CRC cells to PI3K/AKT inhibitor in vivo, but the increment effect could be reversed by high FOXO3A. 346 And NVP-TNKS656 also inhibited the metastatic and invasive EMT hallmarks of hepatoma carcinoma cells. 347 AZ1366 is a novel inhibitor of Tnks to constrain NSCLC growth. EGFR-driven NSCLC was suppressed by the synergistic function of AZ1366 and EGFR inhibitors. 348 AZ1366 could erase the insensitivity of CRC cells to irinotecan, 349 and it coordinated with EGFR inhibitor to control the growth of Wnt-responsive lung cancers in the murine models. 348 RK-287107 was claimed to inhibit Tnks1/2 four or eight times more than G007-LK. It could down-regulate the Wnt/β-catenin signaling in Apc-truncated CRC cells, but had little effects in wild-type cancer cells. 350 An SMI from the synthetic chemical library of lycorine derivatives, 4-ethyl-5-methyl-5,6dihydro- [1,3]dioxolo [4,5-j]phenanthridine (HLY78), activated Wnt/ β-catenin signaling by targeting the DIX domain of Axin and enhanced the effect of Axin-Lrp5/6. 351 CK1 inhibitors. The family of CK1 proteins has several isoforms, of which CK1α/δ/ε are involved in the canonical Wnt signaling, but their roles are distinguished. CK1α serves as a component of the destructive complex and phosphorylates β-catenin. Instead, CK1δ/ ε phosphorylates Dvl leading to the stabilization of β-catenin. And CK1δ and CK1ε are also moonlighting proteins that control the circadian clock. IC261 is a selective, ATP-competitive CK1 inhibitor that has shown its high efficiency in handling CRC and glioblastoma cells. 352,353 Not only that, IC261 has alternatively biological effects that may lead to the death of cancer cells independent of inhibiting the canonical Wnt pathway. 352,354 In an in vitro study, IC261 induced centrosome fragmentation during mitosis independent of CK1δ. 355 Other CK1 inhibitors include PF670 and PF480 that do not kill cancer cells.
Also known as umbralisib, TGR-1202 is a dual kinase inhibitor that targets both PI3Kδ and CK1ε. The non-canonical Wnt pathway has been demonstrated to play an important role in CLL with CK1δ/ε overexpression. 356 TGR-1202 is exclusively used in hematological malignancies and is currently recruited in clinical trials for treating CLL, Hodgkin's lymphoma, mantle cell lymphoma, and so on (Table 2).
Pyrvinium, known as an antiparasitic drug, has been approved by FDA as an orphan drug for FAP because of its exclusive CK1α agitation capability. Earlier experiments have shown that pyrvinium inhibited the proliferation of HCT116 and SW480 cell lines by selectively activating CK1α and inhibiting canonical Wnt signaling. 194 This activation may be achieved by allosteric regulation, thereby improving the catalytic ability of CK1 without affecting the binding of substrates. 357 Although pyrvinium also showed inhibitory ability in other Wnt/β-catenin signaling-driven tumor cell lines (SUM-149/SUM-159, 358 and A2278 359 ), its bioavailability was low in tissues other than the intestine. 360 In addition, pyrvinium also stimulated CK1γ, leading to Wnt signaling suppression. However, some scholars indicated pyrvinium pamoate lacked the efficacy on CK1, instead, it downregulated AKT by an undiscovered mechanism to activate Gsk3β. 361 Gsk3 inhibitors. Ambiguously serving as carcinogenic or cancer suppressor, 362 Gsk3 lies downstream of diverse signaling pathways, including the Wnt/β-catenin signaling pathway. Therefore, targeting Gsk3 has very restricted specificity to produce unexpected off-targets. Thus, this review will not thoroughly discuss Gsk3 inhibitors due to their low specificity of Wnt/β-catenin signaling. Therefore, here only lists a typical Gsk3 inhibitor that has shown good performance in inhibiting Wnt/β-catenin signaling.
LY2090314 is a potent Gsk3α/β inhibitor that has demonstrated good cooperation with platinum agents in vitro in the treatment of melanoma, 363 AR-V7 + prostate cancer, 364 and neuroblastoma. 365 But in a phase I clinical trial, the combination of LY2090314 and pemetrexed/carboplatin showed eleven DLTs in ten enrolled patients 366 (Table 2). Other Gsk3 inhibitors like ABC1183 and CHIR-99021 are currently under exploring.
Dvl inhibitors. Though serving as a critical conductor in the canonical Wnt pathway, only few drugs and studies targeting Dvl are available. Till now, targeting Dvl has already shown a notable potentiality for tumor treatment. For instance, Dvl2 was overexpressed in HCC and reported to link with poor prognosis. 367 In comparison to normal adult bronchial/alveolar epithelial and peripheral blood mononuclear cells, Dvl1-3 were found to be exclusively expressed in NSCLC and CLL cells, respectively. 368,369 Most drugs targeting Dvls currently are developed to selectively inhibit PDZ-Fzd interaction. Such as FJ9, it was significantly reported to cause apoptosis in melanoma and NSCLC cell lines. 370 3289-8625, another Dvl inhibitor, suppressed the growth of PC-3 cells. 371 Taken together, the druggability of Dvls remains largely unknown. Urgent studies are demanded to unearth more applicable chances for drugs targeting Dvls in the future.
Agents targeting protein-protein interaction (PPI) in the nucleus Nuclear localization of β-catenin will initiate downstream gene expressions. This process involves the formation of a key transcription complex, β-catenin-Lef/Tcf. β-catenin-dependent transcriptional regulation is also modulated by the phosphorylation protein Tnik, and various transcription co-factors like CBP, BCL9, CREB, BRG1, etc. These proteins listed above are all potential therapeutic targets of Wnt/β-catenin signaling. Despite the most core position of β-catenin in canonical Wnt signaling, pityingly, among all previously screened SMIs, few can directly bind to β-catenin. This is because, unlike most enzymes or receptors with identifiable binding pockets, the PPI surface of β-catenin is relatively large and flat for small molecules. And the Tcf-binding domain within β-catenin is overlapped with many other proteins, making it difficult to specifically interfere with Tcf/Lef 372 (Fig. 4). Moreover, among the limited SMIs (CWP232228 for example) that can directly bind to β-catenin, there lacks evidence of Wnt signaling inhibitory in vivo. In contrast, targeting Tnik showed more stable inhibition of canonical Wnt signaling.
Tnik inhibitors. As mentioned above, Tnik is a critical therapeutic target of CRC. Among the Tnik inhibitors, the most commonly used are aminothiazole-based. Masuda et al. discovered a series of SMIs called NCB, classified as ATP-competitive inhibitors. After a high-throughput screening of the small-molecule compound library, they firstly found NCB-0001 (with a moderate inhibitory effect on canonical Wnt signaling in HEK293 cells), which led to the discovery of N5355 following the structural optimization. 252 N5355 significantly reduced the expression of Wnt/β-catenindependent genes, such as Axin2 and cMYC, in HCT-116 cell line. 373 What's worth noting, N5355 did not affect the vitality of canonical Wnt signaling independent cell lines, HELA and HEL299. 252 This research group subsequently discovered NCB-0005 (or called KY-05009), which markedly inhibited the activation of Wnt/β-catenin signaling mediated by TGF-β1 in A549 cells. 373 Also, the NCB-0005 cooperated with the RTK inhibitor dovitinib to impede the growth of multiple myeloma cells. 374 Their latest discovery was NCB-0846, which inhibited the growth in a variety of patient-derived CRC xenografted tumors, 375 and abrogated the EMT of DLD-1, HCT-116, and A549 cell lines. 376,377 As ATP-competitive inhibitors, NCB series drugs may also have inhibitory effects on other kinases. Therefore, it should be concerned that the clinical effects of NCB drugs may not be limited to target canonical Wnt signaling. In addition to CRC, inhibitory effects of NCB drugs have also been reported in other tumors. 378,379 Inhibitors targeting β-catenin--Lef/Tcf complex. Such inhibitors block the PPI between β-catenin and Tcf/Lef. However, the Tcf/Lef recognition domain on β-catenin highly overlaps with that of Axin, Apc, E-cadherin, and other proteins, which may bring potential adverse effects derived from off-targets of canonical Wnt signaling.
Transcription co-factor inhibitors. For less superposition and narrower PPI surface in the recognition domain, inhibitors targeting the terminal domains (unstructured or intrinsically disordered protein regions) are putative to be more specific and druggable. These regions involve the binding of transcriptional cofactors like BCL9/B9C and CBP. Indeed, the SMI 40-fluoro-Nphenyl-[1,10-biphenyl]−3-carboxamide and its derivatives showed a much greater affinity toward β-catenin/BCL9 over β-catenin/E-cadherin. 389 For targeting CBP, PRI-724, and ICG-001, a pair of enantiomers, are potent inhibitors of canonical Wnt signaling that antagonize the binding of β-catenin to CBP but not affecting the viability of p300 meanwhile. They have similar effects and sometimes can be used interchangeably. Their preclinical efficiency has been demonstrated in a variable of cancer cells such as HNSCC, 301 hepatoma carcinoma, 390 and neuroendocrine tumor cells. 286 A clinical trial of cancer research is still engaging but without results posted (Table 2, two of the three trials were terminated). However, they seemed to be more applicable in antifibrosis than in antitumor. 391,392 BCL9 provides an excellent opportunity for targeting Wnt signaling therapy. In normal cells, BCL9 expression is almost undetectable, 393 but it reigns a set of EMT-regulated Wnt target genes in cancer cells. 394 And after knocking out Bcl9 and B9l in mice, no obvious harmful phenotypes were found, indicating that these genes are of little importance for mammals to balance Wnt signaling in normal tissues. 394 Based on this, Takada et al. developed the stapled peptide stabilized α helix of BCL9 (SAH-BCL9), which can snatch β-catenin from the endogenous Bcl9/Tcf/ β-catenin complex with greater affinity. SAH-BCL9 showed a significant antitumor effect in both colo320 in vitro and intraperitoneally injected murine model in vivo. 395 Fig. 4 The structural illustration of human β-catenin protein. In this figure we mainly demonstrated the important PPI binding domains and phosphorylation sites of β-catenin. This image was modified from a published research. 372 Other drugs may target Wnt/β-catenin signaling. Apart from the synthetic compounds, in recent years more and more natural products have been revealed to impact on repressing Wnt/ β-catenin signaling. [396][397][398][399] The most studied may be resveratrol, a phytoalexin produced when plants are inflicted with injuries or pathogenic attacks. Resveratrol has been enrolled into two phase I clinical trials, investigating its value in the dietary supplement in CRC prophylaxis. 400,401 Moreover, silibinin, an extract from the milk thistle seeds, also showed its potentiality in preventing tumorigenesis in a preclinical model. 402,403 In addition to natural products, many FDA-approved drugs have expanded their indications to crosstalk with Wnt/β-catenin signaling. Just as mentioned above, as an antiparasitic agent pyrvinium also inhibited Wnt/β-catenin pathway via activating CK1α, or Gsk3β. 194,361 Other old drugs like niclosamide to inhibit canonical Wnt signaling by promoting Fzd1 endocytosis, 404 celecoxib to induce the degradation of Tcf7, 405 and salinomycin to target LRP6. 406 Due to the functions of these old drugs for targeting Wnt/β-catenin signaling are still limitedly known, we will not continue to discuss these drugs. For more details, please find in Table 3.
CHALLENGES OF WNT/Β-CATENIN-TARGETED THERAPIES
Even though drug development in the field of Wnt targeted therapies in cancers has been so prosperous, no exclusive drug has yet been approved by the FDA. This dilemma has aroused an un-neglectable crisis for moving forward the targeted therapies of canonical Wnt signaling in cancers. There are still many challenges before they could be put into large-scale clinical trials. This review crudely divided these drugs into three categories: SMIs, mAbs, and modified peptides. The molecular and structural properties determine the potential diversity of the clinical applications of these three categories. mAbs and peptidomimetics are more suitable for the targets on cell membrane surface, accordingly, SMIs are more suitable for targeting Wnt receptors and intracytoplasmic kinases, and stapled peptides are suitable for disrupting PPI in the nucleus. For SMI, they are known for their good permeability and oral bioavailability, but off-target effects are common. This is especially inevitable for ATP-competitive drugs. mAbs are known for their unparalleled specificity, but their oral availability is poor. The mAbs have a slow clearance rate and a long half-life, which increases the burden on the liver and kidneys of the patient and increases the side effects. Besides, mAbs also have poor permeability and cannot target intracellular sites. Most importantly, current modifications of mAbs failed to decrease the antigenicity and increase tolerance. For modified peptides, they can achieve good permeability and tolerability, but a larger dose is required for them to maintain the therapeutic concentration.
Considering the ubiquitous and extensive involvements of Wnt/ β-catenin signaling in regulating various normal tissue functions, it is ineluctable to accept the unintended byproducts of targeting canonical Wnt signaling. Although it provides an ideal approach to target nuclear β-catenin in aberrant canonical Wnt signalingassociated cancers, it is currently unprocurable to develop such kinds of targeted agents because of the very limited druggability. This crisis of developing the targeted therapies of Wnt/β-catenin signaling in cancers may possibly be overcome via the in-depth analysis of the preexisting molecules. Also, more development strategies such as immunotherapy, the synergistic medications of drugs, the decrement of mAbs weight, and so on could endow us with underlying breakthroughs to realize the canonical Wnt signaling targeted therapies in cancers.
CONCLUSION AND DISCUSSION
Identified decades ago, Wnt/β-catenin signaling immediately generated substantial interest in the field of cancer research because of their extensive involvements and intensive roles in regulating numerous aspects of cancers, including the initiation, development, progression, diagnosis. In this study, we revealed that the status quo investigations have depicted an unneglectable crisis of the Wnt/β-catenin-dependent targeted therapies in cancers. In detail, most of the medications targeting Wnt/β-catenin signaling in cancers have not been enrolled into From the second to the fifth lane, we provided the types of drugs as SMIs, mAbs, Peptides, and others, and the parts that follow the colon indicted the names of drugs and the targeted components of Wnt/β-catenin signaling were shown in the brackets clinical trials, and even the registered clinical trials remain in the very early phases of which most have not demonstrated satisfactory outcomes (Fig. 5). However, this dilemma clashed with the early evidence in in vitro and in vivo with the optimal performance of targeting Wnt/β-catenin signaling in cancers. The reasons for these poor therapeutic benefits rely on the fact that the current remedies of Wnt/β-catenin signaling-associated targeted therapies in cancers often lack satisfactory efficacy, specificity, and safety. For instance, due to the crucial roles of Wnt/ β-catenin signaling in bone, many targeted therapies demonstrated obvious side effects of severe bone loss. Furthermore, certain SMIs and mAbs targeting Wnt/β-catenin signaling showed limited specificity because of the difficulty of identifying the druggable structures and sites of Wnt/β-catenin signaling components. These facts suggest that Wnt/β-catenin signaling targeted therapies in cancers are still lagging behind for a solid clinical translation. Nevertheless, as massive early experimental investigations have already proven the benefits of targeting Wnt/β-catenin signaling in cancers it is still worthy to further analyze the capabilities of developing Wnt/β-catenin signaling targeted therapies in the future (Fig. 6). It should be noticed that future studies are called to solve the current crisis via decreasing the side effects but improving the specificity and safety of Wnt/β-catenin signaling targeted therapies in cancers. To sum up, our review systematically exhibited the strengths and weaknesses of the most updated targeted therapies of Wnt/β-catenin signaling in cancers, aiming to generate a thorough awareness of current challenges and crises. Ultimately, this study sought to provide future studies with the issues and insights that should be taken into account for developing better-targeted therapies of Wnt/β-catenin signaling in cancers. | 2021-08-30T13:39:30.134Z | 2021-08-30T00:00:00.000 | {
"year": 2021,
"sha1": "d9c59f940ea0d38cbcc63a5302820ef80e158dbf",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41392-021-00701-5.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9c59f940ea0d38cbcc63a5302820ef80e158dbf",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
263433413 | pes2o/s2orc | v3-fos-license | Every Little Bit Counts: The Impact of High-Speed Internet on the Transition to College
This paper investigates the effects of high-speed Internet on students' college application decisions. We link the diffusion of zip code-level residential broadband Internet to millions of PSAT and SAT takers' college testing and application outcomes and find that students with access to high-speed Internet in their junior year of high school perform better on the SAT and apply to a higher number and more expansive set of colleges. Effects appear to be concentrated among higher-SES students, indicating that while, on average, high-speed Internet improved students' postsecondary outcomes, it may have increased pre-existing inequities by primarily benefiting those with more resources.
I. Introduction
College planning is complicated: there are thousands of colleges in the United States, each with countless attributes, and students face uncertainty in both admission and completion.
Moreover, students encounter hurdles and barriers throughout the lengthy college planning and application process. These barriers could be procedural, such as requirements that students register for exams by certain deadlines, as well as informational, such as the wealth of college characteristics that students are expected to obtain and distill to help produce a better fit. These complications, along with the many steps involved in the application process (Klasik, 2012), have contributed to widespread inequality in college access (Bowen, Chingos, and McPherson, 2009;Hoxby and Avery, 2013;. While college access efforts, programs, and organizations are designed to help students overcome the optimization problem they face, there is an emerging literature that demonstrates that students often do not have full or correct information when making this consequential decision (Dillon and Smith 2013;Hoxby and Turner, 2015). A natural question follows -how can we encourage students to apply to and attend colleges that best fit their needs? Prior successful interventions have aimed to provide students with additional information (Avery and Kane, 2004;Carrell and Sacerdote, 2013) and change the way information is presented (Bettinger et al., 2012;Turner, 2013 and2015). Other studies have found that students' decisions are quite sensitive to small changes in information or costs. 2 Thus, in theory, a technology that increased the availability or improved the presentation of information could also generate large changes in college-going.
This paper investigates whether the dramatic and conditionally-random national diffusion of broadband Internet over the last decade affected students' college application behavior. Between 2000 and 2013, high-speed Internet usage increased from 3 percent to 70 percent. 3 Recent research concludes that the rollout of broadband to households had an impact on a host of outcomes, such as academic achievement (Vigdor et al., 2014;Faber et al., 2015), labor force participation (Dettling, forthcoming), voter turnout (Falck, Gold and Heblich, 2014), and 2 Evidence of students' responsiveness to small changes in information or costs include interventions that change: 1) rules of thumb (Pallais, 2015), 2) the salience of college rankings (Luca and Smith, 2013), 3) financial aid offers (Cohodes and Goodman, 2014), 4) application fees and essays , and 5) admissions exam taking (Bulman, forthcoming;Goodman, 2013;Klasik 2014;Hurwitz et al. 2015). 3 High-speed internet usage rates were obtained from PEW Research at www.pewinternet.org/data-trend/internetuse/connection-type/.
criminal behavior (Bhuller et al., 2011). An important mechanism emphasized in this research is ease and speed of information acquisition. In this paper, we tie the two strands of literature together and examine whether the diffusion of high-speed Internetwhich could make it easier and less costly to obtain information about the college application processaffected students' application decisions.
Our main empirical strategy links administrative test-taker data from the College Board to a zip code-level measure of residential broadband Internet availability during students' junior year of high school. 4 We derive a new measure of broadband Internet availability from data collected by the Federal Communications Commission (FCC) on the number of broadband Internet Service Providers (ISPs) in each zip code from 1999 to 2007, a time when broadband Internet prevalence was skyrocketing. To our knowledge, ours is the first measure of availability at the zip code-level that reflects aggregate usage patterns. 5 Using information on millions of students' Preliminary SAT/National Merit Scholarship Qualifying Test® (PSAT) scores, SAT scores, and SAT Score Sends (as a proxy for applications), we first probe the extent to which broadband Internet availability induced SAT score changes and then examine a range of college application outcomes. Specifically, we investigate whether students with broadband Internet are more likely to apply to a four-year college, relatively many colleges, relatively selective colleges, an academically matched college, top liberal arts colleges, the flagship university within a student's state, and an out-of-state college.
Our findings indicate that increased availability of broadband Internet in a student's zip code unambiguously improves her admissions exam scores and her application set. On average, her test score improves by 0.7 scale points and her application portfolio increases in both size and quality, with the magnitude of these changes ranging from 0.2 to 0.4 percentage point, depending on the specific outcome we consider. Some of these results are particularly striking when we scale by mean application rates; for example, about 7.2 percent of the SAT-taking population apply to a very selective liberal arts college, but we find that high-speed Internet availability increased that rate by almost 0.2 percentage point (or 3 percent). Further, while an applicant's performance on the SAT and her application set are undoubtedly intertwined, we find that the improvements in her application portfolio persist beyond what can be explained by her improved score: less than one-fifth of changes in her application outcomes can be traced to Internet-driven increases in average test scores.
For a causal interpretation of our estimates, we must assume that high-speed Internet is exogenous to student's testing and application outcomes, conditional on zip code fixed effects and student and time-varying zip code-level controls. We examine the validity of this assumption in several ways. First, we confirm that early and late adopting zip codes had similar trends in outcomes before high-speed Internet was available in the late 1990s. Second, we show that the availability of high-speed Internet in the student's home zip code in subsequent yearsi.e., when she is presumably in collegehas no effect on her application outcomes. Third, we examine broadband availability in an event-study framework and estimate application effects in years prior to availability that are generally indistinguishable from zero, a pronounced jump in the year broadband becomes available, and generally positive and statistically significant effects thereafter. Last, because we observe outcomes only for students who take an exam administered by the College Board, to mitigate selection concerns, we demonstrate that our main estimates are robust to the exclusion of states in which the ACT-a college admissions exam that competes with the SAT-prevails among college-going students.
Even as high-speed Internet became more accessible across the United States, it is likely that not every type of student benefited equally from its availability. 7 In particular, it is well known that high-speed Internet take-up rates and computer accessibility vary by socioeconomic status. 8 That said, the research on college under-match suggests teens from a lower socioeconomic background may be the least informed about college, and hence, may stand to gain the most from high-speed Internet. When we estimate the effects of high-speed Internet availability separately by group, we find that application improvements appear to be driven by higher-income students, 7 Moreover, even if there were equally improved access to the efficiencies of high-speed Internet-which there is not -the general equilibrium effects are less clear, since students may be competing with one another for spots. This is beyond the scope of this paper. 8 For example, the October 2003 CPS indicates that among 15-18 year olds whose mother did not complete high school, 8 percent had access to broadband at home and 55 percent had access to a computer at home. Among 15-18 years olds whose mother has a post-graduate degree, 46 percent had access to broadband at home and 97 percent had access to a computer at home. By 2009, when high speed Internet access was nearly universal, only 58 percent of 15-18 year olds with a mother who did not complete high school had access to broadband at home, compared with 95 percent of 15-18 year olds whose mother has a post-graduate degree. Computer ownership was not asked in 2009. students in urban areas, white students, and those with more-educated parents. These findings likely reflect an array of known impediments between availability and effective use for lowerresource groupsi.e., differential household adoption rates, within-household access to a computer, and use of the Internet to acquire and distill information on colleges. Altogether, our results indicate that while, on average, high-speed Internet improved students' postsecondary outcomes, it may have widened existing inequities, favoring those with more resources.
The contribution of our paper is fourfold. First, we provide the first causal estimates of the effects of high-speed Internet availability on college admissions testing and application behavior. 9 Second, we add to a literature which finds that small cost reductions, and, specifically, improved access to information, can improve postsecondary outcomes. We demonstrate that significantly many students, if given the opportunity, appear to be able to obtain and distill information on colleges and universities for themselves. We also show that some students are potentially being left behind by the information age; because unobserved barriers limit their access, because these students require more guidance on how to benefit from their access, or some combination of the two. Third, given that universal broadband Internet is a central policy goal, our finding that broadband Internet availability can change college-going outcomes, but that the benefits may unequally accrue to higher-resource students, suggests a more intensive intervention than deployment alone may be necessary to realize the full benefits of increased availability in underserved areas. Last, we make a methodological contribution by developing a way to measure broadband Internet availability at the local level that matches aggregate usage patterns.
II. Conceptual Framework and Related Literature
There is a large and growing literature in economics on college choice from which we derive three key findings that serve as the bedrock of our analysis. One, there are substantial returns to college quality (e.g., Card, 1995;Black and Smith, 2006;and more recently, Zimmerman, 2014), particularly among disadvantaged students Krueger, 2002, 2011). Two, despite the potential for large returns, many students, especially disadvantaged students, do not apply to or 9 There exists a related literature on the effects of computer and Internet technology on academic achievement (e.g., Faber et al. 2015;Vigdor, Ladd, and Martinez, 2014;Belo et al 2013;Fairlie and Robinson, forthcoming). We view our paper as complementary to those papers since we examine the role of Internet technology in reducing informational and transactional frictions in the college application process, which could also indirectly improve academic achievement if students attend better matched or higher quality colleges.
attend a college commensurate with their abilities (Bowen, Chingos, and McPherson, 2009;Pallais and Turner, 2006;Spies, 2001). In the literature, this is typically referred to as "undermatching." Three, information constraints appear to play a sizable role in under-matching (Hoxby and Avery, 2012;Turner, 2013 and2015).
Several recent papers indicate that high school students exhibit large changes in behavior in response to small changes in costs and pieces of information, and the strongest responses usually occur among subsets of students lacking sound pipelines to college. Smith, Hurwitz, and Howell (2015), for instance, find that a small increase in an application fee or an additional essay heavily influences application behavior. Further, students' application sets are seemingly guided by defaults or perceived "recommendations." Pallais (2015) finds that students apply to more schools when given an additional free Score Send, a cost savings of $6. Cohodes and Goodman (2014) find that students forego large expected earnings for small offers of financial aid from the state. Several papers also find that state mandated admissions exams and proximity to test centers induce large enrollment changes (Bulman, forthcoming;Goodman, 2013;Klasik 2014;Hurwitz et al. 2015).
Such large behavioral responses to defaults, nudges, and small changes in costs suggest students are not well informed of optimal strategies in the application process. Several recent experimental studies have explored this idea by examining whether targeted information provision can improve application behavior. These experiments have directly intervened with the college application process and made tailored recommendations to particular student groups.
Examples of interventions include filling out financial aid forms, counseling on the application process generally, and helping students obtain fee waivers that they were already eligible to receive (Avery and Kane, 2004;Bettinger et al., 2012;Carrell and Sacerdote, 2013;Turner, 2013 and2015). 10 Each intervention has generated large improvements in students' attendance outcomes.
The diffusion of broadband Internet serves as a hands-off experiment that could reduce some of the transactional frictions in the college search. High-speed Internet allows prospective 10 Hoxby and Turner's experiment, in particular, produced large enrollment effects from targeted mailings even though their student sample was drawn from the universe of college admissions test-takers, so that both their treatment and control groups were equally college-bound and eligible to receive less-streamlined marketing materials from colleges. Consistent with prior observational studies, a survey of both groups found that untreated students were dramatically under-informed about college qualityparticularly at top-ranked liberal arts schools and out-of-state schools but also at state flagshipsrelative to students who received their intervention.
applicants to quickly and easily conduct tailored searches for the information they desire - Compared with the more targeted interventions described above, the nature of the relationship between high-speed Internet access and our outcomes is more complex. Students with Internet access must search for information and distill it largely by themselves. If students do not use the Internet to study for exams and search for information on college application, we would see no effect on student outcomes. If students primarily use the Internet for leisure activities, and the Internet serves as a distraction leading students to substitute time towards leisure, we could see worse outcomes. Ultimately, whether high-speed Internet availability, on average, generates positive, negative, or no effects on student outcomes is an empirical question we strive to answer through the paper. 11 Driven in part by this ambiguity, we also investigate heterogeneity by demographics and geography. We consider two broad hypotheses. First, effects could be concentrated among groups found to under-match or who have been otherwise shown to be sensitive to information 11 Indeed, surveys offer ambiguous and inconclusive evidence on whether students use the Internet to effectively search for information on college application. A 2005 survey of Internet users found that 45 percent of Internet users, or 30 percent of all adults, had used the Internet to search for information on prospective college or universities for themselves or a family member (PEW 2005). At that time, this was similar to reported usage rates for banking online (44 percent) and looking for information about a job (44 percent), and higher than reported usage rates for reading a blog (27 percent), playing online games (32 percent), looking at online classifieds (36 percent), or using social networking sites (7 percent) (PEW, 2014). A 2012 survey found that three-fourths of teachers think the Internet and digital search tools have a mostly positive impact on students' research habits, suggesting students may be able to conduct effective online searches (Purcell et al., 2012). However, 64 percent of teachers think digital technologies do more to distract students than help them academically (Purcell et al., 2012), suggesting that students may use the Internet for counterproductive activities that detract from study and search activities.
nudges. Per the literature, these tend to be students with the fewest inroads to elite colleges. As such, we examine whether minority students or students in low-income neighborhoods, students with less-educated parents, or students from remote places are relatively more affected by highspeed Internet access.
Still, the literature on under-match is fairly new, and many of the most relevant studies were conducted after high-speed Internet had become commonplace. This alone suggests that the mere prevalence of high-speed Internet may not have been particularly beneficial to these traditionally-underserved populations, either because of a lack of take-up or differential usage of Internet services. (Certainly, these populations on a whole are demonstrably less likely to have and use Internet services.) Indeed, within North Carolina, Vigdor, Ladd, and Martinez (2014) recently found that for younger students, Internet access increased the wedge in testing outcomes between high-and low-income students, which they attribute to a digital divide in how productively the Internet is used by the two groups at home. Thus, another hypothesis is that the greatest beneficiaries of high-speed Internet as it rolled out were the students with the most initiative and/or the greatest resources, especially if such students tend to use the Internet for education-related activities more than their peers. This might imply a concentration of effects among groups with greater inroads to colleges.
III. Data
The main empirical approach used in this paper is to relate zip code-level broadband Internet availability to individual-level testing and application data for students who took SAT exams.
This section describes our main data sources and how we construct our relevant variables.
a. Testing and Application Data
Our primary data source is the College Board (CB), an organization that administers the Preliminary SAT/National Merit Scholarship Qualifying Test® (PSAT) and the SAT to high school students. The PSAT is an assessment taken prior to the SAT that serves as a qualifying exam for a nationally competitive scholarship program. Approximately 3.5 million students take the PSAT each year, typically either in the fall of their sophomore or junior year or both. The SAT is primarily a college admissions exam, and it yields a key metric on admissibility as well as application and demographic information for a majority of college-bound students. Over 1.5 million students in the graduating high school class of 2014 took the SAT, typically as juniors or seniors and often as both. Using the population of PSAT takers for our analysis of SAT outcomes provides us with a pre-"treatment" ability metric, which can be related to test-taking and application outcomes in the SAT sample.
Our sample is composed of test-takers in the graduating high school cohorts of 2001 to 2008, with roughly one million students per cohort. 12 CB data contain self-reported information on high school graduation cohort, student race, gender, cumulative high school GPA, home zip code, parental income and education, and high school attended. The SAT contains math and critical reading sections, each of which is scored on a scale between 200 and 800 for a maximum composite score of 1600. 13 The PSAT has a similar format, but the section scoring is between 20 and 80.
Along with exam scores and basic demographics, the CB data also identify colleges to which students send official copies of their SAT scores (Score Sends), which serve as good proxies for actual college applications (Card and Krueger, 2005;Pallais, 2015). When registering for the SAT, the student has the option to send her scores to four colleges for no additional cost. Scores may also be sent at a later date for a fee of approximately $11 per Score Send. 14 For every Score Send, we merge characteristics of each college, including data on quality (average SAT score of incoming freshmen), control (public or private), level (two-or four-year), type (e.g., liberal arts, state public flagship), and location ( Table 1 displays the summary statistics on the sample. Table 1 displays scoring and application means of SAT takers. The average combined PSAT score on both sections is just over 99 and, similarly, the average SAT score, which combines the results of these same testing areas, is just over 1000. According to the students' Score Sends, about 79 percent submitted an application to at least one four-year college. In addition, 40 percent applied to at least five colleges, which, importantly, is one more than the default of four 12 We use the 2001 to 2008 cohort in order to be able to examine Internet access in both junior and senior year of high school. Internet access is available from December 1999 to December 2007. 13 The writing section was introduced in 2005, making the maximum composite score 2400. For consistency across classes, and because colleges typically do not rely on the writing section, we focus only on the math and critical reading sections. 14 The cost changed slightly over the sample period. 15 A few students were excluded because all their demographic data were missing or because they live in a zip code with no information on broadband access. free Score Sends. 16 Approximately 50 percent applied to a selective college (average SAT score greater than 1200) and 30 percent applied to a very selective college (average SAT score greater than 1300), though almost 70 percent of students applied to an academically matched college (average SAT score at least as good as their own or 1300, whichever is lower). Zooming in on school type, about 30 percent of test-takers applied to the state flagship, 50 percent to at least one out-of-state college, and less than 10 percent to a top private liberal arts college.
The top of Table 1 displays demographic characteristics of our sample. The sample consists of 67.2 percent who identify as white, 10.7 percent as black, and 9.5 percent as Latino/Hispanic.
About 45 percent of the sample is male. High school GPA is a categorical variable where 0 is a non-response, 1 is the lowest, and 12 is the highest. We use the categorical variable in the analyses but present the average of the continuous version (equal to 3.363) in the table.
Finally, we add zip code-level economic characteristics to our data in order to control for changes in local economic conditions. Mean adjusted gross income is $77,444, which is measured at the zip code-level and was obtained from the IRS Statistics of Income (SOI) data. 17 Population and housing data at the zip code-level were obtained from the 2000 Census and made time-variant by merging with zip code-level trends in SOI counts of filers and households. We capture local labor market trends by including information on county-level unemployment rates, collected from the Bureau of Labor Statistics. We also include trends in county-level house prices, obtained from the FHFA house price index and the 2000 Census. 18
b. High-Speed Internet Access Data
Our goal is to estimate how broadband Internet affects student testing and application outcomes. Since there is no measure of a student's ability to use or access broadband in the CB data, we construct a measure of broadband availability in a student's zip code. 19 To do so, we 16 The modal number of Score Sends is four and Pallais (forthcoming) shows that students tend to apply to the number of free Score Sends. Therefore, sending at least five Score Sends is a deliberate act. 17 We interpolate missing years in this data. 18 We construct a county-level measure of house prices by combining information on county-level median home prices from the 2000 Census with the Federal Housing Finance Agency house price index, as in Dettling and Kearney (2014). Urban counties use the Metropolitan Statistical Area (MSA) version of the index and rural counties use the rural index. 19 We might alternatively attempt to derive variation from household or student usage rates. Unfortunately, usage rates cannot be constructed for all years at the subnational level. The PEW data are available frequently but are only available at the national level. The CPS data include state identifiers but are only available in 2000, 2001, 2003, 2007, and 2009. Moreover, a measure of access is preferable to usage rates, which capture a student's ability to access online content but are also endogenous to our question of interest if, for example, parents take up high-speed Internet in order to improve their children's educational outcomes. However, in the appendix, we confirm that there combine information on zip code-level ISP coverage with national trends in aggregate use of broadband services to produce a binary measure of broadband availability such that any household within a zip code where broadband is available can opt to have it. Because individual households have very little control over whether and when providers enter their zip code, and very little impact on aggregate usage, the measure we derive is interpretable as an exogenous shock to a student's ability to use high-speed Internet. In this section, we describe the data and construction of our measure and save our examination of its exogeneity for the analysis section.
We derive our measure of broadband ISP coverage from FCC Form 477 Filing data. 21 The FCC requires every facilities-based provider with at least 250 high-speed lines to report basic information about its service offerings and end users twice a year. 22 The FCC releases summary statistics to the public aggregated to the zip code-level, namely a list of zip codes with the number of ISPs who have at least one subscriber within the zip code receiving speeds of 200 kbps or more. 23 The data are available bi-annually from December 1999 to June 2008, and to protect confidentiality, do not distinguish between one, two, or three providers in a zip code.
Over that time, there is considerable variation both across and within zip codes.
Ideally, we would like to operationalize the data on the number of ISPs in a zip code to be able to compare, across zip codes and time, the average resident's ability to access high-speed Internet in her home. While we do not have a direct measure of zip code-level access from which to derive a correspondence between the number of ISPs in a zip code and accessibility, we can compare nationally aggregated coverage rates implied by the FCC data to survey-reported national usage rates of high-speed Internet. This is a reasonable litmus test for how well the raw FCC data capture market penetrationi.e., how provider entry translates into usageat least at the national level. Figure 1 compares trends in the fraction of the population residing in a zip code with at least one provider to national trends in survey-reported usage, which were obtained is a strong positive relationship between our measure of accessaggregated to the state-leveland the state-level teen usage rates that can be derived from the CPS. 21 The FCC data can be downloaded from https://www.fcc.gov/encyclopedia/form-477-data-zip-codes-number-highspeed-service-providers. 22 Small providers, many of whom serve sparsely populated areas, are not required to report to the FCC and sometimes provide information on a voluntary basis. In our analysis we will provide separate treatment for rural and urban areas, in part to address concerns of measurement error arising by differential reporting standards across the two densities. 23 A "subscriber" can be either a residential or small business customer. Larger businesses and institutions are not included as they typically use an alternative technology.
from PEW Research. 24 There are large discrepancies between the two series, which do not match one another well in either levels or trends. 25 Figure 1 also indicates that using the next available cutofffour or more providersdoes little to improve the divergence in levels or trends.
The inability of the raw FCC data to capture national trends in usage is not surprising once one considers the vast heterogeneity in geographic and population sizes across the roughly 32,000 zip codes in the United States. Consider, for example, two zip codes which each reported one to three providers in 2000: 82332 is a rural zip code in Savery, WY with 134 residents in 1,422 square miles, and 10030 is an urban zip code in New York, NY with 25,847 residents in 0.30 square mile. By 2008, 82332 had 4 providers and 10030 had 11. Thus, it seems unlikely that all residents of each of these zip codes has equal access to broadband, suggesting a "one size fits all" measure of zip code-level coverage is inappropriate. 26 We propose a measure that scales the number of providers in a zip code by its size. A 2006 GAO report which investigated broadband deployment across the United States found that ISPs rarely overlapped service territories (GAO, 2006). Thus, the number of ISPs in a zip code should roughly translate into the extent to which that zip code is covered, relative to its size. Moreover, because the literature on broadband roll-out suggests that supply-side constraints limiting roll-out were structurally different in urban and rural areas, we allow for how we define "size" to vary across these concepts. 27 In particular, we scale the number of providers by square mile in rural 24 The PEW data can be found at: www.pewinternet.org/data-trend/internet-use/connection-type/. Note that the these rates of usage are extremely similar to those found in the Current Population Survey, which asked respondents about broadband Internet usage in 2000, 2001, 2003, 2007 and 2009. 25 We are not the first to note that the raw data on provider presence may not be able to accurately capture local access. A 2006 study by the GAO concludes that defining access according to provider presence alone "…may overstate deployment in the sense that it can be taken to imply there is deployment throughout the zip code even when deployment is very localized." The paper also provides the results of a case study in Kentucky, where it was found that 95 percent of residents had a provider in their zip code, but only 70 percent had access in their area. 26 Prior research using the FCC data attempts to circumvent the comparability issues across zip codes by drawing variation in ISP coverage from fairly homogeneous zip codes, either by investigating outcomes within a single state--where zip codes are quite similarly structurallyor by removing high-and low-density zip codes (Vigdor, Ladd, and Martinez, 2014;Xiao and Orazem, 2010). Unfortunately, such cuts are less desirable in our setting, because much of the foundational work in higher education suggests that these are precisely the sorts of comparisons we wish to make, as it is at extreme population densities where there is the most substantial divergence in both information and application behavior. 27 In rural areas, where zip codes are much larger and less densely populated, coverage was constrained by the cost of extending additional lines long distances to reach relatively few customers (GAO, 2005). In urban areas, population density can be a problem because too many customers using a single line at once can exhaust the system (Faulhaber, 2002;Greenstein and Price, 2007;Grubesic and Murray, 2002). Thus, to be most flexible in our definition of coverage, we allowed for the possibility that geographic size may be most relevant for rural zip codes and population size may be relevant for urban zip codes. Note that in our empirical match (described in the appendix) we allowed for a single definition but found that the best match was to include different definitions for areas and by population in urban areas. 28 In the appendix, we present alternative specifications using only the population-scaled measure and only the mileage-scaled measure across all zip code types (as well as separately for urban and rural zip codes). The results become similar to our main estimates the better the measure fits usage trends. 29 To operationalize the concept of high-speed Internet availability and facilitate interpretation, we convert the scaled measure into an indicator variable that takes a value of one when it crosses a specific threshold. 30 Since there is no theoretical guidance for what an appropriate threshold might be, we identify the threshold empirically by targeting the national trends in surveyreported high-speed Internet usage displayed in figure 1. We construct an algorithm to test varying thresholds and ultimately find that the best-fit measure defines penetration in a rural zip code as "at least one provider per 12 square miles," and in an urban zip code as "at least one provider for every 2,700 people." 31 More details on the construction of this measure can be found in the data appendix.
The red line in figure 1 displays bi-annual trends in our measure of broadband Internet access rates based on our measure of high-speed Internet penetration. We see that, unlike the rural and urban consumers. In the appendix we provide results using a single definition for both urban and rural areas. 28 Zip code population data are based on a combination of Census 2000 population data and SOI population data. SOI data provides a count of the number of income tax filers, which was used to create population trends to move the 2000 Census zip-code population total forward. Zip-code land area estimates are from the 2000 Census. 29 The root mean squared error between our main measure of access and national trends in usage is 0.92. Using just the urban population measure for the whole sample, the root mean squared error is 1.34 and the results are quite similar to our main measure. Using just the rural mileage measure for the whole sample, the root mean square error is much larger at 2.79. In this case, the results begin to differ from our main results. We attribute this to the relatively poor fit of this measure, as a different mileage measure which better fits overall usage patterns leads to more similar results to those obtained from our main measure. We also examined the sensitivity of our results to models using discrete bins characterizing the number of providers (one to three and four or more), with the caveat that we believe this substantially overstates national broadband availability and fits the data poorly (as evidenced by figure 1). The measure, though it is difficult to interpret, offers inconsistent testing results (additional test-taking drawing in equally able students but then lower SAT scores overall) and somewhat inconsistent application results (more applications to selective schools; fewer applications). The measure performs better in both respects when we investigate North Carolina alone, as in Vigdor, Ladd, and Martinez (2014), likely because zip codes within a single state are at least conceptually similar. Results are available upon request. 30 We do so because we wish to interpret our results as the effect of Internet access on student outcomes. Interpretation of the linear measure would be difficult, as the literature provides no guidance on what it means for a zip code to have one additional provider per person or square mile. Additionally, the provider measure is not continuous and is listed in bins; thus, a continuous measure of coverage cannot be used and interpreted as such.
While we prefer a dichotomous measure, we note the caveat that we can only interpret our measure as the average resident's ability to access high-speed Internet. It is possible that, for some parts of a zip code, Internet will be available prior to a zip code being "turned on" by our measure, and for others, Internet may never be available. 31 We define "best fit" as minimizing the root mean squared error between each measure and the survey-reported trends. More details can be found in the appendix. Table 4). We find that the fraction of the state population with broadband available in their zip code indeed predicts state-level teen usage rates. 32 (Table 1).
Finally, we note that our measure is designed to capture high-speed Internet "availability" or "access" to the average resident in a zip code, but an individual student's ability to use the Internet is much more nuanced, whereby household take-up and student usage can play either a mediating or amplifying role. High-speed Internet subscriptions are not free, and there are large observable differences in high-speed Internet take-up by education and income. These gaps remain to the present day when local high-speed Internet access is nearly universal, suggesting these differences are due to differential take-up rates beyond the question of availability. 34 Moreover, even if a home has a high-speed Internet subscription, differences in purchased speeds and the number of devices available to a student can lead to differences in a student's ability to use online services. 35 Finally, while survey evidence indicates that searching for college 32 We also consider other aggregated measures of availability, and ours performs the best In fact, some measures (such as the "at least one ISP" measure) are negatively correlated with usage. 33 The white areas on the map are zip codes for which we do not have information. Most represent unpopulated areas like national parks and bodies of water. 34 The 2009 October CPS indicates that 58 percent of 15-18 year olds with a mother who did not complete high school had broadband at home compared with 95 percent of 15-18 years with a mother with a post-graduate degree. 35 The 2003 October CPS indicates that 55 percent of 15-18 year olds with a mother who did not complete high school had a computer at home compared with 97 percent of 15-18 years with a mother with a post-graduate degree.
information is a popular usage of Internet services, if (some) teens do not use Internet for these purposes, we might not see changes in students' application outcomes. 36
IV. Analysis
In this section, we investigate whether high-speed Internet access affects a student's college admissibility and application set, and, to some extent, how the two interact. To separate the two, we need to understand the evolution of a student's application and distinguish behavioral changes in how she targets her application from structural changes in the quality of her application. Thus, we begin by examining whether our measure of high-speed Internet access coincides with shifts in the quantity and quality of SAT test-takers drawn from that population.
Then, in our core analysis, drawing from existing research on college quality and under-match, we consider application outcomes designed to capture behavioral patterns deviating from defaults or broad recommendations. We then examine the validity of the assumptions underlying our interpretation of our estimates as causal. Finally, we extend our analyses in two ways. First, we wed our testing and application results to examine the extent to which observable differences in applicant quality induced by high-speed Internet access can explain the application effects we detect. In addition, we investigate whether our results appear to be concentrated within particular demographic and socioeconomic groups.
a. Empirical Specification
Our estimating equation is a generalized difference-in-differences: yizc=α+β*broadbandizc+Aiθ+Xzcλ+γz+γc+εizc where yizc is a binary variable 37 capturing a testing or application outcome for student i from zip code z in cohort c; broadbandizc is an indicator for broadband availability in her junior year; Ai is a set of student-specific demographic and ability controls; Xzc is a set of cohort-varying zip code economic conditions coincident with the timing of the broadband availability measure (i.e., also in her junior year); 38 and γz and γc are zip code and cohort effects. Note that Ai includes the 36 A 2005 survey of Internet users found that 45 percent of Internet users had used the Internet to search for information on prospective college or universities for themselves or a family member (PEW, 2005). 37 An exception is when we estimate the effect of Internet availability on PSAT and SAT scores, which is an integerbased variable, ranging in increments of 10 from 400 to 1600. 38 Student-specific controls include dummies for gender, race, and high school GPA, as well as PSAT math and verbal scores; zip code controls include adjusted gross income, population, number of houses, unemployment rate, and median home price. Additionally, in Appendix Table 5 we add a control for student survey responses to parental education and income, questions that do not appear on the PSAT survey and that are associated with considerable nonresponse on the SAT survey. Results are qualitatively unchanged to their inclusion. student's PSAT verbal and math scores as well as high school GPA, which capture "latent ability" as well as an early signal of admissibility to selective colleges.
Our primary coefficient of interest is β such that when y is "applied to a four-year college," our estimate represents the increase in the likelihood a student applies to a four-year college if high-speed Internet is available in her home zip code. This characterization of β holds under the assumption that, all else equal, trends in testing and application outcomes in zip codes with and without high-speed Internet would have evolved similarly over time, save for the availability of high-speed Internet. Of course, zip codes which received high-speed Internet access earlier may have been different than zip codes which received high-speed Internet access later. That is why it is imperative that our specification include zip code fixed effects (γz) so that our estimates are net of any time-invariant differences across zip codes. Xzc also contains a set of zip-code-level economic indicators, including local income, unemployment rates, house prices, and population density in an effort to capture any observable changes in zip-code characteristics that may have been correlated with the availability of high-speed Internet.
Our identifying assumption is that, all else equal, our measure of high-speed Internet availability is exogenous to a student's testing and application outcomes. Recall that we derive our measure from zip-code-level access based on counts of ISPs benchmarked to trends in usage.
We might be concerned that usage is endogenous to our outcomes, and, at the individual level, it almost certainly is. However, recall that our specification includes cohort effects (γc), so that our estimated effects are net of any national trends in high-speed Internet adoption and the availability of online content. Thus, for identification purposes, we need only assume that provider entry is exogenous to student's outcomes. This means that threats to identification come in the form of either student-demand-related pressures on Internet service providers to enter their zip code, or any omitted variables that co-vary with student outcomes and provider entry. We view such threats to be small, if not negligible.
Per the former, there is abundant evidence that supply-side constraints restricted high-speed Internet access, and that the supply of high-speed Internet lagged consumer demand. To provide high-speed Internet, Internet service providers (ISPs)typically the existing phone or cable companyhad to make substantial infrastructure investments, retrofitting existing phone and cable lines and installing new switches and servers (Faulhaber, 2002;Greenstein and Prince, 2007;Grubesic and Murray, 2002). There is a general consensus that the costs slowed rollout and access did not keep up with consumer demand (Greenstein and Prince, 2007;Faulhaber, 2002). Dettling (forthcoming) discusses how variation in the underlying housing infrastructure and the availability and quality of existing telephone and cable wiring made these infrastructure investments differentially costly across locations, which created differences in the timing of the availability of high-speed Internet services across locations that was unrelated to consumer demand-related pressures. 39 Based on these known barriers to entry, it seems unlikely, but not impossible, that student-demand-related pressures induced high-speed Internet service provision.
Thus, net of state and year fixed effects, and the economic controls included in our model, the roll-out of high-speed Internet is arguably random.
For the latter to be plausible, it must be that our extensive set of student-and zip-code-level controls are inadequate. We investigate the exclusion of key variables by re-estimating our main outcomes over other timings of high-speed Internet availability that are less likely to affect our outcomes. For example, as a preview of our results, while we find that junior year high-speed Internet availability leads to SAT score improvements, when we instead use a measure of highspeed Internet availability in December of a student's senior year of high schoolafter most students would take the SATthere is no discernible effect on SAT scores. We view this as suggestive evidence that our estimation framework does not omit key variables, though we acknowledge it is a possibility.
Finally, before we turn to results, we make two notes regarding their interpretation. First, the coefficient on broadbandizc captures the effect of potential access, as opposed to high-speed 39 There are two main transmission modes for high-speed Internet in the United States: cable-based and telephonebased digital subscriber line (DSL) service. Each service requires the installation of fiber-optic wiring, which provides high-speed Internet service up to a certain point, from which the signal travels over traditional coaxial cable or copper telephone wiring the rest of the way. These fiber-optic lines may reach the ISPs' central office, some remote terminal in the neighborhood, or the home. The main issue that prevented timely rollout for the cable companies was capacity. Cable companies had installed some fiber lines in the 1980s to provide digital cable service, but each additional customer on a single fiber line reduces the "downstream" capacity, meaning that multiple simultaneous users reduces speeds and could exhaust the system. Thus, to provide reliable, high-speed Internet service, cable companies needed to add more fiber lines that came closer to residences. For DSL Internet from the phone companies, rollout was prevented by the need to upgrade the existing telephone wiring, much of which was old and had been split too often to be capable of carrying high-speed two-way traffic. In either case, the key insight is that existing wiring leading up to the home or apartment building was insufficient to carry high-speed traffic, while the wiring already in a home or building was typically sufficient. Dettling (forthcoming) demonstrates that this incentivized entry into markets where the existing housing infrastructure offered lowered costs of provision. That is, areas with more multiple-family dwellings received access earlier because ISPs could take advantage of economies of scale in the provision by bringing one line to multiple consumers.
Internet use, because broadbandizc is a measure of high-speed Internet penetration over a geography rather than student-specific high-speed Internet adoption. Arguably, this coefficient is the most relevant for policy. As noted earlier, the effect of high-speed Internet use will depend on how many students use it, how often they use it, and in what waysall of which the government is unable to fully control or measure. Moreover, local peer effects, wherein local access changes the college-going culture of students' neighborhood or high school even for those students who do not personally have access at home, could amplify or reduce the effectiveness of use in ways that are difficult to separate from own access but that a measure of local access will generally subsume.
Second, because our analysis focuses on how the prevalence of high-speed Internet in junior year coincides with late high school outcomes and includes both zip code and year fixed effects, our estimates will reflect only the testing and application effects of coverage that are immediately detectable. In practice, reducing the cost of information could affect outcomes at any pre-collegiate stage. Early high school students, for instance, could alter their coursework and career paths based on information gleaned from the Internet, which, in turn, could also affect postsecondary outcomes. While some of our results will hint that gains in applications are indeed larger the longer broadband is available, we generally leave this possibility to future work. 40
b. Main Results: Test-Taking, Scoring, and Application Outcomes
We begin our analysis by examining whether SAT test-taking rates systematically vary with high-speed Internet availability. Relative to the full population of U.S. students, SAT takers tend to be college-aspiring students, so our sample of testing and application outcomes are likely positively selected from the distribution of student ability. Moreover, there exists a competing exam for college-bound students, the ACT, that students can elect to take instead. Thus, depending on how high-speed Internet is used and by whom, we might see shifts in SAT testtaking and SAT test-taker quality that result from increased broadband availability.
Note that while test-taking is a separately interesting outcome that may be affected by highspeed Internet availability, any systematic differences that we detect in the amount or quality of SAT takers would also affect our interpretation of β when the dependent variable is an SAT score or a college application decision. This is because we can only observe these outcomes for 40 For example, Table 3 suggests that the application effects of broadband grow the longer it is available to students. students who take the SAT, and such results would imply that we do not observe a comparable set of SAT test-takers across zip codes with and without high-speed Internet. Consider, for example, if, when high-speed Internet is available, more students find it worthwhile to take the SAT, but those who elect to take the exam only in this state tend to earn below-average scores.
Were we to then observe less-favorable average application outcomes in zip codes where broadband=1, we might incorrectly infer that high-speed Internet negatively affects application outcomes, when instead the composition of students for whom we can observe outcomes has also changed. Thus, if there is observable selection into (or out of) SAT-taking when high-speed Internet is available, estimated βs for outcomes that can only be observed conditional on taking the SAT would reflect a combination of true application effects on students who take the exam in any state and broadband-driven sample selection.
To estimate shifts in test-taking and test-taker quality, we leverage information available from the PSAT. The PSAT is the qualifying test for the National Merit Scholarship Program and is often mandatory within a state or district, so that the population of PSAT takers approximates the at-risk (i.e., non-selected) population of SAT takers. Using the full set of PSAT takers, we first examine whether more students elect to take the SAT once broadband is available, so that yizc is an indicator value that takes the value of 1 if a student takes the SAT exam and 0 otherwise. Next, we examine whether broadband availability affects outcomes later in the application cycle. We begin by looking to see whether we can detect systematic differences in SAT scores ( Finally, we examine the effect of broadband on application rates. Drawing from existing research on college quality and under-match, we consider outcomes designed to capture behavioral patterns deviating from defaults or broad recommendations, such as applying to more schools than the number allotted with a test registration (i.e. applying to five or more), applying to a four-year school, applying to a selective school, applying to a match school, applying to a state flagship, applying out of state, and applying to a highly ranked liberal arts school. We construct each application measure as an indicator variable and we test, in line with the literature, whether the likelihood a student will deviate from perceived defaults increases with more information.
The remainder of Table 2 displays estimates of the effect of broadband on the application measures. The signs on all of the coefficients are consistent with improved outcomes: we estimate statistically significant gains in the probability a student applies to more than the default number of colleges, a four-year college, a selective college, a college commensurate with her 41 Ideally, to examine selection, we would use all students who are at risk of applying to college when broadband becomes available (e.g., high school juniors by home zip code over time), but we do not observe such a measure in our data and, to our knowledge, these data are not available elsewhere. However, we believe that the population of PSAT-takers is an ample proxy for the most relevant denominator in our setting for several reasons. For one, we find no evidence of changes in the underlying quality of test-takers in our sample (as measured by PSAT scores). In addition, the PSAT is generally administered to high school sophomores and juniors in the fall, and we restrict our analysis sample to only test-takers who took both exams; at best, this leaves a very small window to respond through PSAT-taking. It seems much more likely that, in practice, students who are induced into college-going by broadband availability in their junior year will forego the PSAT entirely and elect only the SAT exam, which is the exam relevant for college admissions. Such students are excluded from our analyses. Relatedly, students induced into PSAT-taking would generally not have been otherwise collegebound, and thus are extremely unlikely to apply broadly or to schools in the selectivity ranges we consider; if anything, including them should bias our estimates toward zero. Finally, when we restrict the sample to SAT states, where selection is less of a concern, our results look quite similar.
own score (i.e., a match college), a top liberal arts college, and an out-of-state college ranging from 0.2 to 0.4 percentage point. In some instances, these changes reflect quite meaningful deviations from mean behavior; for example, only about 7.2 percent of SAT takers apply to top liberal arts colleges, and high-speed Internet availability induces a 0.17 percentage point change over that baseline.
Interestingly, while both are positive in sign, the effect of broadband Internet on whether students apply to a very selective college or an in-state flagship are statistically indistinguishable from zero. The first null result could reflect disparities in student groups to whom the gains of high-speed Internet availability tend to accrueperhaps the most elite students are not the largest beneficiaries. The second null result is consistent with this line of thinking as well, but also with prior work that has found that groups of students who under-match generally tend to favor public, in-state colleges in their applications (Hoxby and Avery, 2013), so we might not anticipate large effects within this class of institutions. Moreover, flagship colleges tend to be large and well-known; thus, if simple awareness was preventing applications, we might not expect this outcome to be affected. 42
c. Validity of Research Design
Our identifying assumption is that barring the emergence of broadband, trends in our outcomes would have evolved similarly. One way to check this assumption is to examine trends in our main outcomes before broadband became available to residential customers in 1999.
Using SAT data for the 1996 to 1998 graduation cohorts, we classify zip codes according to whether high-speed Internet became available during the early (1999-2001), middle (2002-2005), or late (2005)(2006)(2007) years of our sample period, and chart the evolution of our main outcomes over this prior period. To conserve space, we limit the discussion of this analysis (and the rest of the validity checks) to the main testing and application outcomes that were statistically significant in Table 2 and present the remaining outcomes in the appendix. Figure 3 displays the results of this analysis, which indicate that there are substantial and persistent level differences by timing in several of our outcomes before broadband technology became commonplace: test-takers from zip codes that received broadband relatively early tend to 42 While Hoxby and Avery (2013) find that, within the state, these groups do not seem to favor flagships and that they often instead apply to less selective public universities in their state, in a related survey, Hoxby and Turner (2015) find that the reasons such students give for not applying to the state flagship seem more related to the social environment at the school than to unawareness of its academic quality.
have better application outcomes. This result is not surprising, since ISPs tended to enter wealthier zip codes earlier than less well-off zip codes. Recall, however, that our specification includes zip code effects. This means we need only assume that trendsnot levelsare statistically similar across the timings we consider in the absence of broadband. Figure 3 indicates that this assumption appears to be valid for the 1996-1998 period: there do not appear to be any differential trends in our outcomes by broadband availability category. Thus, prior trends do not appear to be driving our estimates.
Another way to examine the validity of our design is to estimate outcomes for students who were older in the application cycle when broadband became available. Specifically, we would not expect to see systematic differences in applications among students who were already freshmen in college by the time broadband became available to their home zip code. Thus, we reestimate our model, adding new indicators for broadband availability in a student's home zip code during her senior year of high school and freshman year of college. Although obtaining broadband access during one's freshman year of college should not theoretically affect application outcomes, in practice we have an imperfect measure of individual-level access and it is possible that some students have access to high-speed Internet before our measure officially turns on in their zip code. Thus, we do not necessarily expect to see precise zeroes for freshman year of college, but rather broadly more muted effects for cohorts who generally would have been too old to benefit. Again, for brevity we present the outcomes that were statistically significant in Table 2. In these figures, the x-axis plots the year of schooling and the y-axis displays the estimated coefficients and confidence intervals on the indicator for zip code-level broadband access in the year listed.
In each case, broadband access in junior year has a positive and significant effect on the outcome, and an imprecisely estimated null effect in the freshman year of college.
Finally, in order to comprehensively examine the pre-trends together with the dynamics following the introduction of broadband, we follow Autor (2003) and estimate an event-study version of our estimating equation. Specifically, we replace broadbandizc with a series of dummies for the years prior to and following the introduction of broadband over our sample period, and re-estimate each of our outcomes. For consistency and ease of interpretation, the omitted dummy in each equation corresponds to the year just before broadband is introduced. Finally, we have noted that in our data we can only observe testing and application outcomes for SAT takers, even though there exists a competing admissions exam students can elect to take instead. Thus, we might be concerned that some of the behavior we detect reflects selection into (or out of) our analysis sample, driven by switching between the ACT and SAT. While colleges throughout the United States generally will accept either exam, the tests historically tend to prevail in particular geographic regions; students on the coasts and in the South more often take the SAT and students in the Midwest favor the ACT. In addition, students planning to apply to highly selective universities often take the SAT. Over the time period we consider, as accountability pressures grew, some states, and often those where the ACT had already prevailed, began requiring their students to take an admissions exam. Since the resulting score could be used in the application process, state mandates potentially interfered with some of the competitive dynamics between the two exams.
To examine whether the existence of the ACT is introducing selection concerns into our analysis, we restrict our analysis sample to those states in which the SAT historically has prevailed, and re-estimate all of the outcomes (Table 4). The results broadly mirror the main 43 The one exception is that the policy leads for "Apply 5+" appear to indicate pre-trends, which suggests that we may be incorrectly attributing increases in the number of applications to broadband availability. Note that this interpretation is not fully consistent with the evidence brought to bear on this outcome in the prior two exercises, and of the many validity tests we do, it is statistically probable we fail one by random chance. Thus, the preponderance of evidence suggests that broadband availability is exogenous to students' application outcomes. results in Table 2. 44 The SAT score increase owing to availability is again 0.7 SAT points, and application outcomes increase across the board similar to our main estimates.
d. Exploring Mechanisms
We showed earlier that application outcomes improved for students with broadband Internet.
However, we also detect broadband-driven increases in their average admissibility, as measured by their scores. We have not yet shown whether application improvements occur independent of scoring improvements. In other words, to understand how broadband availability is operating to improve outcomes, we would also like to know whether students who are observably equally admissible apply differently when broadband is available.
To probe this, we re-estimate our application outcomes, adding a separate control for the SAT score in addition to the PSAT score. Table 5 displays the results of this exercise and indicates that broadband has an effect on student applications, above and beyond improved admissibility. Dividing the estimates in Table 5 by those in Table 2 indicates that score improvement explains at most 20 percent of changes in the application set (as indicated by the italicized percent changes). While there may be some complementarity between changes in SAT scores and other features of a student's application that are unobservable to usfor example, a student with a higher score might feel encouraged to enroll in more rigorous courses or participate in additional extra-curricular activitiesthese results indicate that broadband availability is likely also acting through channels independent of student admissibility, such as reducing the work involved in submissions or making information on schools easier to obtain.
We can further examine this separability by considering whether a more short-term treatment broadband availability in December of senior year of high schoolaffects our outcomes. In this case, students would have much less time to improve the strength of their application. Table 6 displays the effects of senior year access on application outcomes. Similar to the results for junior year, we find broad improvements in outcomes, although the results are muted. 45 Moreover, while still positive, we no longer detect significant effects on the probability a student 44 For completeness, we re-estimate the specifications examining selection into SAT-taking (i.e., the top panel of Table 2). The coefficients (standard errors) on "PSAT Score" and "Took the SAT" are -0.06520 (0.03571) and 0.00175 (0.00098), respectively, neither of which is statistically different from zero (at the 5% level). 45 Because we focus on the same cohorts of students and allow the years in which Internet access was obtained in the zip code to vary, the magnitude of the coefficients are not directly comparable to the junior year results. For instance, the first cohort we use graduates in 2001. That cohort could have obtained Internet access in December of their junior year, or 1999. In the senior year analysis, that cohort could have obtained access in December of 2000.
applies to a selective school or out-of-state school. Importantly, we do not estimate an SAT score change consistent with the later timing. Since students do not typically take the SAT after fall of senior year, we find this (non-)result to be a compelling affirmation of our strategy and indication that broadband availability is acting beyond score increases to improve application outcomes. Overall, Table 6 suggests that admissibility alone is not driving our results.
e. Heterogeneous Effects
The literature on under-match identifies the existence of substantially many high-achieving students left behind because of a lack of peers, role models, and necessary information that could help them with the application process. Broadly, findings in this area tend to connect measures of relative disadvantage-e.g., lower-income geographies, less-educated parents, students in rural areas, and students who identify as a racial minority-to information constraints limiting students' postsecondary opportunities. In this regard, it seems plausible that high-speed Internet access might have a larger impact on postsecondary outcomes for students from these groups.
However, our setting offers more than just pure information, such that there may also be unobserved countervailing forces, such as whether the student's family can afford to or wants to purchase a subscription, whether the student has devices at their disposal, and how effectively the Internet is being used. If those differences are correlated with relative disadvantagewhich most evidence suggests that they arehigh-speed Internet access may not translate into improved academic outcomes according to the groups we consider. 46 This would be consistent with recent work which suggests students from lower socioeconomic backgrounds (i.e., lower-SES students) with access to Internet suffer academically relative to their peers, while students from better socioeconomic backgrounds (i.e., higher-SES students) gain (Vigdor, Ladd and Martinez, 2014).
Thus, our next exercise is designed to probe whether we observe systematic differences in outcomes by particular student groups, with the caveat that the analyses that follow are somewhat difficult to interpret because the dimensions we consider are also correlated with 46 There is evidence that disadvantaged students are less likely to have broadband at home, have a computer at home and use the Internet for education-related activities: for example, the October 2003 CPS indicates that among 15-18 year olds whose mother did not complete high school, 8 percent had access to broadband at home and 55 percent had access to a computer at home. Among 15-18 years olds whose mother has a post-graduate degree, 46 percent had access to broadband at home and 97 percent had access to a computer at home. Among students with access to Internet at home, 74 percent of 15-18 year old Internet users whose mothers did not finish high-school use the Internet for education-related purposes, compared to 93 percent of Internet users whose mothers had a post-graduate degree.
lower rates of household Internet take-up and device access. As before, our βs are interpretable as the effects of potential access to high-speed Internet; if we observe no effect on the students who are most constrained, we are unable to test whether this is due to lack of take-up or lack of a treatment effect. Table 7 displays estimates of the effect of broadband Internet by group. We find that improvements in applications are concentrated among higher-income, more-educated, white, and urban students. By contrast, our estimates for lower-SES students appear to indicate large changes in SAT scores but a pervasive null effect on applications for these groups. 47 These patterns may partly reflect how well such populations are able to use the Internet to improve outcomes on their own.
Altogether our findings suggest that the benefits of high-speed Internet may be lopsided and favor those who already had more resources. Still, our set of results may be somewhat specific to our setting, when high-speed Internet was first becoming available to households. It may be that in its early stages, the benefits of high-speed Internet primarily accrue to resource-rich students, but as it becomes more diffuse, lower-resource students experience large benefits as well. We probe this hypothesis in a framework mirroring Table 3 splitting the sample, as above, by parental education (Appendix Table 7). The results generally indicate that low-SES students realize increased score gains over time, but that the gains only translate into improved applications if they essentially grew up with broadband in their zip code. By contrast, among high-SES students, the gains in both scores and applications are immediate and persistent. This set of findings is broadly consistent with the under-match literature and could imply any or all of the following: 1) there is learning among low-SES students in how one can use the Internet to improve outcomes; 2) low-SES students are very late adopters; 3) low-SES students alter their admissibility over time in important ways we cannot observe (e.g., through changes in their coursework); or 4) the benefits of broadband disseminate across zip codes over time, even to non-users. While outside the scope of this paper, future research could explore why lower-47 In some instances, lower-SES students appear to have lower average PSAT scores once broadband becomes available (available upon request). While interesting from a policy perspective, this result complicates the interpretation of the large SAT score increases and the null application effects, as they might reflect changes in the test-taking population or the characteristics of our applicant sample. Still, in other instances, we observe no such evidence of selection among lower-SES students. The SAT and application effects are similar across the measures of SES we consider.
resource students appear to experience gains in their scores over time, why these gains are not immediately reflected in their applications, and how policy could mitigate the gaps we detect.
V. Conclusion
This paper examines the effect of high-speed Internet on testing and college application decisions. We find that the availability of high-speed Internet in a student's zip code unambiguously improves her scores and her application set. Students appear to diverge from defaults and broad recommendations by sending more applications, applying to more-selective schools, applying to schools outside their state or smaller in presence, and applying to schools more commensurate with their abilities. Some of this improvement can be traced to increases in average test scores, also owing to broadband. Moreover, consistent with prior literature, we find that interventions fairly late in adolescence can have considerable effects on students' postsecondary outcomes.
We find that the primary beneficiaries of broadband Internet availability appear to be higherresource student groups, so that the digital divide may be substantially neglecting students who already tend to have fewer inroads to elite academic institutions. If this gap is due to differences in student's ability to access and use broadbandperhaps because their parents do not take up broadband or because students do not have access to devicesthese results suggest that policies aimed at increasing broadband availability and affordability could reduce inequality in postsecondary outcomes. If these gaps are due to differences in the ability to find and digest the relevant online content, policies aimed at providing guidance on how the Internet can enhance opportunities for these students might be effective. Nonetheless, our results imply that students can benefit from content available online to improve their outcomes. And, even though our results cannot speak directly to Internet usage at school, it is possible that in-school programs that encourage and monitor Internet searches could be an effective tool for improving college outcomes. We leave it to future work to uncover which margins are relevant for policy to mitigate these gaps.
Of course, an important lingering question is how the application improvements we estimate translate into differences in attendance. We cannot observe enrollment outcomes for our analysis sample, but we can use comparable estimates from the literature to derive anticipated effects.
Hoxby and Turner (2013) launched a national information intervention designed to improve application and attendance outcomes. Students who were sent material were 12 percentage points more likely to apply to "peer" institutions and then 5.3 percentage points more likely to enroll in them. In other words, in their setting, about 43 percent of the detected change in application behavior effectively translated into improved attendance outcomes. If we apply those estimates to our "match" application coefficient, we can expect a 0.12 percentage point increase in the likelihood that college-bound students enroll at schools commensurate with their ability.
Assuming no supply-side constraints or general equilibrium effects, this decreases Notes: Each column represents a separate OLS regression whereby the specification is as in Table 2, except the single broadband availability dummy is replaced by a series of dummies for the years prior to and following the introduction of the broadband. The dummy for the year immediately preceding introduction is excluded. Standard errors adjusted for clustering at the zip code-level are in parentheses (* p<.10 ** p<.05 *** p<.01). Sample includes students who took both the PSAT and SAT tests in the high school graduating cohorts of 2001 to 2008. Regressions include zip code and year fixed effects, as well as controls for gender, race, high school GPA, PSAT verbal and math sections, and time-varying zip code characteristics and a constant. ,202,386 5,202,386 5,202,386 5,202,386 5,202,386 5,202,386 5,202,386
For Online Publication Appendix Appendix 1. Broadband Internet Penetration Variable Creation and Alternative Specifications
To construct our measure of zip code-level broadband Internet access, we merged the zip codelevel FCC data on the number of ISPs in a zip code, to zip code-level information on (1) land area and (2) population. The land area data are from the 2000 Census. The population data come from a combination of 2000 census data and SOI income tax data, which we used to construct zip-codelevel population counts over time. We then constructed ratios of the number of ISPs to zip-code geographic size and population size. We assigned each zip code as urban or rural based on the fraction of its population which was urban, defining an urban zip code as one where more than 50 percent of the population is urban.
We also collected national and sub-national trends broadband Internet usage from the CPS/PEW data. The CPS data come from the CPS computer and Internet supplements/school enrollment surveys (August 2000, September 2001, October 2003, 2007, 2009. The PEW Survey is collected at least annually and the month of the survey was recorded. We interpolated the between months so the timing was equivalent with the FCC data (which were collected in June/December). We constructed national trends out of both series, and a separate urban and rural trends from the CPS series based on whether an individual lived in a metropolitan statistical area.
We then conduct the following exercise: 1) Identify a threshold for "coverage" (one provider per x square miles or one provider per y thousand people) 2) Construct an indicator which takes a value of one if threshold is in each zip code-year 3) Aggregate up using zip code population weights to construct national time series of high-speed Internet penetration 4) Estimate root mean squared error between CPS or PEW measure and #3 We then incrementally increase the threshold in step 1 and repeat steps 2 through 4. We tried the following combinations: one provider per 1-10,000 people (in intervals of 500 people), and one provider per 1-40 square miles (in intervals of one mile). We also allowed different thresholds for urban and rural zip codes (although we did not impose that they had to be different) by comparing to the CPS data. Finally, we minimized the root mean squared error (RMSE) and identified that threshold as the preferred definition of zip code broadband coverage. This is to define a rural zip code as "covered" when there is at least one provider per 12 square miles and an urban zip code as "covered" when there is at least one provider for every 2,700 people.
For exposition, we also present results for alternative internet penetration concepts, applying each of the population-based (i.e. "one provider per 2700 people") and mileage-based (i.e., one provider per 12 square miles) measures to all zip codes (Appendix Table 1), as well as for the urban and rural subsamples separately (Appendix Table 2). Appendix Table 1 demonstrates that the results using the population-based measure (second panel) are qualitatively similar to those obtained using our main measure (first panel). The RMSE for this single measure, however, is about 45 percent larger than the RMSE for our preferred measure, suggesting the fit to usage is worse. The third panel presents the results using the mileage-based measure (i.e. "one provider per 12 square miles"). In this case, the results are quite different from the results obtained using our main measure. However, this could be attributable to the fact that this measure fits the usage data poorly, with a RMSE of 2.79 (300 percent larger than the RMSE for our preferred measure). To further examine this, in the fourth panel of Appendix Table 1 we alternatively estimate the model using a mileage-based measure that better-fits the data (one provider per one square mile, which has an RMSE of 1.39). In this case, we find more similar results to both the preferred measure and the population-based measure. Taken together, these results are affirming and supportive of our strategy: alternative measures that more closely match national usage trends tend to lead to more consistent and similar results.
In Appendix Tables 2 and 3, we consider the alternative three measures from the prior table separately within the urban and rural zip code subsamples. First, it is clear from the RMSEs that the separate population and mileage-based measures used in the preferred measure fit the urban and rural samples better than the alternative measures do, with RMSEs of 0.85 and 0.5, respectively. In addition, Appendix Table 2 indicates that the mileage-based measure (i.e., "one provider per 12 square miles") performs poorly for the urban sample, with an RMSE of 2.89, and leads to results that differ from the population-based results. However, as in the pooled results, the mileage-based measure that fits the usage data more closely (i.e., "one provider per one square mile") leads to results which are more consistent with the main results. Finally, Appendix Table 3 confirms the null results for the rural subsample in Table 6. In each case, regardless of the fit of the measure, we see little to no effect of Internet access on our outcomes for the rural subsample.
In Appendix Table 4, we test the strength of the relationship between the various measures of broadband availability described in the body of the paper (and above) and teen high-speed Internet usage. We obtained teen high-speed Internet usage rates from the CPS, which interviewed respondents about their broadband usage in August 2000, September 2001, October 2003, and October 2007. Since zip codes are not available for all CPS respondents, we aggregate our measures of availability to the state-level using population weights. We then match this to collapsed CPS data for 15-18 year old respondents, including their high speed Internet usage and demographic characteristics. To mimic our main estimating equation, we control for similar characteristics, including year and state fixed effects, race/ethnicity, sex, family income, and parental education. We also include measures of population density, home prices, and unemployment rates at the state-level, again to emulate our main equation.
Appendix Table 4 indicates that the fraction of the state for which high-speed Internet is available by our main measure is a positive and significant predictor of teen high-speed Internet usage. Of the additional measures from Appendix Tables 2 and 3, the "improved fit" mileage measure (one provider per one square mile) is the only other statistically significant predictor of internet usage among teens. Moreover, of the measures we consider, our main measure, given the relatively small standard error, appears to admit the least noise. Columns 4 and 5 present estimates using the "at least one provider" in a zip code dummy variable and a linear measure of the number of providers, respectively. Neither are significantly associated with high-speed Internet usage among teens (and the "at least one provider" measure is actually wrong-signed). | 2019-01-05T14:05:38.318Z | 2015-11-01T00:00:00.000 | {
"year": 2015,
"sha1": "00465d1b579774a462e97759247fbecd8c727463",
"oa_license": null,
"oa_url": "https://doi.org/10.17016/feds.2015.108",
"oa_status": "GOLD",
"pdf_src": "ElsevierPush",
"pdf_hash": "bb8a5589197c56c7f2b763670e4d4234fa3534ad",
"s2fieldsofstudy": [
"Economics"
],
"extfieldsofstudy": [
"Computer Science"
]
} |
86100262 | pes2o/s2orc | v3-fos-license | The mineral , proximate and phytochemical components of ten Nigerian medicinal plants used in the management of arthritis
Ethnobotanical investigation revealed the use of ten medicinal plants in the management of arthritis in Ibadan, Nigeria. This study screened the plants for mineral, proximate and phytochemical components that could be responsible for their therapeutic value in arthritis. The powdered plant samples were analysed for nutritional constituents and phytochemical compounds using standard laboratory protocols. The use value of plant-parts was 50% leaves and 50% roots. Three out of the 10 plants had high calcium content: Oncoba spinosa (180.0 mg/100 g), Nymphaea lotus (160.0 mg/100 g) and Solenostemon monostachyus (125.0 mg/100 g). N. lotus had the highest iron content (8.0 mg/100 g). Phosphorus content was highest in O. spinosa (150.0 mg/100 g). Magnesium was highest in Phyllanthus amarus (14.0 mg/100 g). Crude fibre was highest in Solanum aethiopicum (15.90%) and the least in O. spinosa (14.00%). S. aethiopicum had the highest protein content (18.50%) and O. spinosa the least (14.75%). All the medicinal plants tested positive to alkaloids, carotenoids and flavonoids. The plants contained minerals and secondary metabolites that are implicated in arthritis viz. calcium, zinc, carotenoids and flavonoids. The presence of these compounds in the test plants might alleviate pains associated with arthritis. O. spinosa had high potential in the management of arthritis due to its high calcium and phosphorus components.
INTRODUCTION
Lecaniodiscus cupanioides Planch ex Benth., Carpolobia lutea G.Don, Microdesmis puberula Hook.f.ex Planch., Oncoba spinosa Forssk., Calliandra portoricensis (Jacq.)Benth., Phyllanthus amarus Schumach.& Thonn., Solenostemon monostachyus (P.Beauv.)Briq., Tetracera alnifolia Willd., Solanum aethiopicum L. and Nymphaea lotus L. are used for the management of arthritis in Ibadan, Nigeria.Although the plants are of therapeutic importance in managing arthritis, they are used for other health problems in folk medicine.The aqueous root extract of L. cupanioides is used as a galactogen and as a laxative (Adeyemi et al., 2005).C. lutea is used in folk medicine to facilitate delivery and treat male sexual disorders (Mitaine-Offer et al., 2002).M. puberula is used traditionally for the treat-ment of infectious diseases, genital problems, menstrual complaints, sterility, miscarriage and loss of virility.The roots of O. spinosa are used in the treatment of dysentery and bladder complaints (Burkill, 1994).C. portoricensis is mixed with ginger and water for use as an enema for lumbago pain and constipation, and when mixed with pepper; it is used for gonorrhoea (Burkill, 1985).The root has very strong anticandidal property (Gbadamosi, 2008).The leaves of S. monostachyus are eaten as a potherb; leaves are also used to treat dysmenorrhoea, haematuria, female sterility, rheumatism, foot infections and snakebites (Burkill, 1995).The roots of T. alnifolia are used for the treatment of venereal diseases, arthritis and rheumatism.The stem bark of the plant forms part of a recipe used as antianaemic (Gbadamosi et al., 2012).Medicinal applications of S. aethiopicum include the use of roots and fruits as a carminative and sedative, and to treat colic and high blood pressure.The leaf juice is used as a sedative to treat uterine complaints (Burkill, 2000).N. lotus finds application in the management of circulatory system disorders, digestive system disorders, infections and inflammation (Burkill, 1997).
Arthritis is the inflammation of one or more joints.Most forms of arthritis affect the joints, tendons, ligaments, muscles, and cartilage.Some types, in the advanced stages, can affect the body's organs.Although there is no cure for arthritis, pain relievers, certain natural substances including vitamins and minerals, exercise and other lifestyle remedies can help manage the disease.The symptoms of arthritis will depend on the cause and the type.If the cause of arthritis is an autoimmune condition, then the symptoms may occur suddenly and aggressively.Many people with autoimmune forms of arthritis will experience alternating periods of flare-ups and remission.If the cause of arthritis is related to aging, then the symptoms will occur gradually -sometimes over a period of years.No matter the type of arthritis, the symptoms will vary based on other factors, including overall health of the sufferer.Regardless of the type of arthritis, common symptoms include pain, swelling in the tissue and joints, stiffness, deformity and diminished flexibility.People with arthritis tend to experience an aching sensation that may improve or worsen as a result of factors that include weather, time of day, movement and physical activity (www.symptomfind.com).
There are more than one hundred identifiable types of arthritis.Some common types of arthritis are osteoarthritis, rheumatoid arthritis, gout, ankylosing spondylitis, psoriatic arthritis and juvenile arthritis.Osteoarthritis occurs when the joints break down from wear and tear as a result of old age or injury.Rheumatoid arthritis is an autoimmune disease that occurs when the immune system attacks its own cartilage and tendons between the joints.Gout is caused by excess uric acid in the blood; it is a sudden and severe attack that causes pain and swelling in the joints especially in the joint of the big toe.Ankylosing spondylitis causes swelling, pain, stiffness and other complications in the spine.Psoriatic arthritis is similar to rheumatoid arthritis and tends to affect the fingers.Juvenile arthritis affects children.Most children who have juvenile arthritis will develop symptoms of a sudden fever and swelling knuckles.Juvenile arthritis patients may also develop a rash.Many children who develop this form of rheumatoid arthritis will recover completely, while others face a lifelong chronic condition (www.symptomfind.com).
Some medicinal plants have been reported to be useful in the management of rheumatoid arthritis.Linum usitatissimum (flaxseed) oil can be an effective part of a rheumatoid arthritis treatment regimen.It is rich in Omega-3 fatty acids like alpha-lipoic acid, which have anti-inflammatory properties.Also useful is Tripterygium wilfordii (thunder god vine) which has unique immune suppressant and anti-inflammatory properties.Curcuma longa (turmeric) is a potent herbal remedy for rheumatoid arthritis symptoms.It contains curcumin, which gives it both its characteristic yellow color and anti-inflammatory properties (Neidzooicha, 2013).Other plants with antiinflammatory properties are Ageratum conyzoides, Artemisia copa, Bauhinia tarapotensis, Croton pullei and Maytenus ilicifolia (Lima et al., 2011).There is a lot of information in literature on plants with anti-inflammatory properties worldwide (Rathore et al., 2007;Adams et al., 2009;Vikrant and Arya, 2011;Vishwabhan et al., 2011;Mahesh et al., 2011;Apu et al., 2012).Medicinal plants with analgesic properties also play significant role in alleviating pains commonly associated with arthritis.Many plant analgesics have been reported (Santos et al., 1994;Mathangi et al., 2012).
This study analysed ten medicinal plants for their nutritional and phytochemical components with the aim of ascertaining their justification for ethnomedicinal uses in the management of arthritis.Furthermore, this study presents the test plants for future pharmacological and toxicological studies in related research fields.
Collection and Identification of botanicals
The test plants were identified at species level in the University of Ibadan Herbarium (UIH).Fresh and healthy plant parts of the test plants were either bought from the herbal market (Bode) or collected from University of Ibadan campus.
Preparation of powdered plant materials for experiment
The test plants were washed, cut into small pieces and air dried at room temperature (27 to 30°C) for two weeks until completely dried.The dry plant materials were ground into powder and stored in airtight glass bottles at room temperature prior to experiments.
Proximate analysis of powdered plant samples
The proximate analysis of powdered plant materials was carried out using the AOAC methods (2005) in the Laboratory of the Department of Animal Science, Faculty of Agriculture and Forestry, University of Ibadan.The plant samples were analysed for proximate compositions: moisture content, crude protein, crude fat, ash, crude fibre and carbohydrate.
Phytochemical screening of powdered plant samples
Phytochemical screening of samples was done using the methods of Sofowora (1993) and Evans (2002) as follows.
Alkaloids
The powdered plant sample (500 mg) was weighed and extracted with 10 ml of 2% hydrochloric acid (HCl).The HCl extract was then filtered with Whatman filter paper (No.1) so as to have a clear solution and also to prevent false results.The filtrate of about 2.5 ml was treated with few drops of Dragendorff's reagent.A precipitate indicated the presence of alkaloids.
Anthraquinones
The powdered plant sample (500 mg) was shaken with 10 ml of benzene.The solution was filtered and 5 ml of 10% ammonium hydroxide (NH 4 OH) solution was added to the filtrate.A violet colour was observed in the lower phase.It indicated presence of anthraquinones.
Carotenoids
The extract (10 ml) was added to a test tube and evaporated to dryness on a water bath. 2 to 3 drops of saturated antimony (III) chloride (SbCl 3 ) in Chloroform (CHCl 3 ) was added to the residue.A blue-green colour eventually changing to red indicates the presence of carotenoids.
Flavonoids
A few drops of concentrated hydrochloric acid (HCl) were added to a small amount of an extract (0.5 g) of the plant material.Immediate development of a red colour was taken as an indication of the presence of flavonoids.
Saponins
The sample (200 mg) was shaken with 5 ml of distilled water and then heated to boil.Persistent frothing showed the presence of saponins.
Steroids
The extract (0.5 g) was dissolved in 2 ml of chloroform.Sulphuric acid was carefully added to form a lower layer.A reddish-brown colour at the interphase indicated the deoxysugar characteristic of cardenolides.A violet ring formed just above the layer and gradually spread throughout the layer (sulphuric acid) indicated presence of steroids.
Tannins
The sample (500 mg) was mixed with 10 ml of distilled water and heated on a water bath.The mixture was filtered and ferric chloride (FeCl 3 ) was added to the filtrate.Appearance of blue black colouration showed the presence of tannins.
Statistical analysis of data
All data were statistically analysed using one-way analysis of variance (ANOVA) and expressed as mean ± SD.The Duncan multiple range test (DMRT) was used to test means for significance (P ˂ 0.05).
RESULTS
Table 1 shows the profile of the medicinal plants used in this study.Herbs (40%) are mostly used, followed by shrubs (30%) and trees (30%).The plants contained minerals in varied quantities (Table 2).The ten plants are rich in proteins and fibres (Table 3).Most of the medicinal plants tested positive to alkaloids, anthraquinones, carotenoids, flavonoids, saponins, steroids and tannins (Table 4).
DISCUSSION
The ten plants belong to 10 different families (Table 1).The use value of plant-parts was 50% leaves and 50% roots commonly used for the management of arthritis.Some traditional recipes used for the management of arthritis are: (i) the leaves of P. amarus and S. monostarchyus (1:1) are squeezed together in water and the extract (150 ml) is taken daily after food.(ii) The root of L. cupanioides is dried and powdered; a teaspoonful of the powder is taken in hot water or pap daily after food.
(iii) The root of C. portoricensis is dried and powdered; a teaspoonful of the powder is taken in hot water or pap daily after food.
Phosphorus content was highest in O. spinosa (150.0 mg/100 g) (Table 2), followed by L. cupanoides (120.0 mg/100 g).L. cupanioides (125.0mg/100g) and the least was for P. amarus (45.0 mg/100 g).Three out of the 10 plants had high calcium content: O. spinosa (180.0 mg/100 g), N. lotus (160.0 mg/100 g) and S. monostachyus (125.0 mg/100 g).Calcium builds and maintains strong bones and teeth.A daily intake of 1,500 mg is recommended for people with inflammatory conditions.Calcium needs to be combined with phosphorus and vitamin D to be more effective.People with rheumatoid arthritis who took 1,000 mg of calcium along with 500 IUs of vitamin D reversed steroid-induced bone loss and gained bone mass as well (Holt, 2011).N. lotus had the highest iron content (8.0 mg/100 g), followed by L. cupanioides (7.50 mg/100 g) and M. puberula the least at 2.45 mg/100 g.There is an earlier report on high iron content (7.78 mg/g) in N. lotus (Okayi and Abe, 2000).Magnesium was highest in P. amarus (14.0 mg/100 g), followed by C. lutea (4.5 mg/100 g) and the least was for T. alnifolia (1.0 mg/100 g).The antioxidant activity and ion profiles of C. lutea have been reported (Nwidu et al., 2012).The highest zinc value (0.1 mg/100 g) was observed for L. cupanioides, C. portoricensis and P. amarus.Zinc is often deficient in people with arthritis and a daily intake of 50 mg is recommended for arthritis sufferers (Holt, 2011).The findings of the present study on mineral components of the test plants conform to the reports of previous authors (Nwidu et al., 2012;Afolabi et al., 2012).Overall, O. spinosa is a valuable plant in the management of arthritis due to its high phosphorus and calcium content.
aethiopicum was highest in nutritional constituents (moisture, crude protein and crude fibre), a result that agrees with the findings of Chinedu et al. (2011).
Conclusion
The test plants contained significant mineral, proximate and phytochemical components.The antioxidant, analgesic and anti-inflammatory properties of some of the plants have been reported, and these therapeutic properties are significantly relevant in the management of arthritis.The nutritional components of the test plant might complement the phytochemicals in alleviating pain and reducing inflammation in arthritis
Table 1 .
Profile of ten medicinal plants used in the management of arthritis.
Table 2 .
Mineral constituents of ten medicinal plants used in the management of arthritis.
Table 3 .
Proximate components of ten medicinal plants used in the management of arthritis.
Table 4 .
Phytochemical quality of ten medicinal plants used in the management of arthritis. | 2018-12-07T20:53:27.391Z | 2014-06-22T00:00:00.000 | {
"year": 2014,
"sha1": "95b0a762555a4dcb546c15f8e695c99828e23448",
"oa_license": "CCBY",
"oa_url": "https://academicjournals.org/journal/AJPP/article-full-text-pdf/38CF29845469.pdf",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "95b0a762555a4dcb546c15f8e695c99828e23448",
"s2fieldsofstudy": [
"Medicine",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
256538686 | pes2o/s2orc | v3-fos-license | Account-Based Marketing Strategy for B2B Company in Indonesia
org
INTRODUCTION
Since 2018, there has been a digital transformation of taxation in Indonesia [1].This explains the increasing number of people using tax software or application.Based on Directorate General Tax (DGT), since 2017 until 2021 the number of taxpayers that moved from manual taxation into digital taxation significantly increased [2].However, this opportunity has not been utilized optimally by one of the tax application providers.These tax application providers in this research focuses on business-to-business (B2B) segmentation.Constraints in the lead generation process caused the company to not have the number of clients and revenues that were in line with the target.Moreover, all client providers only come from the SOE sector (State-Owned Enterprises) and are not diversified from various industrial sectors.One reason is because the sales cycle exceeds the time limit.As a result, the author wishes to delve deeper into customer behavior in the B2B segment when selecting a supplier.Then, from the results of the analysis, the authors provide recommendations on the right strategy and adjust it to meet customer expectations.To do so, Account-Based Marketing (ABM) strategies are studied as a field of study.ABM can shorten the sales cycle.Funnel acceleration campaigns help sales teams engage more influencers and decision makers in target accounts and quickly move them to the next positive stage of the buying cycle.Companies can reach more stakeholders, helping build consensus on company target accounts and moving them through the pipeline more quickly [3].For now, the research that author did is the only study that analyzes ABM strategy, especially for the B2B segment in Indonesia.It is hoped that this ABM strategy can become a new formula that can be used by all B2B companies in Indonesia to overcome problems related to marketing activities.
LITERATURE REVIEW
Account-Based Marketing (ABM) is a focused approach to B2B marketing in which marketing and sales teams work together to target best-fit accounts and turn them into customers.ABM is nothing new, identifying and targeting key customers has always been a best practice for B2B marketing and sales teams [3].Besides, technology is key in executing the ABM program.Most importantly, it plays a crucial role in enabling sales and marketing alignment around the goals of the ABM program.It enables the teams to; identify target accounts and key contacts within the accounts, understand and create audience segments, execute content and web personalization, manage multi-channel engagement, perform sales and marketing analytics, measure results against a set of key performance indicators and align sales and marketing strategically [4].Important for a company to have people with the required set of skills and knowledge to take on the strategy so as to ensure that it is successful.Getting all the relevant internal stakeholders within the company behind the ABM strategy is also important as doing so makes it seamless to create consistent experiences for the priority accounts [4].Based on surveys conducted by marketers around the world and various recent studies, ABM appears to be more effective in providing a better return on investment.The results showed that 97% of respondents were more likely to adopt her ABM, yielding slightly higher returns than other similar marketing campaigns [5].Account-Based Marketing have 3 types such as (a) Strategic ABM which is the share of future revenue is important enough to dictate the future of the business.In strategic ABM, account-based marketers are an integral part of the account team.(b) ABM lite which is the most common type of ABM.In ABM Lite.Technology becomes increasingly important, helping to automate account and stakeholder insight processes, campaign execution, and measurement.(c) ABM programmatic primarily used for new accounts to generate leads within targeted named accounts, whether or not they have indicated intent to buy [6].
RESEARCH METHOD
The methodology used in this research is Qualitative Research, which focuses on obtaining in-depth and detailed data.The interactions between researchers and respondents were carried out through the in-depth interview method.Interview is a way of gathering qualitative data by asking respondent specific questions concerning social processes of behaviors of interest.An interview is an open-ended approach where the respondents is free to answer the question in his/her own words.Although the rese-archer would have worked out in advance the particular topics that will be raised in the questions, the questions themselves are not written down in a formal questionnaire [7].A good interview (both through telephone and face-to-face) is very important in B2B research because respondents are also important to the company, must be handled with care, and feel that the interview is useful [8]. 1. explains the list of respondents in conducting In-depth Interview.In this research, interview is conducted in various ways, such as face-to-face and online interviews via online meeting platforms.This is adjusted to the conditions of the respondent (both of existing customers and potential customers).
ANALYSIS
The author conducted in-depth interviews for approximately two weeks, both face-to-face and through online meeting platforms.
Based on the results of these interviews, the following analysis results were obtained:
SOE in Railway Transportation
Existing Customers 1. Want a specific proposal that related to the industry 2. Brand is easy to find on search engine.
2.
SOE in Toll Road Management Existing Customers 1. Brand is easy to find on search engine.SOE in Oil and Gas Industry Potential Customers 1. Need a guideline that related to the industry 2. Brand is easy to find on search engine.
4.
Private Company in Coal and Mining Industry Potential Customers 1. Want a specific proposal that related to the industry.2. Brand is easy to find on search engine and at the top of search engine result page.
Private Company in Power Plant Producer Industry
Potential Customers 1. Want a content of proposals that related to the industry.2. Brand is easy to fine on search engine.Source: Author, 2023 From the results of the analysis conducted by the author, there are two expectations with high intensity that customers have when they search for suppliers, especially providers of tax application services.Customers expect potential suppliers to provide specific proposals or instructions regarding tax information, according to the industry they are involved in.In addition, the initial stage when they search for suppliers is to do an online search through search engines.Some customers believe that the credibility of a brand is also influenced by its online presence.Moreover, if the brand's or supplier's website is ranked at the top of the search page, the level of trust will be even higher.From some of these expectations, the author recommended a solution for all of the B2B companies in Indonesia to maximize their marketing activities through Account-Based Marketing (ABM) strategy.An organization uses Account-Based Marketing to get a high response rate [9].In carrying out the ABM strategy, the objective to be achieved is targeting a strategically named account so it can be acquired to become a new customer.Apart from that, specifically, the type of ABM strategy that will be used is ABM Lite.This type of ABM is Targeting 10-100 accounts that share similar characteristics, challenges, and initiatives [3].Also, ABM Lite is used for accounts that are still strategic but unable to be addressed with full ABM due to resource constraints, or are second tier.These accounts could still be important but do not call for the investment of the top tier [10].To carry out the ABM strategy with the type of ABM Lite, the author uses ABM Funnel that adopted from #FlipMyFunnel model by Sangram Vajre in 2016.
Table 1 .
List of Respondents
Table 2 .
Results of In-depth Interview | 2023-02-03T16:10:59.424Z | 2023-02-01T00:00:00.000 | {
"year": 2023,
"sha1": "b7688647adc805fbc74c1187b1c58d9f4a8171d9",
"oa_license": "CCBY",
"oa_url": "http://djpb.kemenkeu.go.id/tk/images/peraturan_TK/KMK-91_2021.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "c2629ea457fb1a34fd077d7cce3a350648ecb672",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
245832618 | pes2o/s2orc | v3-fos-license | Lessons From the Clinic: ADPKD Genetic Test Unraveling Severe Phenotype, Intrafamilial Variability, and New, Rare Causing Genotype
Claudia Izzi, Chiara Dordoni, Elisa Delbarba, Cinzia Mazza, Gianfranco Savoldi, Laura Econimo, Roberta Cortinovis, Letizia Zeni, Eva Martin, Federico Alberici and Francesco Scolari Division of Nephrology and Dialysis, Department of Medical and Surgical Specialties, Radiological Sciences, and Public Health, University of Brescia and ASST-Spedali Civili, Brescia, Italy; Medical Genetics Clinic, Department of Obstetrics and Gynecology, ASST-Spedali Civili, Brescia, Italy; Medical Genetics Laboratory, ASST-Spedali Civili, Brescia, Italy; Università degli Studi della Campania Luigi Vanvitelli, Napoli, Italy; and Radiology Unit, Montichiari Hospital, ASST-Spedali Civili, Brescia, Italy
A utosomal dominant polycystic kidney disease (ADPKD) is the prevalent inherited renal disease worldwide. Cystogenesis originates focally in the tubule and usually starts in utero although ADPKD symptoms usually develop in the fourth decade. The progressive cystic enlargement leads to end-stage renal disease in approximately 70% of patients with a median age of 58 years. Family history is present in >85% of cases; in 10% to 15% of cases, a family history may be absent because of de novo mutation, mosaicism, or mild disease. 1,2,S1, S2 Pathogenic variants in PKD1 and PKD2 genes are responsible for 60% to 78% and 15% to 26% of ADPKD, respectively; approximately 10% to 15% of patients have no recognizable pathogenic variants. 1,2,S1,S2 Recently, whole-exome sequencing studies identified mutations in other cystogenes (e.g., GANAB, DNAJB11, ALG8, ALG9) in a small proportion of patients with ADPKD. 3,S3 Somatic mosaicism can be an alternative explanation of the unresolved cases; finally, some patients may harbor missed rare pathogenic variants in noncoding region of PKD1 and PKD2. 1 Renal phenotype variability is well-recognized in ADPKD, because of locus (PKD1 vs. PKD2) and allelic effect (protein-truncating vs. proteinnontruncating variant), gene modifiers, and stochastic and environmental factors. 4,S4,S5 Nevertheless, in 1% to 2% of patients with ADPKD, intrafamilial variability may be extreme, characterized by onset in perinatal period in very early onset ADPKD (ADPKD-VEO) or before age of 15 years in early onset ADPKD. 4 ADPKD-VEO/early onset ADPKD may carry unusual complex genotypes, characterized by biallelic PKD1 or PKD2 in transinheritance, digenic PKD1/PKD2 variants, or transheterozygosity for PKD1 and PKHD1 or HNF1B. 5-7,S6-S8 Here, we describe a molecular study performed in 7 ADPKD pedigrees of our single-center ADPKD cohort (186 index patients) revealing a complex PKD genotype, with digenic or biallelic inheritance, including the first adult patient with a digenic inheritance because of PKD1 and PKHD1 variants. Detailed methods and clinical and molecular data are available in Supplementary Material (S1 and S2).
The key findings of our study were threefold. First, we confirm the association between the most severe renal phenotypes and complex genotype, including biallelic inheritance and digenic inheritance, with mutations in different cystogenes (PKD1/PKD2 and PKD1/PKHD1). Second, our data suggest the importance of genotyping in the presence of discordant renal disease severity among affected family members. In this clinical setting, genotyping has a crucial role in terms of diagnosis (explaining the genetic background of the intrafamilial variability) and major implications for reproductive counseling (see Supplementary Material S3 for detailed genetic counseling information). Third, we first report the causative role of variants located in the untranslated region of PKD1 gene, suggesting some genetically unexplained cases could harbor mutation in noncoding regions of the PKD genes. 3 Clinical and genetic data of families are detailed in Table 1 and Supplementary Material S2.
Family trees are detailed in Supplementary Figure S1. In families 1 (Figure 1a and b) and 2, index cases (ICs) presented ADPKD-VEO mimicking autosomal recessive polycystic kidney disease (ARPKD) giving prenatal onset. ADPKD-VEO and ARPKD are difficult to differentiate in perinatal/neonatal period; however, to reach a conclusive diagnosis is relevant for renal prognosis, which is poorer in ARPKD, usually aggravated by hepatic fibrosis (Table 1). 3,5,S9 Both ICs were negative for PKHD1 variants and revealed coinheritance of a PKD pathogenic variant with a hypomorphic PKD variant already identified in ADPKD-VEO cases. 4-6,S4-S8 Moreover, recently, in the largest series of ADPKD-VEO, 7 a high prevalence (70%) of biallelic PKD1 variants (hypomorphic variants in trans with a pathogenetic variant) was reported; biallelic PKD2 variants or transheterozygous PKD1 and PKD2 variants were found in few additional patients with ADPKD-VEO. The described complex genotypes lead to ADPKD-VEO genesis likely for a mechanism related to reduced gene dosage, according to the threshold model of cystogenesis. 8,S9,S10 Family 3 is of interest because it underlines the need to go beyond the "simple" segregation of the germline PKD familiar variant in pedigree with relevant clinical variability. Indeed, in the youngest twin daughters, the discordant early onset ADPKD phenotype was explained by the occurrence of a de novo pathogenic variant in cis with the paternal PKD1 mutation, thus contributing to more severe phenotype (Figure 1c and d). Families 4, 5, and 6 exemplify digenic inheritance. In family 4, coinheritance of PKD1 and PKD2 variants is likely the major contributor to intrafamilial variability in adult patients. The most severely affected brothers (Figure 1e) carried both a pathogenic PKD1 missense variant and a truncating PKD2 variant, whereas the youngest sibling with a milder phenotype presented only the PKD2 variant.
In family 5, the clinical diagnosis of severe ADPKD with early manifestations was guided by the severely enlarged polycystic kidneys, typical of dominant form of the disease, and the absence of signs of liver fibrosis. The IC harbored both a de novo, likely pathogenic PKD1 variant and 2 in trans PKHD1 missense variants classified as likely pathogenic and variant of unknown significance which probably contributed to worsen the phenotype (Figure 1f and g). Mutations in multiple PKD genes exerting an aggravating effect have already been reported, that is, Bergmann et al. 6 described 2 clinically discordant ARPKD fetuses born from a mother with PKD2. The authors suggested that the worsening of ARPKD disease in a fetus was due to the coinheritance of biallelic PKHD1 pathogenic variants with the maternal PKD2 variant. In a recent series of early ADPKD, S8 heterozygous PKHD1 changes were detected in addition to the familial mutation in 4 patients. To explain the PKHD1 aggravating effect, Olson et al. 9 described synergistic interactions between PKHD1 and PKD1 in murine models; indeed, digenic murine models for PKHD1 and PKD1 genes were found to develop a more severe renal cystic disease when compared with single PKHD1 homozygous murine model.
To the best of our knowledge, this is the first case of adult ADPKD aggravated by the presence of PKHD1 changes, supporting the hypothesis that PKHD1 and PKD1 gene products cooperate in a common pathway to maintain tubular integrity. S11,S12 In pedigree 6, the IC presented a de novo atypical ADPKD characterized by renal cysts in slightly increased kidneys with advanced kidney failure (Figure 1a-h). Molecular analysis revealed a homozygous PKD1 hypomorphic variant; the healthy parents were found to be heterozygous. Of note, this variant has already been identified (in addition to another
RESEARCH LETTER
Kidney International Reports (2022) 7, 895-898 PKD pathogenic variant) in 3 patients with VEO disease. S8 The previous and present data indicate a hypomorphic variable effect in heterozygous state for this variant, whereas in homozygous state, it may cause mild cystic phenotype resembling late-onset ADPKD.
In family 7, the IC presented a de novo ADPKD phenotype. Next-generation sequencing analysis failed to identify the pathogenic variant, and we first describe a pathogenic PKD1 deletion in noncoding 5 0 -untranslated region without involving exon 1 detected by multiplex-ligation-dependent probe amplification analysis (Supplementary Figure S2). The de novo occurrence and cDNA study supported the causative role of the deletion. This finding prompts genomic analysis beyond the coding region to enhance mutation detection in ADPKD when coding variants are not found.
In conclusion, our study supports an evolving role of genetic testing for ADPKD for diagnosis, prognosis, and familial counseling. Indeed, genotyping and elucidation of molecular mechanism underlying atypical but not rare ADPKD scenarios in the patients is important for the understanding of polycystic kidney disease, linking dosage effect and variability of phenotype severity, and for genetic counseling. We finally suggest that genetically unresolved cases could harbor pathogenic variants located in noncoding regions of PKD genes.
ACKNOWLEDGMENTS
The authors are especially grateful to all pateints and their families. The work was supported by grants of the Polycystic Kidney Italian Association (AIRP, www. renepolicistico.it).
SUPPLEMENTARY MATERIAL
Supplementary File (PDF) S1. Supplementary Patients and Methods. S2. Clinical and Genetic Patients' Data. S3. Implications for Genetic Counseling. S4. Supplementary References. Figure S1. ADPKD families' pedigrees revealing a marked intrafamilial phenotypic variability owing to complex genotypes. The genotypes at PKD1 and/or PKD2 genes and PKHD1 gene for pedigree 5 are indicated below with each subject symbol. Figure S2. Potential transcription factor binding sites present in the deleted region upstream of the translation start site (ATG) are indicated. The transcription start site is indicated with an arrow. B. MLPA analysis of PKD1 gene suggesting the presence of the heterozygous deletion in the 5'UTR (probe PKD1 up 257) C. Zigosity of the SNP rs34197769 G/A (exon 35) on genomic DNA (IGV visualization) and cDNA (Sanger sequencing). Hemizigosity on cDNA indicates the absence of the transcript from one allele (see text). | 2022-01-10T16:05:33.694Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "ddc4d36a66e8142fc208cea7b5773b3b1b357d19",
"oa_license": "CCBYNCND",
"oa_url": "http://www.kireports.org/article/S246802492101617X/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4e8e00f4a271b39917b25b6aa7a35a3691e1c75b",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
28718234 | pes2o/s2orc | v3-fos-license | MGRO Recognition Algorithm-Based Artificial Potential Field for Mobile Robot Navigation
This paper describes a novel recognition algorithm which includes mean filter, Gaussian filter, Retinex enhancement method, and Ostu threshold segmentation method (MGRO) for the navigation of mobile robots with visual sensors. The approach includes obstacle visual recognition and navigation path planning. In the first part, a three-stage method for obstacle visual recognition is constructed. Stage 1 combines mean filtering and Gaussian filtering to remove random noise and Gauss noise in the environmental image. Stage 2 increases image contrast by using the Retinex enhancement method. Stage 3 uses the Ostu threshold segmentation method to achieve obstacle feature extraction. A navigation method based on the auxiliary visual information is constructed in the second part. The method is based on the artificial potential field (APF) method and is able to avoid falling into local minimum by changing the repulsion field function. Experimental results confirm that obstacle features can be extracted accurately and the mobile robot can avoid obstacles safely and arrive at target positions correctly.
Introduction
Mobile robots integrate a number of technologies such as machinery, manipulation, sensors, and information processing [1].Equipping mobile robots with sensors is essential for ensuring them to navigate and function in unknown environment [2].
Electromagnetic navigation is widely used in the early stage when mobile robot navigation emerged [3].Electromagnetic navigation requires burying navigation lines in the working area of mobile robots ahead of time with high cost and poor flexibility.Track navigation appeared later where mobile robots calculate and correct their traveling paths continuously according to information from encoders and gyroscopes equipped on the robot.However, this time tends to have large time dependent accumulated error [4].In sensor navigation, sensors such as radar, sonar, ultrasound, and infrared are often integrated in mobile robots for detecting obstacles in unknown environment and for calculating paths or building maps by using path planning algorithms [5][6][7].
Visual sensors such as cameras are able to obtain a wealth of information and are often used in mobile robot navigation.
In the visual navigation process, mobile robot firstly obtains environmental information image from visual sensors.The robot then generates information via image processing and finally determines the best path via path planning algorithms [8,9].Tan et al. proposed a visual navigation system with double modules.One module deals with vision detection, and the other is in charge of planning track.The vision detection module can detect the edge of each area according to the brightness and geometric features of image.The planning track module then updates planning information in real time to guide the moving of a mobile robot autonomously [10].Pei et al. studied the navigation of underwater robot.The information of obstacles above waterline was probed by the stereo vision technology, and then the static position information and dynamic velocity information were calculated to realize collision avoidance and dynamic target tracking [11].Wang et al. constructed a multiview vision system for navigation.This system included multiple cameras to obtain surface information of the navigation scene.The information used for navigation was then achieved from the visual features by using Homography matrix decomposition and bias estimation.Experiments results proved the effectiveness and accuracy of navigation [12].
In summary, there are two most important tasks in visual navigation of mobile robots.One is how to validly extract features of image information from visual sensors such as camera in order to provide the basis for the subsequent path planning.The other is path planning with strong robustness to generate rational paths according to visual features.Since tasks for mobile robot navigation are required to be in real time or near real time, in addition to accuracy, minimizing processing time is also essential in order to meet the demand for rapid navigation of mobile robots.In this paper, we present an approach for feature extraction of navigation images and path planning in the navigation process to achieve satisfactory performance of mobile robot navigation.
MGRO Recognition Algorithm
In order to achieve autonomous navigation of mobile robots, it is vital to obtain environmental images and recognize obstacle information by using visual algorithms.In environmental images from cameras, several issues can impact the accuracy of obstacle recognition, including random noise and Gauss noise existing in the images, low contrast between obstacles and background, and recognition difficulty from inconspicuous features of obstacles.
In this work, we designed a three-stage approach termed MGRO recognition to improve obstacle recognition accuracy of environmental images captured by the mobile robot.Stage 1 combines mean filtering and Gaussian filtering to remove random noise and Gauss noise in the environmental image.Stage 2 increases image contrast by using the Retinex enhancement method.Stage 3 uses the Ostu threshold segmentation method to achieve obstacle feature extraction.
Noise Removal.
Image noise can significantly influence image quality, and random noise and Gauss noise take the highest proportion.Since the multiplicative of the image noise can be converted to additive, these two types of noise can be removed one by one.We first remove random noise with a mean filter and then remove Gauss noise with a Gaussian filter.
A template with the size 3 * 3 pixels is chosen, and the pixel at the center is taken as the processed one.Random noise is determined according to Equation (1) indicates that if the gray-scale value of the center pixel is much different from the average gray-scale value of the surrounding 8 pixels, the center pixel is random noise; otherwise, the center pixel is not random noise.The judgment of the difference is made according to the prior set threshold value .For the pixel determined as random noise, it is replaced with the average gray-scale value of the surrounding 8 pixels to remove the random noise.
Removing Gauss noise is performed with a two-dimensional Gaussian filter, according to ( The two-dimensional Gaussian filter is a circular symmetric function.Image smoothing is mainly controlled by which is an integer greater than zero.When performing denoising, the filter is convolved with the image to process.
Retinex Enhancement.
In actual images, the contrast between foreground and background often tends to be low, which is not conducive to extracting obstacle information.Therefore, we increase the contrast between foreground and background with the Retinex enhancement method in this work.
The basis of Retinex enhancement is that, from the perspective of light intensity, the image information can be divided into incident light intensity and reflected light intensity, which, respectively, forms the low frequency information and the high frequency information of the image.If the proportion of the incident light intensity and reflected light intensity can be changed, the contrast can also be adjusted to enhance the image.The mathematical description of the single scale Retinex enhancement is where (, ) indicates the reflected information of the original image, (, ) indicates the overall information of the original image, and and are the two controlling parameters of the incident information of the image.Note that the proportion of incident and reflected information can be changed by adjusting these two parameters.Here, is not circumference ratio.The single scale Retinex enhancement method only uses one group of and ; therefore, its enhancing capacity is limited to detailed features in different regions.Hence, we use a multiscale Retinex enhancement method in this work to achieve improved effect by adjusting more groups of and , and the final enhancement result is the linear superposition of all groups of single scale enhancement.The mathematical form of the multiscale enhancement is where there are groups of controlling parameters as and in the enhancement, and there is a weight coefficient in each group of the enhancement results.
Improved Ostu Threshold Segmentation.
Since different features exhibit different pixel gray scales in an image, threshold-based segmentation can extract individual features effectively.However, there are often large deviations in manual threshold settings, which are also not conducive to automatic adjustment, making it difficult to satisfy the mobile robot navigation task with real-time performance.Among the methods capable of automatically adjusting threshold values, the one-dimensional Ostu method is often used and is implemented as follows.
Step 1. Count gray scale values of all pixels in the original image.Let indicate the original image; let (0, 1, . . ., − 1) indicate the gray-scale level; and let indicate the pixel number in the image.The relationship among these variables is Step 2. Normalize the pixel number in each gray-scale level according to Step 3. Generate a threshold value randomly, and divide the pixels in the original image into target aggregate 0 and background aggregate 1 ; then calculate their probability and mean according to Step 4. Revise the current threshold value according to target aggregate 0 and background aggregate 1 until attaining the optimal threshold value.
With the real-time requirement for the processing algorithm in mobile robot navigation, it is time consuming to perform the above operation in the full image plane.In addition, background information in the navigation environment is often complicated since there may exist information of multiple obstacles.Hence, it is difficult to split out all the obstacles with the same threshold value in the full image plane.Therefore, we improve the one-dimensional Ostu segmentation method as follows.The original image is first partitioned to generate subimages according to Ostu segmentation is then performed in each subimage, background is set to white, and obstacle information is set to black.Finally, all the subimages are combined, and connecting black areas are closed to form each obstacle's information according to where 1 and 2 are the pixel average of two image areas, 1 and 2 are the pixel variance of these two image areas, 1 and 2 are the pixel number of these two images.
Artificial Potential Field (APF) Navigation Strategy Based on Auxiliary Visual Information
We obtain obstacle information in front of the mobile robot to build a structured map by visual sensors and using the above image processing method.Among all the mobile robot navigation methods in structured environment, the artificial potential field (APF) method is the most commonly used.The basis of the APF method is that, considering the fact that the target position exerts gravity on the robot, the obstacle exerts repulsion on the robot.The resultant of forces formed by the gravity and repulsion makes the robot navigate towards the target position to avoid obstacles.However, there are several limitations in the APF method.Firstly, the robot cannot find the path to the target position when the environment information is too complicated.Secondly, the robot would stop at the target position when falling into a local minimum.Accordingly, we construct a new strategy for mobile robot navigation in this work, combining the obtained visual information of obstacle and assistance from the APF method.Figure 1 illustrates this navigation method that is implemented in the following steps.
Step 1. Determine the starting position and target position of the robot, and construct the straight path between them.
Step 2. Select irregular areas with the rectangular marquee according to obstacle information.
Step 3. Guide the robot to move forward and avoid the obstacle under the gravity of the target position.This step can be divided into several strategies according to the arrangement of the obstacles.
(a) Take the peripheral rectangular marquee as the repulsion calculation region when there is only one obstacle in front of the robot.In order to avoid the APF method falling into local minimum, a new function of the repulsion field is constructed as follows: where is the repulsive field parameter related to the distance between the robot and the obstacles. is a positive number.(, ) indicates the minimum distance between the robot and the obstacle.( − ) indicates the distance between the robot and the target position. is used to adjust repulsive force influence according to the distance from the robot to the target position and is in the interval (0, 1). indicates the repulsion range of the obstacle, as the circular yellow region of the rectangle corner point shown in Figure 1.
The radius of the circular region is set as half the width of the robot body.
Compared with the repulsion function set in the conventional APF method, we introduce the relative distance between the robot and the target position as shown in (8), which is conducive to avoiding falling into local minimum before the robot reaches the target position.The corresponding gravitational field function is where is gravitational field factor related to the distance between the robot and target position and is a positive number.
(b) When the distance of two obstacles is large enough, the contralateral corner of the two rectangular regions can be connected to confirm the probable passing range of the robot, and then the best path is determined.
(c) When the distance of the two obstacles allows the robot to pass but is too narrow, rectangular marquee is no longer used.At this moment, the bulge of the corresponding position of the two obstacles is found, and the distance between them is calculated to search for a neutral position allowing the robot to pass.
(d) When the distance of the two obstacles does not allow the robot to pass, according to situation (3-a), the robot is guided to avoid obstacles from the left side of the left obstacle or the right side of the right obstacle.
Experimental Result
We performed experimental verification to test the validity of the obstacle recognition and navigation method based on visual sensing.We chose the Srv-1 robot (Figure 2) as the mobile robot in the navigation experiment.
Srv-1 robot is a track robot with the size of length 12 cm, width 10 cm, and height 8 cm.The main sensor to receive external information is a CMOS camera set in the front of the robot body with a resolution from 160 * 280 pixels to 1280 * 1024 pixels.This robot was chosen because of its small size and the function of visual sensing, making it suitable for testing the navigation method in the experimental environment (Figure 3).
As shown in Figure 3, there are five obstacles.The starting position of the mobile robot is A, and the target position is B. It can be seen that there is noise in the image.Meanwhile, the obstacles form several shadow areas because of the light source position, causing interferences for the recognition of the obstacles accurately.By using the method described in Section 2, the recognized feature information of obstacles is shown in Figure 4.
As shown in Figure 4, the feature information of the five obstacles was recognized accurately.This is because the Retinex enhancement method increases the contrast between the obstacles and background effectively, and the shadows of the obstacles were also brought into the background.After the contrast was increased, the improved Ostu threshold segmentation method was able to separate the obstacle information accurately.
According to the recognition result, the environmental information of the robot moving process was constructed.According to the navigation algorithm described in Section 3, the planned path for the robot is shown in Figure 5.
It can be seen in Figure 5 that the APF navigation method based on the auxiliary visual information was able to plan an ideal moving path for the robot.From A, the repulsion field, generated by the lower right corner point (O1) of the rectangular marquee formed by the first obstacle, impacted the ideal path from A to B, and the robot avoided the first obstacle to form a new path.After that, the robot returned to the ideal path from A to B after a period of time.Next, the robot avoided the repulsion field at O2 successfully.When entering the region between O3 and O4, because of the narrow width of the region, the robot moved forward along the path parallel to the edge of the rectangular marquee.When reaching the repulsion field at O5, the robot avoided the last obstacle successfully and finally arrived at the target point B.
In order to further verify the performance in a complex environment, the comparison between the proposed algorithm and existing algorithm was made in this paper.A complex environment was constructed further shown as in Figure 6.There are 10 * 10 grids and some obstacle blocks in the scene.Every block is half length and half width of a grid.Similarly, the robot's width size is equal to the block's width.The obstacles are made up of one block, two blocks, or several blocks.In the scene, the number of obstacles is 18.
In the experiment, the initial position of the robot is set in C point and the target position of the robot is set in D point.There are three robots in the experiment.Robot number 1 equipped with vision sensors will be controlled to carry out a task according to the red path which was planned by the proposed algorithm above.Robots number 2 and number 3 equipped with the ultrasonic distance sensor will be controlled to carry out the same task as robot number 1 according to the green and blue path which was planned by the artificial potential algorithm.
From the comparison results shown as Figure 6, the positions of entry of number 2 and number 3 are all far from the central axis of the scene map.This is due to more obstacles in the middle of the scene.These obstacles caused trouble to the information feedback of the ultrasonic sensors and the robots were hard to judge if the situation would be safe or not.The amplification comparison results are shown as Figure 7.
From the partial enlargement results shown as Figure 7, robot number 1 chose the center position with the complex obstacles distributed as the entry and found a safe path to go through the area.This proved that the proposed algorithm is effective and the information acquisition of vision sensors is accurate.Robots number 2 and Number 3 chose the longer path to avoid the whole area with the complex obstacles to ensure safety.
The comparison results between robots number 1 and number 3 are shown as Figure 8. Robot number 1 kept the optimal path under the action of the gravitation of the target position and the local repulsion of the obstacles.Under the action of the gravitation of the target position robot number 3 tried to return the position of the central axis of the scene map.However, because of the ultrasonic sensors feedback information and the safe area setting of the path planning robot number 3 cannot go through the middle of the scene.After repeated failure, it turned back to the further position.By the comparison above, the proposed algorithm has higher effectiveness and better path planning.
Conclusion
Visual sensors are important for mobile robots to receive environmental information.By receiving environmental information with visual sensors and building maps according to obstacle features of the environmental images, the ideal moving path for the mobile robot can be planned successfully.We divide the environmental image processing into three steps: noise removal, contrast enhancement, and obstacle feature extraction.After obtaining the obstacle information with visual sensors, we further design an APF based navigation strategy using auxiliary visual information.In this method, the repulsion field function is improved for the navigation algorithm to avoid falling into local minimum, and the external rectangular marquee of obstacles is used to deal with the irregular visual feature areas.Experimental results demonstrated that the obstacle extraction method proposed in this work can effectively tackle the influence of noise and shadows and accurately extract the obstacle information.The navigation method makes full use of the visual feature information and achieves correct navigation for the mobile robot.
Figure 1 :
Figure 1: Artificial potential field (APF) navigation strategy based on auxiliary visual information.
Figure 6 :Figure 7 :
Figure 6: The path planning results in the complex environment.
Figure 8 :
Figure 8: The path chosen by number 1 and number 3 in the scene. | 2018-04-03T01:16:52.855Z | 2016-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "f6eed665ada5de8a6e82d3ae0ce9d956be25087b",
"oa_license": "CCBY",
"oa_url": "https://downloads.hindawi.com/journals/js/2016/1959160.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f6eed665ada5de8a6e82d3ae0ce9d956be25087b",
"s2fieldsofstudy": [
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science",
"Engineering"
]
} |
235458974 | pes2o/s2orc | v3-fos-license | How did the mental health symptoms of children and adolescents change over early lockdown during the COVID‐19 pandemic in the UK?
Abstract Background The COVID‐19 pandemic has caused extensive disruption to the lives of children and young people. Understanding the psychological effects on children and young people, in the context of known risk factors is crucial to mitigate the effects of the pandemic. This study set out to explore how mental health symptoms in children and adolescents changed over a month of full lockdown in the United Kingdom in response to the pandemic. Methods UK‐based parents and carers (n = 2673) of school‐aged children and young people aged between 4 and 16 years completed an online survey about their child's mental health at two time points between March and May 2020, during early lockdown. The survey examined changes in emotional symptoms, conduct problems and hyperactivity/inattention. Results The findings highlighted particular deteriorations in mental health symptoms among preadolescent children, which translated to a 10% increase in those meeting possible/probable caseness criteria for emotional symptoms, a 20% increase in hyperactivity/inattention, and a 35% increase in conduct problems. In contrast, changes among adolescents were smaller (4% and 8% increase for hyperactivity/inattention and conduct problems, respectively) with a small reduction in emotional symptoms (reflecting a 3% reduction in caseness). Overall, there were few differences in change in symptoms or caseness over time according to demographic characteristics, but children and young people in low income households and those with special educational needs and/or neurodevelopmental disorders exhibited elevated symptoms (and caseness) at both time points. Conclusions The findings highlight important areas of concern in terms of the potential impact of the first national lockdown on children and young people's adjustment. Developing an understanding of who has been most severely affected by the pandemic, and in what ways, is crucial in order to target effective support where it is most needed.
INTRODUCTION
While children and young people are at low risk of infection from coronavirus disease 2019 , the pandemic and the measures taken to try to minimise the spread of the virus, such as lockdown, school closures and social distancing, have caused extensive disruption to the lives of children and young people.
Understanding the psychological effects of the COVID-19 pandemic on children and young people, in the context of known risk factors is crucial to mitigate the effects of the pandemic (Holmes et al., 2020).
Early cross-sectional findings have given an indication that children and young people have had relatively high levels of mental health symptoms during the pandemic (Racine et al., 2020).
For example, in China, Xie et al. (2020) found that 22.6% of 2330 young people survey reported elevated depressive symptoms and 18.9% reported elevated anxiety symptoms during lockdown. We have also recently started to see reports based on comparisons between children and young people's mental health prior to the pandemic and at a particular point of time during the pandemic. Of particular note, the NHS Digital Survey of children and young people's mental health in England (NHS Digital, 2020) highlighted that in July 2020 (after the end of national lockdown but while many restrictions were still in place) the proportion of children and young people with a probable mental health disorder was one in six, compared to one in nine in 2017. While it is possible that this deterioration may have been a continuation of the pattern that had been seen from previous surveys (Sadler et al., 2018), the fact that over 40% of young people reported that they felt that the pandemic had made their mental health worse highlights the potential contribution of the pandemic to this worsening picture.
However, the lack of longitudinal data on change over time during the pandemic limits our understanding of how particular features of the pandemic, such as national lockdown (which included school closures for most children), were associated with changes in mental health.
It is likely that the impact of the pandemic will differ depending on a range of factors, including those already known to be risk factors for poor mental health generally. For children and young people, this includes being from a low income household (Gutman et al., 2015;Wickham et al., 2017), a single parent household (partly due to material disadvantage) (Dunn et al., 1998;Spencer, 2005), and having special educational needs (SEN) that require special health and education support (Gadeyne et al., 2004;Linna et al., 1999). Indeed, there are already indications of a high prevalence of emotional and behavioural difficulties among young people with neurodevelopmental disorders (NDs) during early lockdown (Nonweiler et al., 2020). In general, there are also differences in the risk of developing mental health difficulties on the basis of age and gender, with boys of primary school age more likely to have any mental disorder (12.2%; most commonly behavioural problems) than girls of the same age (6.6%), but by secondary school age, boys and girls are equally likely to have any mental disorder with higher rates of emotional disorders among adolescent girls (Davis et al., 2018).
Finally, the impact of the pandemic may have differed between age groups. For example, compared to adolescents, younger children may have faced particular disruption given that they are likely to be less able to access learning independently while out of school, are more dependent on their parents (who are known to have experienced high levels of stress during lockdown; (Office for National Statistics, 2020), and less able to connect with peers in meaningful ways (e.g., remotely through electronic devices rather than face to face play). However, adolescents might be particularly affected due to their normative drive for autonomy and social connections (Steinberg, 1990), which were curtailed during lockdown.
The Co-SPACE (COVID-19: supporting parents, adolescents and children during epidemics) study was set up to track the trajectories of mental health of children and young people during the COVID-19 pandemic in the United Kingdom through a monthly online survey completed by parents and carers of children and young people aged 4-16 years. In this paper, we set out to answer the following research questions: 1. How did mental health of participating children and adolescents change during early lockdown in the UK-in terms of both continuous symptoms and 'caseness'? 2. How did this vary on the basis of (i) child gender, (ii) household income (living in poverty or not) and family composition (i.e., single adult family or not), and (iii) presence of SENs/NDs? This early lockdown period in the United Kingdom involved a national lockdown from the end of March 2020 (including across the devolved nations), during which schools were closed (except to children of key workers and vulnerable children), people were not allowed to mix with others outside their household, nonessential shops, entertainment venues and playgrounds were closed, and people were instructed to stay at home except for very limited purposes (e.g., food shopping). Restrictions began to be eased across the UK from the beginning of June 2020.
Participants
Parents and carers (over the age of 18 years) of school-aged children and young people aged between 4 and 16 years who lived in the United Kingdom were eligible to take part. The current paper focuses on the 2673 participants who completed the baseline survey online between the 30th March and the 30th April 2020 and a follow-up survey 1 month after baseline (30th April 2020-30th May 2020), and completed the Strengths and Difficulties Questionnaire (SDQ; Goodman, 1997;Goodman et al., 1998) at both time points. Demographic information for participants and their children can be found in Table 1.
Recruitment
Participants were recruited through a variety of means, including promoting the study through partner organisations, networks, charities and schools, print and digital media coverage and social media.
Procedure
Parents/carers provided informed consent and then completed the baseline survey online between 30th March and the 29th April 2020 and a follow-up survey 1 month after baseline (30th April 2020-30th May 2020). If participants had more than one child within this age range, they were asked to choose one 'index' child to report on each time. A link to the follow up survey was sent via email to each parent/ carer one calendar month after they had completed their baseline survey. Full procedural information can be found in the protocol (osf. io/8zx2y). Ethical approval for the study was provided by the University of Oxford Medical Sciences Division Ethics Committee (reference R69060).
Measures
Demographics. Parents/carers reported on their own and their child's age, gender, and ethnicity and on their total household income. Due to the typical differences in patterns of child and adolescent mental health and their different educational experiences, we dichotomised age at baseline as 4-10 year olds (children) and 11-16 year olds (adolescents). A household income of less than £16,000 per year was categorised as 'low household income' as it reflects an income below 60% of the median income in the United Kingdom. Parents/carers were asked whether or not their child had a SEN and/or diagnosed attention deficit hyperactivity disorder (ADHD) or autism spectrum disorder (ASD). Parents/carers were also asked about their family composition (to establish whether there were any other adults aged 18 years or older living in the household).
SDQ (Goodman, 1997;Goodman et al., 1998). The mental health of children and young people in the survey was measuring using the SDQ, a brief behavioural screening questionnaire. This measure has been validated in both community and clinical samples and is able to detect psychiatric diagnoses with good sensitivity and specificity (Goodman et al., 2000;Stone et al., 2010). The parent/carer-report version was used due to its satisfactory psychometric properties across the study age range (Stone et al., 2010). The SDQ consists of 25 items, each rated on a 3-point Likert scale ranging from 0 ('not at all') to 2 ('certainly true'). There are five subscales, each consisting of five items, assessing emotional symptoms, conduct problems, hyperactivity/inattention, peer relationship problems and prosocial behaviour. In the current paper, we examine the three subscales that relate to mental health symptoms: emotional symptoms (related to fear/worry, clinginess, sadness and somatic symptoms), conduct problems and hyperactivity/ inattention. A subscale score is obtained by summing the responses in each of the subscales (range: 0-10). Where there was missing data, the person mean was imputed on responses to at least three of the five subscale items. The SDQ also includes an impact supplement which assesses the functional impairment of the identified problems across four domains (the child's home life, friendships, school-life and leisure activities) and distress. Impact items are scored on a four point scale from 0 if either 'not at all' or 'only a little', 1 if 'quite a lot' and 2 if 'a great deal'. Scores on the impairment and distress items are totalled, leading to a maximum total impact score of 10. As is a standard requirement for the SDQ, at the first assessment the SDQ asked about symptoms and impact over the last 6 months, and follow-up assessments asked about the preceding month. The likelihood that a child or young person may have a mental disorder can be classified using a pseudo diagnostic algorithm as 'unlikely', 'possible' or 'probable', based on both symptom (>80th percentile = possible) and impact ('quite a lot' in at least one domain = possible) ratings (Goodman, 1999;Goodman et al., 2000).
In this study we followed the 'lenient' approach used by Nielsen et al. (2019) with preadolescent children, distinguishing between 'possible'/'probable' and 'unlikely' cases, to err on the side of being inclusive to those who might be a potential 'case'.
Data analysis
All analyses were carried out in R Studio (v. 1.3.1093) using R (version 4.0.3). We calculated SDQ caseness categories using syntax downloaded from: http://www.sdqinfo.com/py/sdqinfo/c0.py. To examine change over time, the main effect of time point on SDQ symptoms was examined within separate linear mixed effects models for children and adolescents, and on SDQ caseness within binomial generalised mixed effects models (using a bobyqa optimizer, unless stated otherwise). We next repeated the models above, first with the inclusion of each variable of interest individually (where they were not already included in the models as covariates) and again with those variables as an interaction with time point, to establish how patterns of change in mental health symptoms varied on the basis of (i) child gender, (ii) household income (low [poverty level] income or not) and family composition (i.e., single adult family or not), (iii) presence of SENs/NDs (including ASD and ADHD). Models were run using the lmer function within the lme4 package (v. 1.1-23; Bates et al., 2015). Each model was estimated using maximum likelihood estimations (with laplace approximation for caseness models) and included dichotomous variables of child age, gender, and ethnicity and total household income and employment status as fixed effects.
A random intercept was included for each participant and time was WAITE ET AL.
RESULTS
Question 1. How did mental health of participating children and adolescents change during early lockdown in the United Kingdom -in terms of both continuous symptoms and 'caseness'? Table 2 presents the model results of the main effects of time for parent/carer reported emotional symptoms, conduct problems, and hyperactivity/inattention and caseness. All means, percentages and confidence intervals can be found in Tables S1 and S2 (available as an online data supplement).
Between baseline and follow-up, for children (age 4-10 years) there was a small increase in emotional symptoms and conduct problems and a larger increase in hyperactivity/inattention (standardised mean differences [SMD] of 0.05, 0.16 and 0.22, respectively). Figure 2 presents the change in time for each level of each predictor, split by age group. All means and confidence intervals for SDQ symptom scores, as well as percentages and percentage change for cases, can be found in Tables S1 and S2 (available as an online data supplement). Given the unprecedented context in which this study took place, it remains unclear why particular groups of children and young people experienced particular patterns of change in mental health symptoms and caseness. The finding that increases in mental health difficulties were most pronounced among primary school aged children may be surprising, given the known risk for the onset of mental health problems in adolescence (e.g., Kessler et al., 2005). However, on the other hand, increases in family stress caused by the demands of home-schooling alongside working (NHS Digital, 2020) may have been a particular challenge for parents of younger children who would have been more reliant on parents for support with education, as well as generally monitoring, entertaining and providing for them throughout the day. On the other hand, adolescents may have been relatively independent during lockdown, and were also likely to have been able to better maintain peer relationships through, for example, online chats, messaging and gaming. The potential impact on both family stress and peer relationships on adjustment during lockdown will be a critical area for future research.
The increases in externalising (conduct, hyperactivity and inattention) problems across the age range are of particular concern, given the wide range of associated negative consequences for individuals, families and societies (Erskine et al., 2016). It will be important to carefully monitor this over time to understand to what extent they reflect particular challenges associated with the early lockdown period, and whether they resolve once children and young people are able to return to (some of) their normal activities or persist and require further support. Notably, however, emotional symptoms somewhat declined among adolescents. The lack of prepandemic data and day-to-day data right from the start of the pandemic makes this difficult to interpret, as it is possible that, for example, adolescents' levels of emotional symptoms had increased prior to the start of this study and we saw a gradual return to 'normal' levels. Alternatively, it is possible that aspects of lockdown brought some benefits to participating adolescents, particularly due to a reduction in academic or social pressures (which are both known to be high among adolescents; e.g., Peña-López, 2016).
Whilst our findings are based on parent/carer report, at least one other study has reported a reduction in adolescent self-reported anxiety levels among year 9 (13-14 year olds) from pre to during F I G U R E 1 Estimated marginal means and % caseness for Strenghts and Difficulties Questionnaire emotional symptoms, conduct problems and hyperactivity/inattention from baseline to follow-up, by age group WAITE ET AL.
-7 of 10 pandemic assessments (Widnall et al., 2020). Notably we also saw particular reductions in emotional symptoms among adolescents from single-adult households and externalising problems among problems (e.g., arguments) seen across the age range may reflect broader distress observed in the form of behavioural disturbance (Angold & Costello, 1993). It is also important to highlight that at the first assessment, the SDQ requires that symptoms be rated over the past 6 months, which then changes to the past month at subsequent follow-up time periods. Thus, although parents' ratings at follow-up are of their child's symptoms during the lockdown period, the baseline ratings cover a large time span, which should be taken into account when interpreting changes over time. Furthermore, while we examined commonly occurring mental health symptoms for this age range, using a well-validated screening instrument, we did not assess the presence of mental health disorders against the diagnostic criteria of international standard classifications such as ICD-10 (World Health Organization, 1992). Further research is also required to understand other mental health difficulties, such as those related to sleep or eating difficulties.
It is also important to highlight that the study population was not a representative sample, and there was clear bias towards more affluent families from White British backgrounds. Given the markedly elevated levels of mental health symptoms and caseness found among children and young people in low income households within our study, we expect that the levels of difficulties we have reported here are likely an under-estimation of the extent of difficulties experienced more broadly in the community, and detection of predictors of change in mental health symptoms over time may have been limited by relatively small samples among some groups. Indeed, the very small samples within, for example, individual ethnic groups unfortunately meant that we were limited to combining children and adolescents from Black, Asian and ethnic minority backgrounds in to one category, which is a clear limitation given the very different experiences during the pandemic (Levita, 2020). Other factors such as the children and families' experience of COVID-19, parental employment status (including whether they were a key worker, working out of the home and in relatively high risk environments) and child school attendance will also be important to consider in future investigations.
This rapid longitudinal study in response to the first COVID-19 lockdown in the United Kingdom has highlighted deterioration in parent or carer reported externalising behaviours among participating children and, to a lesser extent, adolescents over 1 month of lockdown. While emotional symptoms also increased among preadolescents in this study, there was a small decrease among adolescents, and this was also the case for externalising problems among children and adolescents with SENs. As such the findings highlight important areas of concern in terms of the potential impact of the first national lockdown on children and young people's adjustment. It will be important to further track the trajectories of mental health of children and young people over the course of the pandemic beyond early lockdown, as schools reopen, as further regional and national lockdowns occur and the economic impacts are more keenly felt.
Developing an understanding of who has been most severely affected by the pandemic, and in what ways, is crucial to target effective support where it is most needed. | 2021-05-14T13:14:38.765Z | 2021-04-01T00:00:00.000 | {
"year": 2021,
"sha1": "b7fdce28c1aa849933889225dfd88f03f57002f7",
"oa_license": "CCBY",
"oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/jcv2.12009",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "c802ea2576f26d7f9d0008288a8cd8e51dcd5fe0",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
125875572 | pes2o/s2orc | v3-fos-license | Propagation of solitons in a two-dimensional nonlinear square lattice
We investigate the existence of solitary waves in a nonlinear square spring–mass lattice. In the lattice, the masses interact with their neighbors through linear springs, and are connected to the ground by a nonlinear spring whose force is expressed as a polynomial function of the masses out-of-plane displacement. The low-order Taylor series expansions of the discrete equations lead to a continuum representation that holds in the long wavelength limit. Under this assumption, solitary wave solutions are sought within the long wavelength approximation, and the subsequent application of multiple scales to the resulting nonlinear continuum equations. The study focuses on weak nonlinearities of the ground stiffness and reveals the existence of 3 types of solitons, namely a ‘bright’, a ‘dark’, and a ‘vortex’ soliton. These solitons result from the balance of dispersive and nonlinear effects in the lattice, setting aside other relevant phenomena in 2D waves such as diffraction that may lead to a field that does not change during propagation in nonlinear media. For equal constants of the in-plane springs, the governing equation reduces to the Klein–Gordon type, for which bright and dark solitons replicate solutions for one-dimensional lattices. However, unequal constants of the in-plane springs aligned with the two principal lattice directions lead to conditions in which the soliton propagation direction, defined by the group velocity, differs from the wave vector direction, which is unique to two-dimensional assemblies. Furthermore, vortex solitons are obtained for isotropic lattices, which shows similarities with results previously found in optics, thermal media and quantum plasmas. The paper describes the main parameters defining the existence of these solitary waves, and verifies the analytical predictions through numerical simulations. Results show the validity of obtained solutions and illustrate the main characteristics of the solitary waves found in the considered nonlinear mechanical lattice. The study provides an analysis of the physics of waves in nonlinear systems, and may lead to novel designs of devices that can be used for high-performance waveguides.
Introduction
The interest in waves propagating in nonlinear and dispersive media was prompted by Scott Russell [1] who observed in a water channel what he called wave of translation, a single wave in the surface of water that can travel undisturbed over very large distances.In 1895, Korteweg and de Vries developed a model (KdV) for waves on shallow water surfaces that could explain this observed phenomenon, since then called soliton or solitary waves [2].In a general sense, the term soliton is used to refer to any localized field that does not change during propagation due to a balance between dispersive and nonlinear effects.Soliton solutions have been also found in magnetic fields [3], electrical networks [4], or in biological systems [5,6], among others.In the field of nonlinear optics, the study of solitons has been particularly active [7].The balance between nonlinear effects and diffraction (spatial soliton), or between in periodic media is a very active research field.These systems can be widely used in mechanical, acoustic, electromagnetic, electronic and opto-mechanical applications.For one dimensional (1D) mechanical lattices, Zabusky and Kruskal [10] first showed that the continuum limit of Fermi-Pasta-Ulam lattice [11] was the KdV equation [2], and was therefore susceptible to admit soliton-like solutions.The Toda chain, with neighbor interaction [12] also constitutes a paradigmatic example of 1D lattice able to develop solitons; the Toda chain has motivated numerous works in this field [13][14][15].Also, Iizuka et al. [16] studied the wave propagation in random lattices with different nonlinearities, leading to soliton solutions.Recently, Nadkarni et al. [17] studied a 1D periodic chain of bistable elements; the continuum limit approach of this system in the long wave-length regime leads to a 1D nonlinear Klein-Gordon equation which, under certain conditions, exhibits the existence of bright and dark solitons.
In two dimensional (2D) lattice systems, it is necessary to cite the work by Toda [18], who studied, for the first time, the presence of solitons in (2D) mechanical systems.Later, Sreelatha and Joseph [19] presented a continuum treatment of soliton behavior in these kind of lattices.Interestingly, intense focussing effect and the generation of compact acoustic pulses (sound bullets) have been created by [20] by using a tunable, nonlinear acoustic lens, which consists of ordered arrays of granular chains.In contrast to optical bullets that are theorized to reside in optically dispersive/diffractive media, sound bullets arise when solitary waves created within the nonlinear granular medium coalesce in a linear nondispersive medium.Wang and Liu [21] investigated the dynamics of a two-dimensional system with nonlinear interactions, finding solitary wave solutions in certain directions for an hexagonal configuration.Deng et al. [22] found vector solitons in a structure comprising a network of squares connected by highly deformable ligaments.In any case, the number of studies devoted to solitons in 2D mechanical systems is comparatively smaller.
The paper presents a model for a 2D anisotropic square lattice resting on a weakly nonlinear substrate, with masses being submitted to out-of-plane displacement.The low-order Taylor series expansions of the discrete equations lead to a continuum representation that holds in the long wavelength limit.The continuum nonlinear equations are solved using a multiple-scale expansion [23].The leading order solution consists in a carrier wave modulated in amplitude by different solitons, which are derived from 1D or 2D nonlinear Schrödinger equations of the third order.These solitons result from the balance of dispersive and nonlinear effects in the lattice, setting aside other relevant phenomena in 2D waves such as diffraction that, as previously stated, may lead to a field that does not change during propagation in nonlinear media.The study shows that the propagation of bright and dark solitons is essentially one dimensional, although the lattice anisotropy affects their orientation with respect to the carrier wave.The existence of a third soliton type, namely the vortex soliton, previously identified in optics [24][25][26], thermal medium [27] or quantum plasmas [28], has specific bi-dimensional features.The propagation of these solitons in the lattice is verified by numerical solution of the discrete equations.The simulations provide the opportunity for the analysis of the physics of the considered nonlinear lattice, which may suggest exploiting of solitary waves for novel devices and waveguides.
The paper is organized as follows.After this introduction, Section 2 describes the lattice configuration, its dispersion properties and the continuum approximation adopted for the subsequent analytical solutions, which are presented in Section 3. Next, Section 4 presents the numerical simulation of the nonlinear lattice response, while Section 5 summarizes the main findings of the study and provides recommendations for followon investigations.
Lattice configuration and governing equations
We consider the propagation of waves in a spring mass lattice of square topology assembled in the , plane (Fig. 1(a)).The lattice consists of masses that undergo motion only in the direction perpendicular to the lattice plane, i.e. the direction in Fig. 1.Linear springs with constants and connect neighboring masses in the and directions respectively, and provide a restoring force in direction , proportional to the relative vertical displacement between adjacent particles.In addition, a mass in the generic position , within the lattice is connected to the ground by a nonlinear spring.The behavior of this nonlinear spring is defined by a potential ( ) , with denoting the direction displacement of the th mass.Thus, the force exerted by the nonlinear spring is generally expressed as: and the Hamiltonian of the lattice is given by: ] .
Accordingly, the momentum balance equation of the mass at location , is: We expand the force of the ground spring up to the third order: where 1 is the linear force coefficient while the coefficients of the nonlinear terms are obtained as: , 3 = − 1 6 Based on this definition, the momentum balance equation can be rewritten as follows: where the following parameters and variables are introduced: with
Linear dispersion relation for the lattice
We first describe the linear dispersion relations of the lattice, which serve as reference and provide basic information on the propagation of plane waves.To this end, a linear approximation for the force in the ground spring is considered, i.e.: where, based on the definition of the force potential in Eq. ( 4), the relations f (0) = 0 and A plane wave solution is imposed in Eq. ( 7) of the form: where 0 is the magnitude, and is the frequency associated with the wave vector = + .Also, , denote the position of the As customary in the analysis of periodic systems, the assumed plane wave solution is rewritten in terms of a non-dimensional wave vector = + = + .Substituting the resulting plane wave expression into the governing equation (Eq.( 7)) and further introducing a non-dimensional frequency: leads to the following dispersion relation for the linear lattice: A final parameter is introduced to account for the difference in the spring constants along the and directions.Namely, letting: gives: Examples of dispersion relations are presented in Fig. 2, where the surface defined in Eq. ( 15) is represented in the form of iso-frequency contours.The case of = 1 (Fig. 2) is characterized by iso-frequency contours which in the long wavelength regimes appear as circular as a result of the isotropic nature of the lattice.In contrast, the case of = 10 (Fig. 2) illustrates the difference in propagation which manifests itself even at low frequencies corresponding to the long wavelength regime.This frequency/wavenumber range, and the considerations herein are of importance in the context of the continuum approximation for the behavior of the lattice, and the analysis of the propagation of solitary waves based on such approximation, which is described in the next section.
Continuum limit and long wavelength approximation
The limiting case of long wavelengths is particularly interesting since the displacement amplitudes change only slowly from mass to mass.The lattice size has then no part to play and transition to a continuum is possible.Then, continualization may serve to provide a connection between parameters of discrete and equivalent continuum media, and to identify general properties of the discrete mechanical system.In the continuum limit, the displacement of first neighbors can be written using a Taylor expansion centered at the particle , , thus 4 12 4 12 where u = u(, ) denotes the continuous variable approximation of the out-of-plane motion of the lattice.The continuum counterpart of the governing Eq. ( 7) becomes 4 12 4 12 Assuming a plane wave solution in the linear regime, f (u) = −u, leads to the dispersion relation for the continuum which is given by: Upon truncation to the first Taylor expansion term in Eqs. ( 16), (17), which restricts the study to the long-wavelength, i.e. = √ 2 + 2 ≪ 1, Eq. ( 18) reduces to: The resulting equation also governs the dynamics of a membrane lying on a nonlinear foundation, which is submitted to different prestress in the directions and .Thus, the results presented next are readily transferable to membrane-type systems consisting in ultra-thin sheets bonded or in contact to elastomeric or fluid substrates as found for example in MEMs applications [29,30].Else, it may guide the definition of an experimental set-up for the verification of the solutions found herein, which may be the object of future investigation.Within the longwavelength approximation, the coefficient defined in Eq. ( 14) can be considered as an anisotropy index.Thus, in the isotropic case ( = 1), Eq. ( 20) is recognized as a 2D nonlinear Klein-Gordon type, which arises in various problems in science and engineering [31].
The linear approximation of the ground interaction leads to the corresponding dispersion relation which has the well-known form: Fig. 4 compares the approximate continuum dispersion relation in the long-wavelength regime, with the exact discrete one.The difference between both surfaces are small for | | ≪ 1, | | ≪ 1, which specifically illustrates the range of validity of the first order approximation, expressed in Eq. ( 20) and employed in the remainder of this paper.As expected, the higher order continuum approximation provides a better estimate of the dispersion relations, and therefore of the dynamic behavior of the lattice, over a broader range of wavenumbers and frequencies.
Multiple scale solution of the continuous equations
The continuous approximation truncated to the first order is employed as the basis for the analysis of the considered lattice in the presence of nonlinearities.To this end, we restrict the analysis to the case of weak nonlinearities so that a perturbation approach can be employed.
The considered continuum equation (Eq.( 20)) is further simplified by introducing the following set of non-dimensional variables: The proposed change of coordinates {, } constitutes a uniform scaling which preserves angles between lines.
For moderate displacement amplitudes the nonlinear force is approximated by a third-order Taylor expansion and the governing equation becomes with
Multiple-scale expansion
Eq. ( 23) is solved by considering a multiple-scale expansion [23], which leads to a solution process similar to the one previously employed for the 1D version of this problem in Nadkarni et al. [17].Consider the third-order ansatz: where ≪ 1 is a book keeping parameter that enforces that wave motion occurs for small amplitude and therefore that the results obtained below hold in the weakly nonlinear regime.Also, the following scaled spatial and temporal scales are introduced, Accordingly, the unknown functions in the solution expansion depend on the scaled variables, i.e.
Solution of the ordered equations
Substituting the ansatz (25) in Eq. ( 23) and equating terms of like order (up to 3 ), leads to the following set of ordered equations: • Order 2 : • Order 3 : where ℒ[⋅] is the following linear differential operator, corresponding to the linear governing equation expressed in Eq. ( 20): (31)
Solution for order 𝜀 1
The solution for the first order equation is in the form of a plane wave with unknown amplitude modulation, and is expressed in the following form: In Eq. ( 32) above and in the remainder of the paper, c.c. denotes the complex conjugate, while the parameter , which corresponds to the imposed phase modulation, is introduced for convenience: Here, , and are related by the linear dispersion relation corresponding to the considered continuum description, which is expressed in Eq. ( 21).
Solution for order 𝜀 2
Following the solution for order 1 (Eq.( 32)), Eq. ( 29) can be rewritten as where the coefficients of i in Eq. ( 28) are resonant and therefore lead to a secular term which is set equal to zero: Considering the ansatz for the non-homogeneous solution of Eq. ( 34) and equating oscillating and non-oscillating terms gives: and finally The non-oscillating term in 1 accounts for the potentially asymmetric response of the system due to an eventual non-nil value of the coefficient 2 .According to Eqs. ( 6) and ( 24), 2 represents the curvature of the force F (ū) at the origin, whose sign defines whether softening appears in compression or in tension.
Solution for order 𝜀 3
Substituting the solutions for 0 and 1 in Eq. ( 30) yields where the secular term in Eq. ( 35) is enforced, and where, again, the secular terms associated with the resonant term i in Eq. ( 30) are set to zero, which gives: 2i Finally, the following ansatz: with is imposed in Eq. (39), which gives the following solution for order 3 :
Soliton solutions for the wave envelope
The solution of Eqs. ( 35) and (40) resulting from conditions on the secular terms provide the envelope function ( 1 , 2 , 1 , 2 , 1 , 2 ) which modulates the plane wave i in Eq. (32).Seeking for wave solutions of propagating with the group velocity v , we will transform the Eqs.( 35) and (40) by first representing space position -at the different orders -in a fixed coordinate system { , } with the first axis aligned with v (see Fig. 5) and subsequently using the Galilean transformation of group velocity to the coordinate aligned with v In terms of the variables , and , Eq. ( 35) reduces to thus does not scale with in time.Similarly, Eq. ( 40) becomes Therefore, the solution of Eq. (47) provides the shape of the envelope in the frame of reference { , }, which propagates with the group velocity v .
Bright and dark solitons
We first seek for a solution of Eq. (47) that does not vary in the direction 1 , therefore with constant amplitude along the direction normal to the group velocity.It is worth noting that these characteristics will remain in the reference {, } since the change of coordinates given by Eq. ( 22) preserves angles.
Then, the propagation of the soliton reduces to a one-dimensional problem.This leads to the 1D nonlinear Schrödinger (NLS) equation [32]: where in this case After some algebra thus is positive.
It is worth noting that Eq. ( 48) corresponds to the one found and solved by Nadkarni et al. [17] for a weakly nonlinear 1D lattice containing bistable elastic elements.The dispersive and the nonlinear terms in Eq. ( 48) can balance and yield to solitons.Depending on the sign of the parameter in the nonlinear term, the NLS equation becomes focusing for > 0, or defocusing for < 0 [33].The focusing equation possesses soliton solutions decaying to a nil background state, called bright solitons as they are localized traveling 'humps'.The defocusing equation admits soliton solutions on a nontrivial background, called dark as they are localized traveling 'dips'.
Following the approach proposed by Remoissenet [34] for the case > 0 (focusing medium), and back-substituting 2 t, leads to the expression defining a bright soliton, which consists in a harmonic carrier wave i modulated by an envelope given by: where the subscript '' denotes the 'bright' soliton type, and where s and ω are respectively given by: and Thus, the solution to the leading order can be expressed as where = is the wave amplitude.Eq. (54) describes a solitary wave defined by an envelope that propagates in the direction and speed defined by the group velocity.The envelope modulates a carrier wave defined by a wave vector that is in general not aligned with the group velocity v , and that is related to the frequency through the dispersion relations of the medium in the considered form.The frequency of the carrier wave is also modified by a (small) factor ω that depends on the amplitude squared.Additionally, in the long wavelength regime the group velocity is expected to be smaller than the phase velocity since, according to Eq. ( 21), the slope of the dispersion surface tends to zero when → 0. Therefore the envelope will be slower than the carrier wave.According to Shabat and Zakharov [35] the bulk energy is contained in the soliton, which propagates with practically permanent shape leaving behind a dispersive tail.Moreover, any initial localized disturbance eventually evolves into a bright soliton, as also stated by Nadkarni et al. [17].Fig. 6 shows the displacement solution (3D view and cross section along the ȳ = x line at a given instant of time) provided by Eq. (54).This case illustrates a situation where the group velocity and the wave vector are not aligned, which occurs in the case when the lattice is not isotropic, i.e. when ≠ 1.Although the solution process follows closely the steps for the 1D soliton predictions [17] the misalignment presented above predicts a situation which is unique to the anisotropic 2D lattice considered herein and has no 1D counterpart.The validity of the solution in Eq. ( 54) for non-isotropic lattices will be tested through numerical simulations as described in Section 4. In the case < 0 (defocusing medium), the solution of Eq. ( 48) is [34]: where subscript '' denotes a 'dark' soliton, with The solution to the leading order becomes where = is the wave amplitude.This solution, originally found by Shabat and Zakharov [35], predicts a line of nil value of the amplitude for s − v t = 0, and thus travels at the group velocity.Since the sign of the hyperbolic tangent is an antisymmetric function at zero (odd function), its sign swapping forces a phase inversion in the carrier wave along the line s − v t = 0. Fig. 7 shows the displacement solution provided by Eq. (57), again for the case of an anisotropic lattice.Specifically, the cross section along the ȳ = x line at a given instant of time illustrates the anti-symmetric shape of the wave enforced by the dark soliton.
Vortex soliton
A third soliton type is investigated for the case of the isotropic lattice ( = 1, or 0 = 0 = 0 ).This limiting condition is imposed because the equation corresponding to the secular terms given in Eq. (47) reduces to which, upon the following change of variables can be recognized as 2D NLS equation of the third-order [36], of the form: In polar coordinates, Eq. ( 60) becomes: For < 0 we get the defocusing 2D NLS equation [33] which supports dark vortex solitons of the form where subscript '' denotes a 'vortex' soliton.The signed integer (taken here as ±1 for vortex stability) is called the topological charge, and is the frequency of the vortex.Substituting the previous ansatz in Eq. ( 61) leads to a nonlinear Bessel ordinary differential equation in terms of the vortex amplitude function (): 1 2 The sign of the topological charge does not modify the previous differential equation, but defines the chirality of the vortex.The unknown function is obtained by imposing as boundary conditions a nil amplitude at the origin (0) = 0, and a constant background amplitude far from the vortex core, ( → ∞) = ∞ .This value is evaluated by assuming that the amplitude of the envelope is stationary at infinity.This corresponds to letting ′ , ′′ → 0 for → ∞, which gives a background amplitude equal to: thus must have the same sign as (negative) for an envelope solution consistent with the singular boundary condition.A closed-form analytical expression for the boundary-valued problem in Eq. (63) does not exist, but a numerical solution can be found by transforming it into an initial value problem at = 0 and using the shooting method [32].It is worth pointing out that, for = ±1 , the numerical solution can be suitably fitted with a hyperbolic tangent , where 0 is a parameter that fits the numerical solution of Eq. (63).
Finally, the solution to the leading order becomes: with ω = 2 , and = ∞ .The polar coordinates and can be transformed back into cartesian coordinates x and ȳ with the expressions (59) and ) Fig. 8 shows the displacement solution provided by Eq. ( 64), where the vortex center is located at the origin and propagates with the group velocity, along with the corresponding cross section along the line ȳ = 0.
Discrete lattice configurations and integration scheme
The analytical predictions presented in the previous section are validated through comparison with the numerical simulation of the response of the discrete lattice.The nonlinear equations of motion for a finite lattice consisting of 1000 × 1000 masses are assembled and subsequently integrated in time using the Verlet integration scheme [37].The is considered in a free-free configuration, so none of the masses are constrained.
The perturbation to the lattice is applied in the form of initial conditions that correspond the various soliton solutions.Such solutions are applied to the entire extent of the upon the application of a spatial window that modulates to zero the initial displacements and velocities at the boundaries.This allows the observation of the propagation within the lattice of the initial perturbation for a sufficient time before the wave interacts with boundary reflections.Thus, a direct comparison between analytical results and numerical predictions can be performed in the absence of boundary effects.In general, the applied initial conditions are expressed as: where , = 1, … , , = 1000.Also in Eq. ( 67) the subscript identifies the type of soliton considered: ''-bright, ''-dark, or ''-vortex according to the solutions presented in the previous section.Moreover, the coordinates ( 0 , 0 ) define the center of the soliton at = 0, while denotes the applied spatial window, which in this case consists in a 2D generalized Gaussian window centered at 0 = 0 + 0 , which is defined as: ) where = + is the position of the particle, defines the width of the domain along the direction , while is the number of particles along , which in this case is either perpendicular to the soliton or parallel to it.The resulting window bounds the initial conditions within a box of defined size.Finally, is the parameter describing the generalized Gaussian window, with = 2 providing the normal distribution, while > 2 provides increasingly sharper edges.
Based on the analytical derivations presented in the prior section, solitons travel at the group velocity , whose components in the long wavelength approximation can be expressed as: where = || is the magnitude of the wave vector of direction .Accordingly, the soliton propagates in the plane in a direction , which is given by: This expression illustrates the potential misalignment of the wave vector and soliton direction of propagation that occurs when the lattice is non isotropic, i.e. for ≠ 1.The difference in propagation direction of the soliton and wave vector are illustrated schematically in Fig. 9.Note that based on Eq. ( 69), misalignment between wave vector and soliton occurs only when ≠ 0, 2 , which correspond to propagation along the and directions respectively.In these cases, the soliton is always aligned with the wave vector regardless the level of anisotropy given the fact that for the type of lattice considered herein, propagation along the principal lattice directions appears unaffected by properties in the direction perpendicular to them.
Bright and dark solitons
Results and comparisons are first presented for the bright and dark solitons described in Section 3.3.1.In all simulations in this section and in the remainder of the paper, results are found for unitary mass = 1, and assume = 1 as inter-mass distance.Also for both sets of simulations considered in this section, the parameters of the lattice are set such that 1 = 10 and = 20, and the imposed wave vector has magnitude = 10 and direction = 4 .While these parameters are chosen somewhat arbitrarily, the choice of wave vector amplitude is motivated by the need to ensure that the simulations are conducted within the limits of the long wavelength approximation.Simulations are then conducted for a variety of values of the anisotropy parameter , which allows evaluating the solution in the presence of directional misalignment between wave vector and soliton propagation.The bright soliton simulations, presented first, are conducted for parameters defining the nonlinear ground potential 2 = 5, 3 = 5, which lead to > 0. Results are first presented for the case of = 1 with a soliton originating at particle located at coordinates 0 = 0 = 500 (where 500 denotes the particle number along the , directions), which illustrates the propagation along the prescribed 45 • line given the isotropy of the lattice.The considered Gaussian window limits the extent of the soliton in the direction perpendicular to its propagation only, given that the bright nature of the soliton naturally limits its spatial extent along the propagation direction.The Gaussian window is defined by = 2 and = 200, and it is centered at the maximum of the soliton.Fig. 10 shows snapshots at the beginning of simulations and at = 0.3 , where denotes the last time instant computed by the time integration scheme.This simulation time is chosen based on a estimated time or arrival to the boundary of the domain.It is computed as = ∕( cos ), with denoting the distance from the initial location position of the soliton ( 0 , 0 ) to the nearest boundary.Given the chosen set of parameters, it is here equal to ≈ 390 s.These surface plots illustrate that the shape of the soliton is maintained during the propagation, which as expected occurs along the 45 • direction.A different representation of the results is given in Fig. 11, where the colored contours of the propagating wavefield show the spatial extent of the soliton, along with the equal phase contours, which in this case of = 1 are perpendicular to the propagation direction (Fig. 11).Such direction is also represented as a thin black line superimposed to the wavefield contours.The interpolated cross-section of the wavefield along this propagation line allows the direct comparison between numerical predictions and analytical solutions which are shown in Fig. 11.Specifically, the blue line corresponds to the results of the numerical simulations, while the red envelope is obtained from the analytical solution of Eq. ( 51).This solution is propagated in time according to the soliton group velocity , computed from Eq. ( 68).The comparisons in Fig. 11(b), (d) demonstrate that the analytical procedure presented in this paper correctly predicts the existence of bright solitary waves, and evaluates its amplitude and speed of propagation, which match the predictions of numerical simulations.
A second set of results is here presented for a lattice characterized by ≠ , and specifically for = 10.The simulation results and the comparisons with analytical predictions are presented in Fig. 12, using the same formats previously described.In this case, one can observe the existence of a misalignment between wave vector, directed perpendicularly to the lines of constant phase in the wavefield, and the direction of propagation, which is the result of the lattice anisotropy along its principal directions as defined by = 2.While the wave vector is imposed to be still directed along the = 45 • direction, the soliton propagation occurs at an angle ≈ 84 • , as given by Eq. ( 69).This misalignment does not have a 1D counterpart.The results from numerical simulations, of which only a representative subset is presented herein, confirm that analytical predictions hold also for nonisotropic 2D cases.
Next, the case of the dark soliton is investigated.The dark soliton solution exists for < 0, which we impose by selecting a nonlinear force coefficient 2 = 0.5.All other parameters in the simulations are kept the same.The windowing function in this case is applied in both perpendicular and parallel direction to the soliton, as the harmonic spatial content would otherwise cover the entire spatial domain and reflections would begin as soon as simulations are started.To this end, a Gaussian window with = 8 and = 400 is applied in addition to the transverse Gaussian window previously considered for the case of the bright soliton.The propagation corresponding to imposed initial displacements and velocities corresponding to a dark soliton (Eq.( 57)) is first illustrated in the form of snapshots surfaces, which are shown at the initial time instant in Fig. 13(a), and at about = 0.3 in Fig. 13(b).The surface plots show the propagation of the dark soliton along the expected direction.
As in the case of the bright soliton, the contour representation of the dark soliton propagation highlights the wavefield, the soliton direction of propagation, and shows the line along which the numerical solutions are interpolated for the subsequent comparison with the analytical envelope predictions.The plots for the isotropic case ( = 1) are presented in Fig. 14, while the results for a non-isotropic lattice, now with = 4, are displayed in Fig. 15.Both configurations demonstrate a good agreement between the numerical results and the analytical envelope predictions provided by Eq. (55), which again hold even when the soliton propagates at approximately ≈ 76 • for a wave vector along the 45 • .
Vortex soliton results
As a final set of tests, numerical simulations were conducted for the case of the vortex soliton predicted from the analytical solution in Eq. (64).As in the previous examples, initial conditions for displacements and velocities were applied, along with a prescribed wave vector.Given that the obtained solution holds only for isotropic lattices, the direction of propagation is aligned with the soliton, and propagation occurs equally in all directions.Thus, the case of wave vector direction = 0 is considered among all possible choices.The initial conditions were again windowed with the chosen generalized Gaussian function, which was set with = 8 along the propagation direction, and = 2 in the direction to it.This choice allows the propagation of the soliton undisturbed by the boundary reflections, while minimizing the distortions coming from the sides.The width of the windows was set to 200 particles in both directions to minimize distortion of the center of the soliton located at the center point of the domain in the initial time.
The results for two vortex solutions are presented in what follows, specifically for the cases of = −20 and = −10.In both instances, the analytical solutions expressed by Eq. ( 64) and the associated time derivatives are applied as initial conditions.The corresponding values of the fitting parameter 0 are provided as obtained from the application of the shooting method mentioned in the theory section.These values are 0 = 0.28397 and 0 = 0.401599 respectively for = −20 and = −10.As in previous examples, the long wavelength approximation is enforced by letting = ∕10, with = 0 for the considered propagation direction.The value of the topological charge was chosen to be = −1.The case = +1 was also evaluated, and is here not presented as the results do not appear qualitatively different.The relative value of the topological charge is instead of importance in the study of the interaction between multiple solitons, whose combination may either increase or reduce the stability of propagation of individual solitons.The results for = −20 are presented in Figs.16 and 17, which show surface plots corresponding to snapshots at two instants of time, and the direct comparison with analytical predictions along the middle line parallel to the direction of propagation.Similar results for = −10 are presented in Figs.18 and 19.The results show a good correlation between analytical envelope predictions and numerical simulations, both in terms of amplitude and speed of propagation.The numerical results show distortions of the solitons along the direction perpendicular to the propagation direction, along with the back propagation of a dispersive wave, which is particularly noticeable for the case of = −10 (see Fig. 18).The generation of this back propagating wave, its relation with the stability of the single soliton, and the influence of its characteristic parameters are currently under investigation.However, the presented results illustrate the existence of this form of solitary wave, which has no 1D counterpart, and which is well predicted by the analytical framework developed in the paper.
Conclusions
The paper investigates the existence of solitary waves in twodimensional nonlinear lattices.The lattices are composed of linear springs connecting nearest neighbors, and of nonlinear springs connecting masses to the ground, leading to forces expressed as polynomials in terms of the out-of-plane displacement.Solitary wave solutions are found from the solution of continuum equations derived in the long wavelength limit.Application of the method of multiple scales, within the assumption of weak nonlinearities, leads to an ordered set of equations and to corresponding secular terms that express differential equations in terms of the envelope modulating the propagation of plane waves.These equations admit solitary wave solutions of three kinds, corresponding to bright, dark and vortex solitons.In the case of the bright and dark solitons, the solution approach follows previous procedures used for the derivation of solitary waves in one-dimensional lattices, but predicts 2D wave effects such as the misalignment between wave vector and group velocity.This occurs when the lattice is non isotropic.The case of the vortex soliton is also derived from the solution of a NLS obtained from the secular terms in the perturbation solution, which holds for isotropic lattices.The validity of the analytical solutions and the existence of the solitary waves is verified through numerical simulations of the discrete lattice response, where analytical solitons displacement and velocity are imposed as initial conditions.Comparison between numerical results and analytical solutions confirm the existence of the solitary waves for a variety of lattice parameters and propagation directions.The presented analysis and numerical results illustrate the behavior of solitary waves of various kinds in a prototypical nonlinear mechanical lattice, and describe effects which are unique to a 2D domain.Further investigations will extend the study to consider the behavior of multi-vortices and their interaction, the effect of anisotropy, and consider potential application of these configurations for the nondispersive transfer of signals and information.
Fig. 2 .
Fig. 2. Dispersion relations for the linear lattice (with 0 = 4.5) represented as iso-frequency contours: isotropic lattice ( = 1) (a), and anisotropic lattice ( = 10) (b).Color bars correspond to values of the non-dimensional frequency .(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 3 .
Fig. 3. Dimensionless dispersion diagram for the linear lattice (with 0 = 4.5) along the edges of the irreducible Brillouin zone (IBZ) for two different values of anisotropy: = 1 (blue, solid) and = 10 (orange, dashed).Schematics of the first Brillouin zone is depicted below.
)Fig. 4 .
Fig. 4. Comparison of linear dispersion relations: discrete system (solid line), and continuum approximation (dotted lines) truncated to the first order, see Eq. (21) (a) and to the second order, see Eq. (19) (b).Color bars correspond to values of the non-dimensional frequency .(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 9 .
Fig. 9. Schematic of directions of soliton propagation and its relation with the direction of the wave vector .Lines of constant phase for the wave are shown as thin black lines, while the soliton is represented by the thick red line, and its direction of propagation is defined by the thick red arrow perpendicular to the soliton line.(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 11 .
Fig. 11.Contour of numerically predicted bright soliton wavefields for = 1 and direction of propagation considered for comparison with analytical results (black thin line): time = 0(a), = 0.3 (c).Numerical integration results (blue line) and analytical envelope evaluated from Eq. (51) (red line): time = 0(b), = 0.3 (d).(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 12 .Fig. 13 .
Fig. 12. Contour of numerically predicted bright soliton wavefields for = 10 and direction of propagation considered for comparison with analytical results (black thin line): time = 0 (a), = 0.4 (c).Numerical integration results (blue line) and analytical envelope evaluated from Eq. (51) (red line): time = 0 (b), = 0.4 (d).(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 14 .
Fig. 14.Contour of numerically predicted dark soliton wavefields for = 1 and direction of propagation considered for comparison with analytical results (black thin line): time = 0 (a), = 0.3 (c).Numerical integration results (blue line) and analytical envelope evaluated from Eq. (55) (red line): time = 0 (b), = 0.3 (d).(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.)
Fig. 15 .Fig. 16 .
Fig. 15.Contour of numerically predicted dark soliton wavefields for = 4 and direction of propagation considered for comparison with analytical results (black thin line): time = 0 (a), = 0.4 (c).Numerical integration results (blue line) and analytical envelope evaluated from Eq. (55) (red line): time = 0 (b), = 0.4 (d).(For interpretation of the references to color in this figure legend, the reader is referred to the web version of this article.) | 2019-04-22T13:12:16.003Z | 2018-11-01T00:00:00.000 | {
"year": 2018,
"sha1": "3a5c5a79efef78da184e8fa1989ee0fb67954638",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.ijnonlinmec.2018.08.002",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "8dfc0c1bf8a9df88cb31662fd222a25e33e0dede",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
119234803 | pes2o/s2orc | v3-fos-license | The Faint Young Sun and Faint Young Stars Paradox
The purpose of this paper is to explore a resolution for the Faint Young Sun Paradox that has been mostly rejected by the community, namely the possibility of a somewhat more massive young Sun with a large mass loss rate sustained for two to three billion years. This would make the young Sun bright enough to keep both the terrestrial and Martian oceans from freezing, and thus resolve the paradox. It is found that a large and sustained mass loss is consistent with the well observed spin-down rate of Sun-like stars, and indeed may be required for it. It is concluded that a more massive young Sun must be considered a plausible hypothesis.
Introduction
The young Sun started its life on the main sequence with about 70% of the luminosity of what it has now according to standard stellar evolution theory. It is still a scientific riddle how, with such a faint Sun, the young Earth could be warm enough to host liquid water in its first couple of billion years. Yet geological evidence clearly indicates there have been warm oceans from very early on (Kasting 1989), and that these oceans were a key ingredient in the development of life. This is called the Faint Young Sun Paradox. The paradox is even more compelling for the planet Mars which we know now to have been covered with oceans for periods of hundreds of millions of years in its early life, with only half of the incoming energy flux of sunlight of the Earth. Stellar evolution simulations dictate his paradox and it therefore applies to all G stars, and less so for K and M stars that evolve much slower. In all cases the habitable zone around the star gradually moves outwards and planets that started out balmy are expected to end up scorched. Given that it took about four billion years on planet Earth for the development from single cell organisms to multi-cellular life -and since that is the only example of evolution we have -it is a reasonable assumption that the development of multicellular intelligent life takes a very long time in general, with most G star planets not spending enough time in the habitable zone. This paradox has been known for a long time, and one of the first to hint at a solution was well known science popularizer Carl Sagan (Sagan & Mullan 1972). Many solutions to the Faint Young Sun Paradox have been proposed over the years, and they come from very different fields. Fairly straightforward proposals are an enhanced greenhouse effect by carbon dioxide or methane, geothermal heath from an initially much warmer terrestrial core, a much smaller Earth albedo, life developing in a cold environment under a 200 meter thick ice sheet, a secular variation in the gravitational constant, etc. Most of 2 Petrus C. Martens these models have serious shortcomings: For example the greenhouse effect from methane appears to be self-limiting, and not enough CO2 is indicated by the geological record to justify a greatly enhanced greenhouse effect in the past (Kasting, 2004) There is not enough space in a proceedings paper to review all the material discussed above, so I refer to a recent review by Feulner (2012) and a series of very enlightening presentations and papers by Dr. James Kasting (Kasting, Toon & Pollack 1988;Kasting, 2004) on the many hypotheses proposed to resolve the Faint Young Sun Paradox.
Most of the solutions proposed for the Faint Young Sun Paradox apply to Earth alone; they do not explain the presence of liquid oceans on early Mars. Also, they are not solutions for the Faint Young Stars Paradox in general. A simpler solution has been proposed in that the early Sun was more massive and hence more luminous. This necessitates a massive, sustained solar wind for the first billions of years of the Sun's evolution, a condition that most in the community find implausible.
In the current paper I will explore the hypothesis of a more massive and hence much brighter young Sun in more detail, and investigate whether this leads to logical contradictions or can be ruled out by observations. I will find, surprisingly, that a more massive young Sun is not implausible at all, and links together what we know about stellar spin-down with simulations of the same. The results of this paper do not prove that the young Sun was more massive -we would need observations that demonstrate the sustained presence of a massive solar wind. But it does show that such a hypothesis cannot be ruled out at present, and consequently that the presence of oceans on early Mars may not be a conundrum, and that the habitable zones around solar analogs as well may remain in place for the billions of years it takes for multi-cellular and intelligent life to develop.
Mass Loss and Luminosity
A slightly more massive Sun would be significantly more luminous: In the solar portion of the Hertzsprung-Russel diagram luminosity scales with mass to the power 4 to 5. If the Sun were more massive earlier on the Earth would be closer in as well: Because of conservation of it angular momentum the mean Sun-Earth distance varies as the inverse of the mass of the Sun, while the incoming radiation at the top of the Earth's atmosphere scales with the inverse of the square of that distance. Hence the amount of radiation the Earth receives varies as the mass of Sun to the power 6 to 7. A 30% less luminous Sun at the Zero Age Main Sequence (ZAMS) at one solar mass could be compensated for by a mere 4 to 5% mass increase going back in time from the current Sun.
A sustained solar mass loss rate of roughly 10 −11 M ⊙ yr −1 is required to accomplish that. The current solar mass loss rate is estimated at 2-3×10 −14 M ⊙ yr −1 for the fast wind and 10 −15 M ⊙ yr −1 for Coronal Mass Ejections (CMEs). Interestingly the mass loss from photon emission is twice as large, around 7×10 −14 M ⊙ yr −1 , but the latter contributes much less to angular momentum loss, because the photons are not forced to co-rotate with the magnetic field.
So the current mass loss rate, if extrapolated to the past, is insufficient to resolve the Faint Young Sun Paradox by a factor of 300 or more. Hence we must assume a much higher mass loss rate for the young Sun, sustained for several billions of years. Observations of some young solar-type stars indicate mass loss rates of roughly the right magnitude: e.g. 70 Opiuchi with a mass of 0.92 times that of the Sun, and an estimated age of 0.8 billion years, has a mass loss rate of 3×10 −12 M ⊙ yr −1 , and κ −1 Cet (Do Nascimento et al. 2016) yields the same result from X-ray calibration.
A much more detailed analysis than the back-of-the-envelope calculation above, by Minton & Malhotra (2007), narrows down the mass loss rate constraint further. Minton & Malhotra calculate the solar mass and hence mass loss rate required to keep the radiative equilibrium temperature of the Earth's atmosphere at 273 • K, the freezing point of water, during solar evolution. It is plausibly assumed that the greenhouse effect will add about 15 • K to that, as it does at present, to achieve the average atmospheric temperature we have now, that is favorable to life. The result of their analysis is, again, a required mass loss rate of about 10 −11 M ⊙ yr −1 , but it only has to be sustained for the first 2.4 billion years.
The choice of maintaining the radiative equilibrium temperature at or above freezing is a rational one, because at a lower temperature planet Earth could flip to an equilibrium in which all of the surface is frozen over -snowball Earth -where the albedo is much higher, because of all the ice and snow. The geological record indicates that several "snowball Earth" episodes have occurred in Earth's history -in addition to the much more recent ice ages, where there is no full planetary ice coverage.
As an aside, but in response to an obvious question: How does the Earth's atmosphere escape from a "snowball Earth" state? The answer probably lies in the addition of CO2 to the atmosphere from volcanic eruptions. A slow but steady addition of CO2 by volcanism and no uptake of CO2 by the weathering of rocks and diffusion into the oceans -all covered by ice -will eventually create enough of a greenhouse effect to initiate melting at equatorial latitudes, after which, via various feedbacks, melting will proceed precipitously.
Minton & Malhotra also point out that their model for "minimum mass loss", according to the simulations of Kasting (1991), maintains solar luminosity at a high enough level to keep the atmosphere of Mars above the freezing point for the first billion years of its history -when oceans are believed to have existed on Mars. So indeed a strong early solar wind can resolve the paradox for both Earth and Mars, no separate solutions are required, much to the liking of Father William of Occam.
Stellar Spin Down and Mass Loss
In this section I will relate stellar mass loss rates, which are hard to observe, with the much better known stellar spin down rates in order to verify whether these can be made consistent.
It is well known that Sun-like stars spin down from rotation periods of just a few days in their first billion years to several weeks in their mid-life, e.g. 26 days for the Sun at 4.5 billion years. The loss of angular momentum is usually ascribed to the torque applied by the stellar wind that co-rotates with the star near the surface and is forced to co-rotate roughly out to the Alfvén radius where the wind outflow velocity equals the Alfvén speed. Weber & Davis (1967) were the first to relate spin-down rates to a stellar wind model. Their model is of a purely radial magnetic field that changes polarity at the equator. Their key result that the torque applied by the wind on the star is to a good approximation given by where ω is the angular rotation rate of the star, R A the critical Alfvén radius where the wind outflow velocity equals the local Alfvén speed, andṀ is the stellar mass loss rate. The calculation of Weber & Davis includes at factor 2/3 on the right hand side resulting from the azimuthal integration of the torque, which I omit here for simplicity Physically interpreted this means that the stellar wind torque is roughly equal to the angular momentum of a stellar wind forced to co-rotate up to R A and then let go, flowing out further preserving its angular momentum. The result of Eq. (3.1) follows from the requirement that the solution flows smoothly through the critical point in the defining equations, much like the requirement for the critical point in the thermally driven Parker wind.
The location of the critical Alfvén radius in the Weber & Davis solution is 24 solar radii, while sophisticated numerical solutions (Keppens & Goedbloed 2000, their Fig. 3) yield 7 to 14 stellar radii in the segment of their solution with open field lines -where the stellar wind comes from. Recent observations for the Sun (Velli, Tenerani & DeForest 2016) also indicate that for the Sun the Alfvén radius is of the order of 12 radii out over the polar regions. So there is broad agreement between observations, theory and simulations here.
However, it turns out that the expression of Eq. (3.1) as defining the stellar wind torque is strongly dependent upon the geometry of the stellar coronal magnetic field. While the field in Weber & Davis (1967) is purely radial (reversing at the equator), that in Keppens & Goedbloed (2000) has a much more realistic "dead zone" over the equator, where the field is closed, while open field lines spread out from the poles. The dead zone in the simulations takes on a form very similar to observed solar helmet streamers, as observed during a solar eclipse.
The torque from the stellar wind in Keppens & Goedbloed is a factor 15 to 60 smaller than that given by Eq. (3.1) (a factor 10 to 40 compared to Weber & Davis), because of the difference in magnetic field topology. The same result had already been pointed out by Priest & Pneuman (1974), based on the helmet streamer geometry of Pneuman & Kopp (1971).
I will show now that this result has important implications both for the mass loss required for the Sun to slow down from its initial rotation rate at the ZAMS, to its current rate, and for the slow down of solar rotation in the remainder of the Sun's main sequence lifetime.
First we need to know the moment of inertia of a star to be able to estimate the slow down rate for a given stellar mass loss rate. The angular momentum of a star is given by where L ⋆ is the angular momentum, I ⋆ the moment of inertia, ω the rotation rate, M ⋆ the stellar mass, R ⋆ its radius, and β I R ⋆ the radius of gyration, with β I the gyration constant, i.e. the fraction of the radius for the arm in the moment of inertia. Stellar evolution codes show that the value of β I decreases a little after arrival of a star at the ZAMS, because of the production of Helium from lighter Hydrogen in the core (see Schrijver & Zwaan 2000, their Sect. 13.1). Interestingly then we would expect Sun-like stars to slowly spin up as they evolve on the main sequence if it weren't for stellar mass loss. Later, as the star evolves towards its giant phase, the moment of inertia greatly increases of course, but that is of no concern here. The typical value of β I for a Sun-like star on the main sequence is of the order of 0.25, the value I shall use from here on, and assumed to decrease much less than the rotation rate. The decrease in angular momentum of a star as it evolves. i.e.L ⋆ is of course equal to the torque applied to it by the stellar wind, i.e.
where f is the efficiency factor discussed above, determined by Keppens & Goedbloed (2000) to range from 1/60 to 1/15. When we write the Alfvén radius R A as a multiple of the stellar radius, α A R ⋆ a very simple expression results for the stellar mass loss that is required to produce the much better observed spin-down of late type stars after arriving on the main sequence,Ṁ The termω ω is simply the inverse of the e-folding time for the slow down in rotation of late type stars, which is of the order of 2-3 billion years, with not much variation between different stars (e.g. Nandy & Martens, 2007). Above I have found β I ≈ 0.25, α A ≈ 10, and f ≈ 1/30. Inserting these values into Eq. (3.4) we derive our main result, M ⋆ = −7.5 × 10 −12 M ⋆ yr − 1. (3.5) This represents the mass loss required to explain the observed stellar spin down rate by magnetic breaking. This mass loss rate also equals the mass loss required to resolve the Faint Young Sun Paradox, as discussed in the previous section. Indeed a very large mass loss, sustained for several billion years, is not just a possibility, but it may very well be required to explain the spin down of Sun-like stars. The analysis above also demonstrates that at its current mass loss rate of 2-3×10 −14 M ⊙ yr −1 our Sun will not slow down significantly for the remainder of its presence on the main sequence. This appears consistent with the observed rotation rates of older late type stars on the main sequence (Egeland, these proceedings). Figure 1. Helmet streamers observed during solar eclipses at solar minimum (left) and solar maximum (right). During maximum more streamers are present but their size is smaller than the ones at minimum.
Discussion and Conclusions
The current mass loss rate of 2-3×10 −14 M ⊙ yr −1 is not sufficient to slow down the Sun from an initial rotation period of 4-5 days to its current value : That would require an Alfvén radius of 170 solar radii, beyond the orbit of Venus. One might argue that the young Sun most likely had a much stronger magnetic field, which would increase its Alfvén radius. However, the simulations of Keppens & Goedbloed show that the slow down is not very sensitive to magnetic field strength: A stronger magnetic field is compensated by a larger dead zone, keeping the wind torque nearly constant.
Observations during solar eclipses even suggest a shrinking Alfvén radius with solar magnetic field. Fig. 1 shows juxtaposed helmet streamers observed during eclipses near solar minimum and solar maximum. The solar maximum image on the right shows more helmets, as expected, but of smaller size, with in particular the peak of the helmets closer to the solar surface. If, as in the simulations of Keppens & Goedbloed, the peak of the helmets approximately coincides with the Alfvén radius at that position angle, the Alfvén radius of the Sun, in its current phase of evolution, is indeed smaller at higher activity levels.
I conclude that a high mass loss rate is a reasonable hypothesis to resolve both the Faint Young Sun and the Faint Young Stars Paradoxes. Observations of mass loss rates of late type stars in the first billions of years after their arrival on the ZAMS are needed to verify this hypothesis, as well as, if possible, investigations of remaining signatures of the early solar wind. | 2017-06-04T02:24:56.000Z | 2016-10-01T00:00:00.000 | {
"year": 2016,
"sha1": "00c31b0fe0e1f084a5220c67abebc4fe5ce91249",
"oa_license": null,
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/966756200FFBF159E02846B909630E5C/S1743921317004331a.pdf/div-class-title-the-faint-young-sun-and-faint-young-stars-paradox-div.pdf",
"oa_status": "BRONZE",
"pdf_src": "Arxiv",
"pdf_hash": "00c31b0fe0e1f084a5220c67abebc4fe5ce91249",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
198417729 | pes2o/s2orc | v3-fos-license | Agricultural cooperatives and the challenge of social management: a study in the south/southwest region of Minas Gerais, Brazil
It is supposed that cooperative organizations can and should be relevant collective social actors for development. Nevertheless, are organizations submerged in a competitive economic system and therefore should act to survive, considering the rules imposed by the system. The aim of this paper is to analyze how agricultural cooperatives of the south/southwest region of Minas Gerais, Brazil, contribute to local development. The seventh cooperative principle was considered concern for the community, focusing on its relation with the social management of cooperatives. Results show weaknesses in articulating social management and poor accomplishment of the seventh cooperative principle.
Introduction
This work aimed to identify the activities that agricultural cooperatives carry out with regard to social management as well as the activities they perform for the community. The purpose of this study was to verify how is the organization of the cooperatives alongside with other local entities, such as the relation among cooperative members, cooperative and community. There are as well considerations about social projects and a self-assessment by cooperatives about their contribution to local development.
Cooperativism is based on the union of people and not on capital, on which the common entrepreneurship is focused on the needs of the group and not on profit, seeking a joint prosperity around the cooperative values that would lead to an effective balanced success. These values are: mutual aid, self -responsibility, solidarity, equity, equality and democracy, which in practice must follow the guiding principles of the seven cooperative principles: 1. Free and voluntary adhesion, 2. Democratic and free management, 3. Economic participation of members, 4. Autonomy and independence, 5. Education, training and information, 6. Intercooperation, and 7. Interest in the community (ACI, 2018).
The concern with local development is explicit in the seventh cooperative principle: Interest in the community, which clarifies that special policies must be approved by cooperative members with the fundamental objective of contributing to the sustainable development of their respective communities being also agents of social change (MILAGRES; SOUSA, 2016).
Thus, the discussion developed in this paper is based on the idea that the development to be promoted by cooperatives in the community would be related to compliance with the seventh principle, and that its success depends on the freedom of actions and decisions that people have to exercise its role as an agent of economic, social and political change.
It is known that Cooperative organizations are immersed in a modern economic system and for this, they must act as companies that must survive by the rules imposed by this system. According to Pozzobon, Zylbersztajn and Bijman (2012) the growth of cooperative structures is followed by an increase in the complexity of its management. In other words, the professionalization of these organizations has been a major challenge for all cooperatives since they are forced to evolve, which is not different from other companies, especially with regard to their activities, technological interface, economic and financial management, the complexity of the organizational structure, as well as concerning relations amongst people and institutions (VALADARES, 2005).
According to Amodeo (2006) and Sousa et al. (2017a), there is an aspect that has not yet been developed with regard to the specificities that these organizations present since the cooperatives differ from a non-cooperative company, since they act simultaneously as companies and associations.
In the study by Valadares (2005), with regard to activities, it can be said that cooperatives are migrating from a defensive behavior, characteristic of the 1970s and 1980s, to a more aggressive performance in the final markets, due to the high levels of competitiveness demanded by the new markets. It is therefore perceived that there is a need for the professionalization of management in the modern world to take place in cooperatives.
Hence, they would play their role as competitive companies in the market and guarantee their future survival and so of their associates, for this, it is important that the cooperatives do not leave aside the specificities that they have.
It should be emphasized that due to the existence of effective and efficient business management is perhaps not the only attribute to be followed in order to achieve the success of the cooperative organization. Aspects such as the relations between the cooperative and its members become a crucial factor for the strategic management, since they cannot only contribute to the distribution of benefits to the cooperative, but also make them aware of the importance of investing in the growth of the cooperative. The membership's fidelity is crucial to make them more active and committed to the decisions to be taken by the cooperative, strengthens them not only from a social point of view, but also from a business approach (SOUSA et al., 2014).
Cooperative societies, in order to obtain committed members to their organization, should invest in their training and qualification, as well as in promoting the cooperative values of the community to which they belong. That is, investing in their effective participation, aiming to have them trained, strengthen the organization, and thus have access to the market, policies and other local resources (SOUSA et al., 2017b;Pires, 2018).
According to Amodeo (2006), this approach is based on the idea that in cooperatives, besides of focus on economic management, it becomes necessary to deepen the importance of social management, understood here as the management of the relation with its members.
Therefore, not only professional business management of the highest quality is required, but also social management, through cooperative education, efficient intra-company communication systems and associates, and an internal management of power which enables joint learning AMODEO, 2018). Consequently, the company must learn to engender value from its members associated characteristics, as well as the profile of the cooperative association obtains from the company profile the economic advantages that participation in the markets makes possible (AMODEO, 2006;BIALOSKORKI NETO, 2016).
Aiming to turn cooperatives into strong and promising competitors in markets, participation should also be prioritized and the role played by cooperative representative bodies should contribute to strategic management and a balance between corporate management and social management should be promoted (SOUSA et al., 2015).
Cooperative education, organization of membership, improvement in the information channels between management and associates are some of the necessary mechanisms capable of allowing cooperatives to have a better quality management in their services and/or products. A more qualified management (transparent, participatory and democratic) guarantees the survival of the cooperative organization. According to Schneider (2006), to encourage the participation of partners, it is recommended to foster the spirit of work team and community through joint cultural, educational and charitable promotions, where the women of the members and young people have significant space in manifestation. Associates must therefore participate in centralization decisions in a flow that goes from the bottom up to the top down management of the organization.
For Albuquerque (2015), the role of the cooperative organization is to mediate processes, which are characterized by a network of relations demanding a new approach to the organization of training. The new approach has to be built from a social practice that inserts the associate and the cooperative in his community. This insertion of the cooperative organization into the local community can be applied through the seventh cooperative principle, Interest by the Community.
According to the International Cooperative Alliance (ICA), the seventh cooperative principle states that cooperatives are organizations that exist in the first instance for the benefit of their members. Because of this strong association with their members, often, in the same geographical space, they show that cooperatives are closely linked to their communities.
Thus, according to the principles, these organizations would have the special responsibility to ensure the continuity of the development of their community in economic, social and cultural aspects. Another requirement of ICA's documents on cooperative principles is the obligation to work constantly for the environmental protection of its community. In this way, it is up to the members of the cooperative society to decide how much and in what form a cooperative should contribute to their community (ACI, 2018).
Thus, it can be seen that cooperatives could, as a community organization, be an important institution in promoting the social participation of those involved in the community in order to conduct their actions and decisions and promote local development, either by making their members economically viable or articulating their interests towards political institutions (LONDEIRO; BIALOSKORSKI NETO, 2016). They could also, through the participation of their associates, be promoters of community action and conduct projects/programs for the community in which they are inserted, since their owner-users would be both agents and beneficiaries of them. By enabling its members to participate, it would simultaneously promote the formation of citizens capable of participating in the management of public policies. Likewise, by the fact that cooperatives are organizations with a presence in the community, they could become privileged collective actors, with a leading role in the development process (MILAGRES; SOUSA, 2016).
Local development involves people and the vocations of the community, their relevant role is that community action does not have an owner, but belongs to everyone. A fact that resembles it with the main characteristic of cooperatives being a collectively owned and democratically managed enterprise, in them is prevailing free admission, where the "we" prevails in a participatory way in the construction of the common good. The fact of being a social actor in the community shares social responsibilities, that is, each member of the collective feels stronger, more important and active in the construction of a public good. It also offers physical, mental and all claim that the material or financial effort sets the real community action, which, in turn, has a strong relation with cooperative organizations, due to the seventh principle also seen here. The concern with the community is what will sustain the cooperative in the future and will permanently need to command its different social, political, cultural and economic systems.
For all this, it is necessary to sensitize managers of cooperatives of the influence and importance of the integration of the associate to the cooperative (MACEDO et al., 2017). It is also necessary to make clear that this work usually shows results in the medium and long term and that cannot always be evaluated quantitatively, but qualitatively (VALADARES, 2005). Therefore, it is up to the cooperative to adopt the vision of development in economic and social management, otherwise the choices and opportunities of those who would believe in the cooperative organization as a mechanism of participation and democratic control would be limited.
Methodology
The present omy of the region is predominantly agricultural and comes from coffee cooperatives a large part of this contribution. Minas Gerais produces 56.3% of Brazilian coffee. The region of the South//southwest of Minas Gerais counts with a population of 38 agricultural cooperatives that will be objects of study of this work. In this sense, a form with open and closed questions was sent by email to all affiliated to OCEMG, but only 11 returned duly filled, which corresponded to a percentage of 28.9% of the total number of agricultural cooperatives in the region. The data were collected from the use of self-administered questionnaires, which, according to May (2011), are presented as a cheaper technique of data collection to cover a wide geographical area and is a channel for the anonymous expression of rooted points of view.
Finally, the data were analyzed with the support of the Statistical Package for the Social Sciences (SPSS), statistical software for the social sciences.
Results
Cooperatives are classified by Organization of Brazilian Cooperatives into 13 branches that differentiate them according to economic activity. In order to give compliance of treatment to the issues of interest, to present proposals and claims related to structural or conjunctural problems, to contribute to promote the integration of the branches and the achievement of common goals for the development of cooperativism. The branches are Agricultural, Consumer, Credit, Educational, Special, Housing, Infrastructure, Mineral, Production, Health, Labor, Tourism and Transportation. In the case of this study, we will consider only the 28.9% of the cooperatives belonging to the agricultural sector of Minas Gerais that answered the questionnaire sent. Other cooperatives from other sectors also answered the questionnaire, although this work only presents the results of the research with the agricultural cooperatives.
A correct cooperative business management is fundamental to guide the organization's planning, strategies, allocate and generate resources, and enable the proposed economic and social objectives to be achieved. In this sense, an attempt was made to identify how the administrative management of agricultural cooperatives in the region studied was characterized (Figure 1).
Figure 1 -Characterization of the administrative management of agricultural cooperatives in the South/Southwest of Minas Gerais
Source: Research data.
It is observed, therefore, that agricultural cooperatives are adopting the professionalization management model as a way to assist in decision-making and to improve business management and cooperative competitiveness. Hired professionals under the supervision of the board of directors manage most of them. This professionalization must be viewed with care by the cooperative members, since there is a danger that the manager's decisions will pre-empt, deferring the participation and preferred strategic options of the cooperative, that is, generating a low participation and involvement of the associated in the cooperative. The importance of having a social management good work (participation and cooperative education).
The graph above shows that the financial and economic issues (members' payment default) were the main problems mentioned in the survey; followed by the information system and the involvement of members. Some cooperatives have shown that business issues relate to social issues and not that they should be seen separately, they interconnect forming the two sides of cooperative (business and social) management. This can also be seen in figure 2 about the participation and economic outcomes.
Figure 2 -Some relations between the social participation of the cooperative and the results obtained by the agricultural cooperatives in the South/Southwest of Minas Gerais is perceived
Source: Research Data.
Although a large portion fails to visualize the social problems that surround the cooperative organization (36.36%), those who perceive it mention the involvement of the members (27.7%) as the main one in terms of entrepreneurship, concerning customer loyalty.
It is therefore clear that it is from a joint between business and social management would result in an authentic cooperative management, turning it into a more efficient system, socially and economically as well as a greater impact would be obtained in the community.
The reason for being a cooperative is its associate since the cooperative is to meet the needs of the owners and users of its services. This should play a key role in the cooperative, since the member is responsible for making the decisions and still enjoy the benefits. The membership of the cooperative must be aware of its rights and duties in order to be able to relate positively to the cooperative organization. According to the data obtained in the agricultural cooperatives of the South/Southwest of Minas Gerais, 81.8% stated that the cooperative members know about the role they play within the organization. However, since the low participation of cooperative members is one of the most common problems in Brazilian cooperativism, it was also sought to investigate whether there is any work of Organization of Membership. It is not important whether it is done through committees, nucleus, local commissions or other means of instigating the members to participate, to train, Another result points shows the sad reality of agrarian cooperativism in the South/Southwest of Minas Gerais, that is, the weakening of the membership due to the lack of engagement in cooperative education activities. This analysis can be proven through the sum of the high number of cooperative members that did not respond and those that do not perform any cooperative education activity, that is, 72.7%. A good work of cooperative education reflects in the knowledge of the organization by its members, by the local community and through the verified data. It is perceived that there can be a deficit in relation to the cooperatives of the region. For those that do carry out activities, it is noticed that these are individual actions or activities developed by other institutions, not appearing in the responses to a training process participation offered by the cooperative.
A relevant point in this work concerns the cooperative members' relation with the local community. It plays an important role in the actions of the cooperative: to promote the economic and social well-being of the cooperative, to guide them towards community participation and to contribute to an improvement in the quality of life of the population.
Cooperatives through their membership should interfere in the community dimension of development, that is, participate in other bodies that contribute to the development of the municipality, such as local councils and/or committees or any other public policy management body, to carry out actions, individually or in partnerships, to promote local sustainable development.
Other collective institutions could participate and contribute to local development within cooperative organizations. Local development is a participatory process that involves people and entities in a joint action on projects aimed at improving the living conditions of the community, so it is necessary not only to know how to articulate forces, but also to promote the corresponding process. Partnership and working together with these entities is essential and can give great support to the community. Thus, when asked about which instances collaborated locally with other organizations, 9.1% mentioned rural tourism and another 60% related to local government, with 20% specifying the Secretary of Agriculture, another 20% mentioned the Municipal Councils for Sustainable Rural Development and the others did not elaborate.
Besides the relation with other institutions, it was also analyzed whether these organizations carry out some kind of social project with the community where they are located and the area of their projects (Figure 3). Projects turned to education are the most representative, with 46% of the cooperatives, followed by Environment with 18% and culture with 9% only.
Figure 3 -Area of action of the social projects developed by the agricultural cooperatives of the South/Southwest of Minas Gerais
Source: Research data.
It was also proposed to cooperatives consulted to assess their contribution to the community in which they are inserted, assigning a grade of 1 (low participation in the community) to 5 (high participation in the community) independent of the social projects that they performed in the local community. The notes can be seen in the following chart: Although the self-evaluation grades concerning the contribution of cooperatives to the community are considered high (45% note 4), this is not consistent with the results of other questions that asked about the recognition of the organization by its members and by the local community. That is, work involving social management aspects would be important for the culture of cooperation to spread among its members and the local population.
However, 45.5% stated that the local population would not be able to differentiate the cooperative's work from a non-cooperative organization with equivalent economic functions. The pressure to focus on economic management is strong, promoting a dichotomy where social and business are presented as opposites. Searching to improve the competitiveness of cooperative organizations in the market, these organizations have often sought to become similar to companies. Thus, it seems that there is a belief that these organizations should be less cooperative in order to become more competitive in the market, and that the effort to facilitate the relationship between members and local community should be left aside (FERREIRA, SOUSA; COSTA, 2018).
The South/Southwest of Minas Gerais, besides being one of the most well-developed regions of the state, has large cooperatives that contribute economically to their members.
However, when analyzing the education processes that these organizations undertake, they prove to be inadequate and weakly articulated. Directly or indirectly, themes like financial and economic issues also relate to the participation of the cooperative members, since the greater the participation, the greater the control of the cooperative on the management, the less opportunism, the greater commitment of the members (AMODEO, 2006).
According to the study by Valadares (2005), cooperatives are migrating from a defensive behaviour to a more aggressive approach because of the high levels of competitiveness required by new markets. It is clear, therefore, that in the modern world there is a necessity for the professionalization of management in cooperatives, so that they may perform their role as competitive organizations and guarantee their survival and their members' survival. However, for this, it is important that cooperatives not forget their basic features.
In addition, there is a low participation of cooperatives articulating their work with other organizations operating in the municipality, such as local committees and boards, among others. This would be a way to achieve effective participation and also contribute to local development decisions and management of the municipality.
It can be seen that fostering participation or exercising social management is a preponderant factor that can begin with the loyalty of the members, calling them to participating continuously in the cooperative, contributing both economically and socially. It is through an effective, conscious and responsible participation of all members that the success of the socioeconomic goals of the cooperative entrepreneurship will be achieved (Valadares, 2005).
One can also conclude that the role of cooperatives in local development as mentioned in the seventh principle was not always prioritized and a cooperative education with members would be a way to achieve their commitment to the cooperative.
Cooperatives should invest in training and human development to achieve commitment from members towards the organization, as well as promote cooperative values in the community in which they belong. In other words, cooperatives should invest in effective participation so that members are empowered, thereby strengthening the organization to access the market, policies and other local resources.
Social management can be feasible through cooperative education work. Amodeo Discussions about cooperatives, community and cooperative education are based on the idea that the development promoted by cooperatives should be related to the seventh principle, and that a cooperative's success depends on the freedom of action and decisions of its members as they act as agents of economic, social and political change.
The results presented in this study indicate that work to undertake a critical analysis of the role of education on management of cooperatives is required. It is also important to emphasize the contribution of an instrument that can strengthen the cooperative movement as a whole and also the development of the community.
Conclusions
The South/Southwest region of Minas Gerais, besides being one of the most developed regions of the state, it has large cooperatives that contribute economically to its members, however, this study showed that the main business problems mentioned by these organizations relate to social issues. The low level of responses to a survey concerning social management, participation and cooperative education is, in itself, an indicator of an evaluative tendency. The social participation of the cooperative members is seen as an indicator that positively influences the economic life of the cooperative, although they manifest problems in the way of making this participation viable. Given these circumstances, it was noticed in the cooperatives the challenge of articulating their social activities.
There is a weak participation of cooperatives articulating their work with other collective institutions operating in the municipality, such as local commissions, councils, among others. Although this work is not representative of the universe of cooperative organizations in general, it is worth mentioning that their work with other institutions would be a way to make effective participatory management possible, as well as contributing to local decisions and the development of the municipality. Corroborating this importance, however, would require a more detailed and long-term study concerning the main role of these institutions on social relations.
Accordingly, cooperative activities in local development -mentioned in the seventh principle -were not always prioritized and that cooperative education among members of the social community would be a way to obtain cooperative members committed to their organization. As aforementioned, investing in training and membership qualification would also be a way of promoting cooperative values in the community. Although it is convenient to carry out a critical analysis about the role of cooperativist education in the management of these cooperatives, it was noticed that this should not be understood as a panacea, but to emphasize its importance as an instrument for constructing and strengthening cooperativism. | 2019-07-26T11:17:30.465Z | 2019-06-30T00:00:00.000 | {
"year": 2019,
"sha1": "3d125233a68fb48ce0c0b79ba5a78aa2203877bd",
"oa_license": "CCBYSA",
"oa_url": "https://seer.faccat.br/index.php/coloquio/article/download/1307/830",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "089ff335373e034533d38c1d81ccf7e5f47735e5",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Geography"
]
} |
258878954 | pes2o/s2orc | v3-fos-license | Selective Bacteriocins: A Promising Treatment for Staphylococcus aureus Skin Infections Reveals Insights into Resistant Mutants, Vancomycin Resistance, and Cell Wall Alterations
The emergence of antibiotic-resistant S. aureus has become a major public health concern, necessitating the discovery of new antimicrobial compounds. Given that the skin microbiome plays a critical role in the host defence against pathogens, the development of therapies that target the interactions between commensal bacteria and pathogens in the skin microbiome offers a promising approach. Here, we report the discovery of two bacteriocins, cerein 7B and cerein B4080, that selectively inhibit S. aureus without affecting S. epidermidis, a commensal bacterium on the skin. Our study revealed that exposure of S. aureus to these bacteriocins resulted in mutations in the walK/R two-component system, leading to a thickening of the cell wall visible by transmission electron microscopy and subsequent decreased sensitivity to vancomycin. Our findings prompt a nuanced discussion of the potential of those bacteriocins for selective targeting of S. aureus on the skin, given the emergence of resistance and co-resistance with vancomycin. The idea put forward implies that by preserving commensal bacteria, selective compounds could limit the emergence of resistance in pathogenic cells by promoting competition with remaining commensal bacteria, ultimately reducing chronical infections and limiting the spread of antibiotic resistance.
Introduction
On a global scale, Staphylococcus aureus is the second killer when it comes to deaths associated with antibiotic resistance [1]. S. aureus is responsible for various infections such as skin abscesses, wound infections, deep tissue abscesses, osteomyelitis, endocarditis, toxic shock syndrome, sepsis, and bacteremia [2]. This Gram-positive bacterium possesses several mechanisms to reduce its sensitivity to antibiotics, including the formation of biofilms, small colonies variants [3], the use of efflux pumps [4], persistence, and the ability to infiltrate human cells [5]. This pathogen first caused issues due to the emergence of methicillin-resistant Staphylococcus aureus (MRSA). With vancomycin used as a key antibiotic in the treatment of severe bacteremia caused by this pathogen, the emergence of vancomycin-intermediate Staphylococcus aureus (VISA) and vancomycin-resistant Staphylococcus aureus (VRSA) strains (defined has having MIC > 4 µg/mL and MIC ≥ 16 µg/mL respectively) is a major concern [6].
Discovery of Selective Bacteriocins
The PARAGEN collection comprises various bacteriocins which, due to their short length (<60 amino acids) and lack of post translational modifications, can be synthesized chemically. Those bacteriocins from the collection were used to realize as fast screening performed on S. aureus ATCC 6538 and on S. epidermidis ATCC 12228 by spot assays. As illustrated in Figure 1, the presence of an inhibition halo on S. aureus but not on S. epidermidis allowed for the identification of two bacteriocins which possess activities against S. aureus, with no effect against S. epidermidis: cerein 7B, a 56 amino acid peptide, previously identified as a product of Bacillus cereus Bc7 [21]; and cerein B4080, a leaderless bacteriocin, which was Antibiotics 2023, 12, 947 3 of 17 formerly identified after an in silico analysis and added to the PARAGEN collection [20] as lacticin Z-Variant 2 (novel). However, due to the fact that the gene coding for this bacteriocin sequence was discovered in Bacillus cereus B4080, we have renamed this bacteriocin as cerein B4080. chemically. Those bacteriocins from the collection were used to realize as fast screenin performed on S. aureus ATCC 6538 and on S. epidermidis ATCC 12228 by spot assays. A illustrated in Figure 1, the presence of an inhibition halo on S. aureus but not on S. epide midis allowed for the identification of two bacteriocins which possess activities against aureus, with no effect against S. epidermidis: cerein 7B, a 56 amino acid peptide, previous identified as a product of Bacillus cereus Bc7 [21]; and cerein B4080, a leaderless bacterioci which was formerly identified after an in silico analysis and added to the PARAGEN co lection [20] as lacticin Z-Variant 2 (novel). However, due to the fact that the gene codin for this bacteriocin sequence was discovered in Bacillus cereus B4080, we have rename this bacteriocin as cerein B4080. We evaluated those bacteriocins' activities against multiple bacterial strains by sp assays, including Lactococcus lactis IL1403, for which both bacteriocins were found to b active and Escherichia coli MG 1655 for which no activity was observed. Neither bacterioc demonstrated activity against Propionibacterium acnes ATCC 6919, a bacterium associate with the development of acne [22], while cerein B4080 was the only bacteriocin of the tw to be found active against the human commensal Staphylococcus hominis ATCC 27844, T ble 1. We evaluated those bacteriocins' activities against multiple bacterial strains by spot assays, including Lactococcus lactis IL1403, for which both bacteriocins were found to be active and Escherichia coli MG 1655 for which no activity was observed. Neither bacteriocin demonstrated activity against Propionibacterium acnes ATCC 6919, a bacterium associated with the development of acne [22], while cerein B4080 was the only bacteriocin of the two to be found active against the human commensal Staphylococcus hominis ATCC 27844, Table 1. This observed selectivity in the spot assay was then confirmed by the evaluation of the minimum inhibitory concentration (MIC) against S. aureus ATCC 6538 and S. epidermidis ATCC 12228 using liquid dilution assays (in Figure 2). Where the MIC value for cerein 7B against S. aureus was determined to be 20 µg/mL and the MIC value of the cerein B4080 was determined to be 90 µg/mL, no growth inhibition was observed for S. epidermidis at concentrations up to 250 µg/mL. This observed selectivity in the spot assay was then confirmed by the evaluati the minimum inhibitory concentration (MIC) against S. aureus ATCC 6538 and S. e midis ATCC 12228 using liquid dilution assays (in Figure 2). Where the MIC value fo rein 7B against S. aureus was determined to be 20 µg/mL and the MIC value of the c B4080 was determined to be 90 µg/mL, no growth inhibition was observed for S. e midis at concentrations up to 250 µg/mL.
Cerein 7B, a Membrane Anchoring Bacteriocin
Numerous bacteriocins have been reported to exert destabilizing effects on the c lar membrane, either by inducing membrane destabilization or by forming pores int membrane. We decided to focus on the cerein 7B. To gain insights into its mechanis action, we utilized AlphaFold, a powerful protein structure prediction tool, develop DeepMind, an artificial intelligence research laboratory headquartered in the U Kingdom, to perform a structural prediction of cerein 7B. The resulting structural m revealed that cerein 7B consists of two parallel alpha helices linked by a loop contain proline residue.
To explore the structure-function relationship of cerein 7B in greater depth, we ducted an alanine scanning of the entire peptide. Both the wild type cerein 7B an mutated versions were expressed in vitro, and their activities were assessed by spot a on both S. aureus ATCC 6538 and L. lactis ILI403. Most mutations (30/47) did not affe antimicrobial activity of the bacteriocin on L. lactis ILI403. However, 17 mutants sho loss of activity, providing insight into a mutative mode of action ( Figure 3). The l generated mutants and the spot assay results can be found in the Supplementary Ma Figure S2 and Table S1. Our results show that the MIC50 value of cerein 7B against S. aureus ATCC 6538 is 20 micrograms/mL, and the MIC value of the cerein B4080 was determined to be 90 µg/mL. No significant impact on the growth of S. epidermidis was observed, even at the highest tested concentration of 250 micrograms/mL of cerein 7B or of cerein B4080.
Cerein 7B, a Membrane Anchoring Bacteriocin
Numerous bacteriocins have been reported to exert destabilizing effects on the cellular membrane, either by inducing membrane destabilization or by forming pores into the membrane. We decided to focus on the cerein 7B. To gain insights into its mechanism of action, we utilized AlphaFold, a powerful protein structure prediction tool, developed by DeepMind, an artificial intelligence research laboratory headquartered in the United Kingdom, to perform a structural prediction of cerein 7B. The resulting structural model revealed that cerein 7B consists of two parallel alpha helices linked by a loop containing a proline residue.
To explore the structure-function relationship of cerein 7B in greater depth, we conducted an alanine scanning of the entire peptide. Both the wild type cerein 7B and all mutated versions were expressed in vitro, and their activities were assessed by spot assays on both S. aureus ATCC 6538 and L. lactis ILI403. Most mutations (30/47) did not affect the antimicrobial activity of the bacteriocin on L. lactis ILI403. However, 17 mutants showed loss of activity, providing insight into a mutative mode of action ( Figure 3). The list of generated mutants and the spot assay results can be found in the Supplementary Material Figure S2 and Table S1.
In particular, K8A, W2A, W3A and W6A were not able to prevent the growth of S. aureus in our assay. Both Lysine (through ionic interaction with the phosphate of the lipid headgroup) and tryptophan are membrane-anchoring residues when located at the ends of transmembrane helices [23]. This result thus supports the hypothesis that cerein 7B requires membrane anchoring to be biologically active. Furthermore, two GXXXG motifs in the peptide sequence, specifically G24AAAG28 and G43GVSG47, led to a loss of activity if either of the glycine were replaced by an alanine. These motifs are known facilitated helixhelix interactions and can lead to dimerization [24], suggesting that cerein 7B alpha helices interact together to exert their antimicrobial activity. Additionally, numerous hydrophobic amino acids showed a loss of activity, some of which forming a hydrophobic region on the bacteriocin. This hydrophobic region likely then interacts with the hydrophobic region of the lipid bilayer that makes up the membrane. shows the hydrophobic surface of the bacteriocin; (b) highlights the importance of various amino acids in cerein 7B that was gained through the alanine scanning. The critical GXXXG motifs are colored blue, while the two cysteines required for the protein's activity are colored yellow. The lysine residue, the only charged amino acid in the sequence, is highlighted in red, and the three tryptophan residues are marked in magenta. (c) shows the predicted pIDDT of the cerein 7B for the best obtained model.
In particular, K8A, W2A, W3A and W6A were not able to prevent the growth of S. aureus in our assay. Both Lysine (through ionic interaction with the phosphate of the lipid headgroup) and tryptophan are membrane-anchoring residues when located at the ends of transmembrane helices [23]. This result thus supports the hypothesis that cerein 7B requires membrane anchoring to be biologically active. Furthermore, two GXXXG motifs in the peptide sequence, specifically G24AAAG28 and G43GVSG47, led to a loss of activity if either of the glycine were replaced by an alanine. These motifs are known facilitated helix-helix interactions and can lead to dimerization [24], suggesting that cerein 7B alpha helices interact together to exert their antimicrobial activity. Additionally, numerous hydrophobic amino acids showed a loss of activity, some of which forming a hydrophobic region on the bacteriocin. This hydrophobic region likely then interacts with the hydrophobic region of the lipid bilayer that makes up the membrane.
Taken together, our findings suggest that cerein 7B likely interacts and anchors itself to the bacterial membrane through lysine and tryptophan residues, while the GXXXG motifs and Cysteines play a role in alpha helix stabilization, allowing the peptide to exert its potent antimicrobial activity. However, these results do not indicate whether cerein 7B is active as a monomer or a multimer. . This figure depicts the predicted structure of cerein 7B, generated using AlphaFold: (a) shows the hydrophobic surface of the bacteriocin; (b) highlights the importance of various amino acids in cerein 7B that was gained through the alanine scanning. The critical GXXXG motifs are colored blue, while the two cysteines required for the protein's activity are colored yellow. The lysine residue, the only charged amino acid in the sequence, is highlighted in red, and the three tryptophan residues are marked in magenta. (c) shows the predicted pIDDT of the cerein 7B for the best obtained model. Taken together, our findings suggest that cerein 7B likely interacts and anchors itself to the bacterial membrane through lysine and tryptophan residues, while the GXXXG motifs and Cysteines play a role in alpha helix stabilization, allowing the peptide to exert its potent antimicrobial activity. However, these results do not indicate whether cerein 7B is active as a monomer or a multimer.
Isolation of Mutants Able to Grow in Presence of Bacteriocins
To anticipate the impact of the selective pressure on S. aureus upon treatment using the bacteriocins, colonies able to grow in medium containing 2xMIC were isolated. Exposure to 40 µg/mL of cerein 7B led to a frequency of resistance of one in one million with six mutants, numbered 1 to 6, selected for further characterization. The same processes were realized for the cerein B4080 with 200 µg/mL, where the rate of apparition ended up being two per million and where two mutants were selected for further characterization, numbered named 7 and 9, respectively.
In order to confirm the reduced sensitivity of the isolated mutants towards the bacteriocin, a growth assay was conducted with the wild type and mutant strains. As shown in Figure 4, the mutant strains are able to grow in the presence of higher concentrations of bacteriocin as compared to the wild type S. aureus strain. Specifically, for both bacteriocins, the mutants displayed growth at twice the inhibitory concentration of the wild-type strain. Remarkably, mutant strains 1 and 2 displayed a growth capacity exceeding four-fold the inhibitory concentration of cerein 7B, whereas mutants 7 and 8 exhibited a comparable growth capability of at least four times the inhibitory concentration for cerein B4080. numbered named 7 and 9, respectively.
In order to confirm the reduced sensitivity of the isolated mutants towards the bac teriocin, a growth assay was conducted with the wild type and mutant strains. As show in Figure 4, the mutant strains are able to grow in the presence of higher concentration of bacteriocin as compared to the wild type S. aureus strain. Specifically, for both bacter ocins, the mutants displayed growth at twice the inhibitory concentration of the wild-typ strain. Remarkably, mutant strains 1 and 2 displayed a growth capacity exceeding fou fold the inhibitory concentration of cerein 7B, whereas mutants 7 and 8 exhibited a com parable growth capability of at least four times the inhibitory concentration for cerei B4080.
(a) (b) Figure 4. This figure illustrates the difference in sensitivity between S. aureus ATCC 6538 and th isolated mutants: (a) presents the quantification of the variation in sensitivity to cerein 7B betwee the wild-type strain of S. aureus and the six isolated mutants; (b) presents the quantification of th variation in sensitivity to cerein B4080 between the wild-type strain of S. aureus and the two isolate mutants.
Resistance-Associated Mutations in Two Component Systems
To gain insights into the underlying mechanism responsible for conferring resistanc and decreased sensitivity towards cerein 7B and cerein B4080, we conducted whole-ge nome sequencing (WGS) to investigate the presence of mutations in the genomes. Thos mutations are reported in Table 2. The eight isolated mutants were found to be differen from one another, but they all contained mutations in either the histidine kinase walK o the response regulator walR genes.
Isolated Mutant Mutation Position
Mutation Amino Acid Mutation Protein Affected c-t R222V WalR Figure 4. This figure illustrates the difference in sensitivity between S. aureus ATCC 6538 and the isolated mutants: (a) presents the quantification of the variation in sensitivity to cerein 7B between the wild-type strain of S. aureus and the six isolated mutants; (b) presents the quantification of the variation in sensitivity to cerein B4080 between the wild-type strain of S. aureus and the two isolated mutants.
Resistance-Associated Mutations in Two Component Systems
To gain insights into the underlying mechanism responsible for conferring resistance and decreased sensitivity towards cerein 7B and cerein B4080, we conducted whole-genome sequencing (WGS) to investigate the presence of mutations in the genomes. Those mutations are reported in Table 2. The eight isolated mutants were found to be different from one another, but they all contained mutations in either the histidine kinase walK or the response regulator walR genes.
These genes are part of the walK/R two-component regulatory system, which is also known as yycFG or vicRK [25]. This TCS plays a crucial role in regulating various cellular processes in S. aureus, including virulence regulation, biofilm production, oxidative stress resistance, cell wall synthesis, and metabolic processes [26]. The WalK protein is composed of five domains, which include the extracellular domain, the PAS domain, which acts as a molecular sensor, the phosphor-acceptor domain, responsible for the autophosphorylation of the histidine, the HATPase_c domain, which utilizes the energy provided by ATP hydrolysis, and the HAMP domain, which regulates the phosphorylation of the receptor.
The study reports novel mutations occurring in three of those domains: the G223A was located in the HAMP protein domain, the I303S in the PAS protein domain, and the R376P, as well as the previously known mutation, the V383I in the phosphor-acceptor protein domain. WalR is composed of only two domains, the receiver domain affected by the mutations S105R and A111V, and the transcriptional regulator domain affected by the R222V mutation. It is known from previously published single nucleotide mutations in various protein domains of WalK/R that can lead to a decrease in vancomycin sensitivity [27]. Those mutations are reported to lead to a thickening of the cell wall, causing a decrease in sensitivity due to the peptidoglycan-clogging theory, which postulates that the passage of vancomycin molecules across thickened peptidoglycan layers is hindered and delayed, leading to resistance. The study reports a novel A55P mutation in the auxiliary protein YycH, which interacts with YycI and the WalK histidine kinase to activate the WalK/R two-component system in Staphylococcus aureus, regulating cell wall metabolism. Mutations in yycH and yycI have previously been shown to reduce vancomycin susceptibility in clinical VISA strains [28].
Mutations outside of the genes walK/R were also found. Notably, all isolated mutants except for mutant number 3 had the same H228N mutation in the aroE gene, which encodes a shikimate dehydrogenase (NADP(+)), an essential protein. Unlike the other mutants, mutant number 3 differed from the others in that it lacked a mutation in the walk gene but possessed a mutation in the walR gene (A111V). This suggests a possible connection between mutations in walk and the aroE genes, although this connection remains unclear. Four other mutations were found in other genes: K385I in tkt, Y339T in nikA, V196L in SLL5, and V304M in menD. The K385I mutation in tkt may affect the pentose phosphate pathway, which generates pentose sugars and NADPH [29]. The Y339T mutation in nikA is expected to alter nickel transport [30], and the V304M mutation in menD may affect the biosynthesis of menaquinone, a form of vitamin K2. These mutations may have functional consequences on their respective metabolic pathways. An additional gene was discovered to be mutated in mutant 7, coding for SSL5, a Staphylococcal superantigen-like protein [31].
DNA Differences in between the S. aureus ATCC 6538 used in this study and its reference genome are available in Supplementary Material Table S2.
Mutations Leading to Resistance against Cerein7b Are Associated to Decreased Sensitivity to Vancomycin
The presence of mutations in a two-component system being involved in the cell wall regulation and known to be involved in vancomycin resistance [27] led to the hypothesis that the isolated mutants might have gained a decreased sensitivity to vancomycin. Out of the mutations identified in this study, one of them had already been reported in the literature as involved in VISA, the V383I [32], while the others are to our knowledge newly discovered.
We performed an antibiotic sensitivity assay to assess the sensitivity of the mutants to vancomycin, Table 3. Mutants 1, 2, 6, 7, and 8 exhibited decreased sensitivity to vancomycin, with mutants 1 and 6 showing a two-fold change in MIC value and mutant 2 showing a four-fold change. No other variations in antibiotic resistance were observed between the wild type and isolated mutants.
Resistant Mutant Display Cell Wall Thickening
The decrease in sensitivity towards vancomycin in S. aureus due to mutations in WalK/R has been linked to a cell wall thickening in previous works [33]. To investigate possible changes in cell wall thickness, we used Transmission Electron Microscopy (TEM) on the siz isolated mutants against the cerein 7B. A first set of observations was realized on the wildtype strain and isolated mutant 2 in both the exponential phase and the stationary phase, and showed that a thicker cell wall is present but only at the stationary phase, with an average cell wall thickness of 43.8 ± 8.0 nm for the wild-type strain and 55.0 ± 16.3 nm for the isolated mutant 2, Supplementary Material Figure S3. All five remaining isolated mutants were then tested at the stationary phase, and an increase in cell wall thickness was systematically observed with average values of 61.8 ± 14.3 nm, 48.2 ± 13.0 nm, 50.9 ± 8.3 nm, 47.5 ± 16.0 nm, and 47.0 ± 10.15 nm for the mutants 1, 3, 4, 5, and 6, respectively ( Figure 5). This confirms the hypothesis that those mutations led to an increase in cell wall thickness. Multiple comparisons using one-way ANOVA revealed that all observed changes in cell wall thickness were statistically significant.
Extreme readings of cell wall thickness were also observed. Those are due to instances where the bacteria fail to divide correctly, leading to extra cell wall material. Figure 6 shows the difference between the wild-type strain with normal cell wall thickness and with the isolated mutant number 2, notably showing an image of an event where the cell fails to divide properly. Similar instances of failed cell divisions were observed in all isolated mutants, except for number 4, Supplementary Material Figure S1. This implies that this phenotype of increased cell wall thickening comes at the cost of events of division failure.
Changes in Gene Expression Levels Linked to Cell Wall Alterations
In order to elucidate the mechanism by which current mutations impact the cell wall, an RNA sequencing analysis (RNA-seq) was performed on both the wild-type strain and one of the mutant strains. The selection of the mutant strain number 2 was based on its significant alteration of minimum inhibitory concentration (MIC) for vancomycin. The RNAseq analysis revealed significant differential expression of 259 genes, with 130 upregulated and 129 downregulated genes (Figure 7). Subsequent analysis indicated that the observed changes in cell wall structure could be attributed to alterations in the expression of several of those genes. RNA sequencing data revealed downregulation of genes involved in cell wall synthesis and maintenance. The lytm gene and SAOUHSC_02170 genes, coding for peptidoglycan hydrolases, were downregulated, leading to the hypothesis that observed cell wall phenotype could be attributed to a reduction of peptidoglycan hydrolysis [34]. SceD, a lytic transglycosylase with autolytic activity, and SAOUHSC_00427, a gene coding for an autolysin, were also downregulated [35]. Those could lead to cell division failure as autolysis is requiring for this process to occurs normally. Additionally, the genes SAOUHSC_00773, SAOUHSC_02855, and SAOUHSC_02883 coding for protein containing LysM domain, a protein motif for binding to peptidoglycans, were also downregulated [36].
Changes in Gene Expression Levels Linked to Cell Wall Alterations
In order to elucidate the mechanism by which current mutations impact th an RNA sequencing analysis (RNA-seq) was performed on both the wild-type one of the mutant strains. The selection of the mutant strain number 2 was b significant alteration of minimum inhibitory concentration (MIC) for vanco RNA-seq analysis revealed significant differential expression of 259 genes, w regulated and 129 downregulated genes (Figure 7). Subsequent analysis indica observed changes in cell wall structure could be attributed to alterations in the of several of those genes.
RNA sequencing data revealed downregulation of genes involved in ce thesis and maintenance. The lytm gene and SAOUHSC_02170 genes, coding doglycan hydrolases, were downregulated, leading to the hypothesis that ob wall phenotype could be attributed to a reduction of peptidoglycan hydrolysis a lytic transglycosylase with autolytic activity, and SAOUHSC_00427, a gene an autolysin, were also downregulated [35]. Those could lead to cell divisio autolysis is requiring for this process to occurs normally. Additionally, the ge HSC_00773, SAOUHSC_02855, and SAOUHSC_02883 coding for protein LysM domain, a protein motif for binding to peptidoglycans, were also dow [36].
The Study Findings
In the field, there is a growing consensus that bacteriocins with selective activities that take into account microbial interactions in the targeted microbiome represent a promising solution for the treatment of infections without further exacerbating antimicrobial resistance. However, the discovery of such compounds with the desired selectivity is challenging. Here, we showed that the use of the PARAGEN collection [20] led to the discovery of such bacteriocins for the treatment of S. aureus skin infections. Both the cerein 7B and cerein B4080 have the advantage of selectively inhibiting the growth of S. aureus without affecting the growth of the commensal skin bacteria, S. epidermidis. The upregulation of genes such as mprF and dltABCD was observed, resulting in increased expression levels. Overexpression of mprF leads to the addition of lysine to the phosphatidylglycerol present in the cell membrane, while overexpression of dltABCD results in the addition of alanine to the teichoic acids on the cell wall. Both of these changes result in a less negatively charged membrane, thereby decreasing its sensitivity to cationic antimicrobial peptides [37,38]. Furthermore, downregulation of vraF and vraG was also observed. These genes are involved in the formation of a complex with the GraSR protein, leading to an increase in resistance to cationic antimicrobial peptides [39]. Those changes could explain a decrease in sensitivity to both bacteriocins as they are cationic peptides.
The Study Findings
In the field, there is a growing consensus that bacteriocins with selective activities that take into account microbial interactions in the targeted microbiome represent a promising solution for the treatment of infections without further exacerbating antimicrobial resistance. However, the discovery of such compounds with the desired selectivity is challenging.
Here, we showed that the use of the PARAGEN collection [20] led to the discovery of such bacteriocins for the treatment of S. aureus skin infections. Both the cerein 7B and cerein B4080 have the advantage of selectively inhibiting the growth of S. aureus without affecting the growth of the commensal skin bacteria, S. epidermidis.
Both cerein 7B and cerein B4080 are bacteriocins produced by Bacillus strains; specifically, Bacillus cereus Bc7 [21] for cerein 7B and Bacillus cereus B4080 for cerein B4080. These Bacillus strains are typically found in soil environments and are not commonly associated with the skin microbiome. As such, the observed selectivity of these bacteriocins against skin bacteria is unexpected.
Our findings suggest that cerein 7B anchors to the bacterial membrane through lysine and tryptophan residues, while its antimicrobial activity is dependent on GXXXG motifs and cysteines, which stabilize its alpha helix. However, the selectivity of the peptide between S. aureus and S. epidermidis remains unexplained. This selectivity could be due to differences in cell wall composition, surface charge, permeability, fluidity, and membrane stability of the bacteria. Alternatively, a membrane protein may serve as the target recognized by the bacteriocin.
There is a sense in previous studies [40] that bacteriocins are less likely to cause resistance than antibiotics. In our study, mutants that were able to grow in medium containing the bacteriocins were isolated and had an apparition rate of 1 for 1 million or 2 for a million. This indicates that the occurrence rate is higher in this case than what is typically observed, as the frequency of spontaneous mutations for antibiotic resistance is approximately 10 8 to 10 9 [41]. More concerning are the results obtained by an antibiogram performed at the Saint-Luc hospital in Brussels, which highlighted a decrease in sensitivity to vancomycin for some of the isolated mutants.
The development of resistance in S. aureus against antimicrobial peptides has already been observed and can involve two-component systems [42]. In the aspRS system, the presence of antimicrobial peptides is detected by the sensor, resulting in increased expression of genes such as mprF and dltABCD. Overexpression of these genes leads to changes in the cell membrane and cell wall, resulting in reduced sensitivity to cationic antimicrobial peptides [37,38]. Our study reveals a link between walk/R point mutations and the gene levels expression of mprF and dltABCD, likely to be responsible of sensitivity reduction to the selective bacteriocins discovered.
In our research, a WGS performed on those isolated mutants led to the observation that the two-component system walK/R, previously known to be involved in vancomycin resistance, was responsible for those changes and was involved in the resistance against both bacteriocins tested in this study. In our study, new mutations in walK/R that are causing a decrease in vancomycin sensitivity were discovered: the G223A and R376P in WalK; and the R222V and S105R in WalR. This information can help grow the current knowledge of this problematic resistance and improve diagnostics.
TEM analysis revealed that a thickening of the cell wall was present for all isolated mutants at the stationary phase, and that this phenotype led to occasional division failure. This is in line with previous work which proposed the peptidoglycan-clogging theory, which postulates that the passage of vancomycin molecules across thickened peptidoglycan layers is hindered and delayed, leading to resistance [27]. This phenotype of a thicker cell wall comes at the cost of division failure and therefore reduces fitness compared to the wild type in environments where no antimicrobial pressure is present. RNA sequencing results suggest that the observed cell wall thickening phenomenon may be attributed to the downregulation of peptidoglycan hydrolases. Furthermore, the failure in cell division could potentially be driven by the downregulation of autolysin.
Significance of the Findings-Implication for the Field
The findings of this study shed light on the potential and limitations of bacteriocins as an alternative to antibiotics. Despite their narrow spectrum of activity and lower propensity to induce resistance [28], our research has shown that cross-resistance can occur between bacteriocins and antibiotics. It is important to consider the implications of these findings for the development of selective antimicrobial compounds. Rather than looking for new compounds which theoretically do not induce any resistance; we suggest that selective compounds may be a more effective and sustainable approach. By targeting the pathogenic bacteria responsible for infections, commensal bacteria can be preserved, and the few pathogenic cells that have adapted to the selective pressure of the antimicrobial compounds would face harsh competition from commensal bacteria, likely leading to their elimination. This approach could pave the way for more effective and sustainable antimicrobial strategies in the future, ultimately resulting in fewer chronic infections and a lower spread of resistance.
Limitations of the Study
The present study has some limitations and weaknesses that need to be addressed. Firstly, a bacteriocin with selectivity between S. aureus and S. epidermidis is promising, but the complexity of the various skin microbiome interactions is not fully represented in our model. Further studies with a much broader representation of the skin microbiome should be conducted to better understand the impact of antimicrobial activity of bacteriocins on the skin microbiome. Secondly, while the DNA mutations in S. aureus isolated mutants and their effects on decreased sensitivity and changes in cell wall thickness were characterized, they were studied in groups. The individual introduction of those mutations one by one would be necessary to attribute them to the phenotype changes that were observed.
Spot Assay
To conduct the spot assay experiment, a solid M17 growth medium (37 g of M17 growth media and 10 g/L of glucose, and 1.5% agar; the final pH was adjusted to 6.9) is prepared as the first layer in a petri dish. The second layer is composed of the same M17 medium but containing a reduced agar content of 0.4% and is mixed with 100 microliters of an overnight preculture from the chosen indicator strain per 10 mL of the medium and poured onto the first layer. After laminar flow exposure for 20 min, a 2 microliter drop of 1 mg/mL cerein 7B or cerein B4080 solution is added at a specific location on the plate. The petri dish is then incubated overnight at 37 • C. An inhibition halo at the drop's location indicates antimicrobial effectiveness, while its absence suggests a lack of activity. For S. hominis ATCC 27844, S. epidermidis ATCC 12228, L. lactis IL403, and S. aureus ATCC 6538, M17 grow medium was used. For Propionibacterium acnes ATCC 6919, the Reinforced Clostridial Agar (RCA) growth medium, composed of peptone (10 g/L), yeast extract (13 g/L), glucose 5 g/L), soluble starch (1 g/L), cysteine hydrochloride (0.5 g/L), sodium acetate (3 g/L), and sodium chloride (5 g/L), at adjusted pH 6.8, was used in anaerobic condition for culture growth.
Minimum Inhibition Concentraion (MIC)
The experimental procedure involved the preparation of a preculture of the indicator strain at a known concentration of 10 5 CFU/mL. From this preculture, 37.5 microliters were transferred and mixed into 15 mL of growth liquid culture media. In a 96-well plate, 150 microliters of the indicator strain solution were dispensed into each well of a line, with the exception of the last column, where 200 microliters of liquid growth media were added. The total volume of each well was made up to 200 microliters by the addition of a solution containing the antimicrobial compound. A determined starting concentration of the antimicrobial compound was introduced in the first condition and was halved for each subsequent condition. This created a total of 11 conditions, with an additional condition added without any antimicrobial compound. Subsequent to the addition of the antimicrobial compound, the plate was incubated at 37 • C for 24 h. After the incubation period, the absorbance of each well was measured at 600 nm using a Hidex plate reader. To eliminate the absorbance value of the growth media alone, the values obtained in the last column, which served as a negative control with no bacterial growth, were subtracted from the values of the other wells in the same column. The resulting data for the remaining 11 wells in each column were plotted on a graph with the bacteriocin concentration on the x-axis and the normalized absorbance value on the y-axis. The EC50 value was calculated as the concentration of the antimicrobial compound at which 50% growth inhibition was achieved.
Structural Prediction Alpha Fold
In this research project, Alpha Fold [43], a deep learning-based method for predicting protein structure, was employed to predict the structures of various bacterial proteins. The specific implementation utilized was the "AlphaFold2-advanced.ipynb" notebook, as described in the publication by Jumper et al. [44]. The default parameters of the algorithm were employed, and the input provided to the program was the amino acid sequence of the target protein. Additionally, the homo oligomer value was set to 1.
In Vitro Production of Cerein 7B Bacteriocin and Mutated Versions
The DNA of the mutated version of the cerein 7B were made based on the method use in the PARAGEN collection [20]. The DNA template for the protein of interest was amplified and purified using standard molecular biology techniques. This DNA template was then produced using the Pure Express Protein Synthesis in vitro kit New England BioLabs, manual E6800 was utilized to assemble the reaction mixture, in accordance with the manufacturer's protocol. The reagents and enzymes provided in the kit were combined in a specific ratio and order. The reaction mixture was then incubated at 37 • C for 2 h to facilitate the transcription and translation of the DNA template into the protein of interest. Following the incubation period, the synthesized protein was immediately utilized for the realization of spot assays.
Isolation of Mutants Able to Grow in Presence of Bacteriocins
The mutants were isolated on a petri dish plate made of M17 growth medium (37 g of M17 growth media and 10 g/L of glucose, and 1.5% agar; the final pH was adjusted to 6.9, and the cerein 7B bacteriocin at a 40 microgram/mL final concentration of the cerein B4080 at 200 microgram/mL final concentration). A pre-culture of S. aureus ATCC 6538 was grown in M17 liquid media. After overnight incubation at 37 • C, 50 microliters of the solution from this preculture were added onto the medium layer in the petri dish plate. The plates were then incubated overnight at 37 • C, and the number of colonies able to grow on the medium was counted once the incubation completed. Some colonies were grown in M17 liquid growth media, containing the bacteriocin at the same concentration for an overnight culture. The next day, they were stored at −80 • C with a glycerol concentration of 20%. The colony-forming units (CFUs) from the pre-culture of S. aureus ATCC 6538 was determined by serial dilution to determine the rate of resistance apparition.
S. aureus DNA Sequencing
The genomic DNA of S. aureus cells was extracted using the "GenElute™ Bacterial Genomic DNA Kit Protocol" from Merck. Following DNA extraction, the samples for the wild-type strain and the isolated mutants 1, 2, 7, and 8 underwent Illumina sequencing at Genewiz. Subsequently, the obtained DNA sequence data were analyzed for the presence of mutations in comparison to the wild-type strain S. aureus ATCC 6538 and the isolated mutants using the Galaxy platform. The data were uploaded to the Galaxy web-based platform and analyzed using the publicly available server at usegalaxy.org [45].
MinION sequencing was also used for the wild-type strain and the isolated mutants 3, 4, 5, and 6. The libraries for the S. aureus ATCC 6539 wild type DNA and the isolated mutant's DNA were prepared using the "Rapid Barcoding Kit SQK-RBK004" provided by Oxford Nanopore Technology. The MinION Mk1B device, containing a flow cell R9, was used for this purpose. Base calling was performed using the Guppy software [46], while read filtering was conducted using Chopper with a minimum Q-score of 10. The genome alignment step was carried out using Minimap2 [47] followed by Samtools [48]. Single nucleotide polymorphism (SNP) calling was performed using VCFTools [49] and a custom Python script.
Vancomycin Antibiogram
The BD Phoenix automated system was employed to determine the antimicrobial susceptibility of the various bacterial strains using the PMIC-90 panel (Becton Dickinson, Sparks, MD, USA). The system utilizes the principle of broth microdilution to determine the minimum inhibitory concentrations (MIC) of vancomycin against the tested strains. The results were interpreted according to the Clinical and Laboratory Standards Institute (CLSI) guidelines.
Transmission Electronic Microscopy (TEM)
Bacteria samples were collected by centrifugation from liquid cultures, fixed for 1 h at room temperature in 2.5% glutaraldehyde in culture medium, rinsed in 0,1 M cacodylate buffer, and postfixed in 2% OsO4 in the same buffer. After serial dehydration in increasing ethanol concentrations, samples were embedded in agar 100 (Agar Scientific Ltd., Stansted, UK) and left to polymerize for 2 days at 60 • C. Ultrathin sections (50-70 nm thick) were collected in Formvar-carbon-coated copper grids by using a Leica EM UC6 ultramicrotome and stained with uranyl acetate and lead citrate by standard procedures. Observations were made on a Tecnai 10 transmission electron microscope (FEI). Morphometric analyses and images processing were performed using the SIS iTEM (Olympus) software. Cell wall length were measure on all cells presented in the images, for each cell, 4 measures were taken at each cardinal point. This methodology led to extreme reading when phenomenon of failed cell division was encountered.
RNA Sequencing and Analysis
The RNA extraction was realized on cell cultures once they reached the stationary growth phase by using the NucleoSpin RNA kit (Mascherey-Nagel). The cell lysis step was adapted, i.e., by using a lysosomes concentration 10 times higher than the one recommended in the kit. The quantity and quality of RNA extracted was measure by Qubit. The RNA sequencing was then performed by Seqalis.
The RNA-seq libraries were sequenced on an Illumina NextSeq 2000 system using 2 × 100 bp paired-end sequencing. The raw sequencing reads were pre-processed using Cutadapt v4.2 to remove adapter sequences and low-quality reads. The pre-processed reads were then aligned to the reference genome using STAR v2.7.10b with default parameters. Gene expression levels were estimated using featureCounts v2.0.3. The count data were imported into R using the DESeq2 package. Differential gene expression analysis was performed using DESeq2 with a false discovery rate (FDR) of less than 0.05 and an absolute log2-fold change greater than 1 considered significant. The differentially expressed genes were annotated using the Gene Ontology (GO) database and pathway analysis was performed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) database.
Conclusions
In summary, this study provides valuable insights into the potential of bacteriocins as selective antimicrobial compounds for the treatment of S. aureus skin infections. The PARAGEN collection has facilitated the discovery of two bacteriocins, cerein 7B and cerein B4080, which exhibit selective inhibition against S. aureus while leaving commensal skin bacteria unaffected. Nevertheless, the study also highlights the possibility of crossresistance between these bacteriocins and vancomycin, indicating the need for a cautious approach towards their clinical use as such use could worsen the current problem of vancomycin resistance in S. aureus. Furthermore, the identification of new mutations in the walK/R two-component system, which cause decreased vancomycin sensitivity and cell wall thickening, represents a significant contribution to our understanding of problematic resistance and offers new diagnostic possibilities. The findings of this study have crucial implications for the development of more effective and sustainable antimicrobial strategies, which could lead to a significant reduction in chronic infections and a decrease in the spread of resistance. | 2023-05-25T15:05:35.821Z | 2023-05-23T00:00:00.000 | {
"year": 2023,
"sha1": "df941ab6d00955bd642a86d209a22aeff5401366",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.3390/antibiotics12060947",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d9f228bdf1f5b16e74d234cc48cddb6e9147951f",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
209377443 | pes2o/s2orc | v3-fos-license | The CD40-ATP-P2X7 Receptor Pathway: Cell to Cell Cross-Talk to Promote Inflammation and Programmed Cell Death of Endothelial Cells
Extracellular adenosine 5′-triphosphate (ATP) functions not only as a neurotransmitter but is also released by non-excitable cells and mediates cell–cell communication involving glia. In pathological conditions, extracellular ATP released by astrocytes may act as a “danger” signal that activates microglia and promotes neuroinflammation. This review summarizes in vitro and in vivo studies that identified CD40 as a novel trigger of ATP release and purinergic-induced inflammation. The use of transgenic mice with expression of CD40 restricted to retinal Müller glia and a model of diabetic retinopathy (a disease where the CD40 pathway is activated) established that CD40 induces release of ATP in Müller glia and triggers in microglia/macrophages purinergic receptor-dependent inflammatory responses that drive the development of retinopathy. The CD40-ATP-P2X7 pathway not only amplifies inflammation but also induces death of retinal endothelial cells, an event key to the development of capillary degeneration and retinal ischemia. Taken together, CD40 expressed in non-hematopoietic cells is sufficient to mediate inflammation and tissue pathology as well as cause death of retinal endothelial cells. This process likely contributes to development of degenerate capillaries, a hallmark of diabetic and ischemic retinopathies. Blockade of signaling pathways downstream of CD40 operative in non-hematopoietic cells may offer a novel means of treating diabetic and ischemic retinopathies.
INTRODUCTION
Glia orchestrate homeostasis in neural tissue through cell-to-cell interactions. Communication among glial subsets and communication between glia and other cells of the nervous system are also important during the development of disorders with an inflammatory component. ATP released by astrocytes appears to cause neuroinflammation by activating pro-inflammatory responses in microglia (1). Retinopathies caused by diabetes and ischemia are driven to a significant extent by chronic inflammation (2)(3)(4). The CD40 pathway is activated in these retinopathies and CD40 has emerged as a central mediator of inflammatory responses and pathology in these disorders (5)(6)(7). Herein I will review our work that identified CD40 expressed in retinal Müller glia as a trigger for secretion of ATP that in turn engages the P2X 7 receptor leading to pro-inflammatory cytokine production by monocyte/macrophages/microglia and programmed cell death of retinal endothelial cells (7,8). Through this process, CD40 present in a non-hematopoietic cell amplifies inflammation and causes tissue pathology.
Studies in patients with congenital absence of functional CD154 (Hyper IgM syndrome, X-HIM) provide clinical evidence for the central role of this pathway in adaptive immunity (16). CD40-CD154 interaction promotes dendritic cell maturation inducing licensing of these cells for efficient T cell priming (14,17,18). This pathway stimulates IL-12 secretion by dendritic cells that in turn promotes CD4 + T cell differentiation into Th1 cells (14,17,18). It also supports CD8 + cytotoxic T lymphocytes (CTL) development and prevents CTL exhaustion (19). CD40-CD154 interaction promotes pro-inflammatory cytokine production by macrophages and activates effector functions that are central to control of intracellular pathogens (14,18). Indeed, the most important clinical feature of patients with X-HIM is the increased susceptibility to opportunistic infections normally controlled by cell-mediated immunity (16). The CD40-CD154 pathway is also central for humoral immune responses including B cell proliferation, germinal center formation, antibody production, immunoglobulin class switch, and the generation of B cell memory (14,17,18).
In contrast to hematopoietic cells, little is known about the physiologic role of CD40 in non-hematopoietic cells. It has been proposed that CD40 promotes survival of neurons in the brain since old CD40 −/− mice (16 months of age) have reduced expression of neurofilament isoforms and exhibit evidence compatible with increased neuronal programmed cell death (TUNEL + neurons) (12). In addition, in developing neural tissue, CD40 promotes axon growth in sympathetic neurons and has effects on dendrite growth that vary depending on the class of neurons: CD40 promotes dendrite growth in hippocampal excitatory neurons while it suppresses dendrite growth in striatal inhibitory neurons (20,21). It is not known whether CD40 regulates the development and survival of retinal neurons. Moreover, the physiologic function of CD40 expressed in non-hematopoietic compartments in other organs is unclear. This may be explained by the low levels of CD40 expression in these compartments under basal conditions. In contrast, CD40 is upregulated in various inflammatory disorders and, through ligand engagement, CD40 triggers pro-inflammatory responses in endothelial cells, vascular smooth muscle cells and epithelial cells that play a key role in the pathogenesis of various disorders such as inflammatory bowel disease, systemic lupus erythematosus, rheumatoid arthritis, multiple sclerosis, graft rejection, and atherosclerosis (22,23). These responses include increased protein expression of adhesion molecules, chemokines, metalloproteinases, and tissue factor (22,23). The effects of CD40 ligation on retinal non-hematopoietic cells are discussed below.
DIABETIC AND OTHER ISCHEMIC RETINOPATHIES
Diabetes mellitus has become one of the most important health problems in the world. It is estimated that there are 422 million patients with diabetes worldwide (World Health Organization; www.who.int/diabetes/global-report). Diabetic retinopathy (DR) is a major complication of diabetes and eventually occurs in ∼35% of patients with diabetes (24). In addition, DR is the most common cause of vision loss among working-age adults in developed countries (25). The development of DR appears to be multifactorial and mechanisms such as oxidative stress, increased polyol and hexosamine pathway flux, protein kinase C activation, increased formation of advanced glycation-end products and alterations in systemic and local lipid metabolism have been linked to the development of the disease (26,27). Ample experimental data indicate that low-grade chronic inflammation also plays an important role in the development of DR (2)(3)(4).
The vitreous of patients with DR (28) and retinal endothelial cells from diabetic humans and rodents exhibit increased expression of ICAM-1, an event that promotes adherence of leukocytes to the retinal vasculature (leukostasis) (29,30). This phenomenon is important since blockade of ICAM-1-CD18 interaction diminishes the development of degenerate capillaries in diabetic mice (31). These structures are a hallmark of early diabetic retinopathy and are formed as a consequence of the death of endothelial cells and pericytes, leading to the transformation of capillaries into collapsed sheaths of collagen/extracellular matrix structures that lack blood flow (32). The ensuing ischemia can promote transition to proliferative DR (PDR) that is characterized by retinal neovascularization. DR is also accompanied by increased expression of TNF-α and IL-1β (33)(34)(35)(36). Microglia/macrophages express TNF-α in the diabetic retina (34). TNF-α and IL-1β play a pathogenic role in DR since they contribute to diabetes-induced degeneration of retinal capillaries (37,38). Inducible nitric oxide synthase (NOS2) is expressed in the retinas of patients with DR and of diabetic rodents (39,40). Furthermore, diabetic NOS2 −/− mice have reduced retinal leukostasis and capillary degeneration (41,42). CCL2 levels are increased in the vitreous fluid in patients with PDR (43) and in retinas of diabetic rodents (44). This chemokine appears to play a pathogenic role in DR since there is a correlation between CCL2 protein levels in the vitreous with the severity of DR (43).
CD40 IN THE DEVELOPMENT OF DIABETIC AND I/R-INDUCED RETINOPATHIES
CD40 is expressed in the retina at the level of endothelial cells, Müller glia (important macroglia in the retina), microglia, ganglion cells, and retinal pigment epithelial cells (5,6,55,56). The levels of CD40 expression are low under basal conditions. However, induction or upregulation of CD40 expression is a feature of inflammatory disorders driven by CD40 (57). Indeed, CD40 mRNA is upregulated in the retina of mice with diabetes and mice subjected to retinal I/R (5, 6). Immunohistochemistry and flow cytometry studies to assess protein expression revealed that CD40 is upregulated in retinal endothelial cells, Müller glia and microglia of diabetic mice (6). Importantly, CD40 −/− mice are protected from I/R-induced retinopathy and early diabetic retinopathy (5,6).
CD40 is central for the development of retinal inflammation and retinopathy induced by I/R. In contrast to wild-type mice, CD40 −/− mice subjected to I/R are protected from upregulation of ICAM-1, CXCL1, NOS2, and COX-2 mRNA levels (5). The reduced expression of NOS2 and COX-2 is explained at least in part by diminished recruitment of NOS2 + COX-2 + leukocytes into the retina of CD40 −/− mice (5). Importantly, the loss of ganglion cells and the development of capillary degeneration are markedly attenuated in ischemic retinas of CD40 −/− mice (5). The protection from development of ischemic retinopathy observed in CD40 −/− mice is likely explained by diminished leukocyte infiltration and reduced expression of pro-inflammatory molecules since blockade of ICAM-1, NOS2, or COX-2 protect from retinal pathology after ischemia (51,52,54). Altogether, CD40 is a central mediator of inflammation and neuro-vascular degeneration after I/Rinduced injury of the retina. The model that likely explains these findings is as follows: ischemia-induced activation of CD40 in retinal endothelial cells triggers ICAM-1 and KC/CXCL1 upregulation leading to recruitment of NOS2 and COX-2expressing leukocytes that would in turn promote neurovascular degeneration in the retina (5). However, it is also possible that Müller glia from ischemic retinas could be a source of increased NOS2 and/or COX-2 expression after activation via CD40.
The upregulation of CD40 and CD154 indicate that this pathway is activated in diabetes. CD40 protein expression in increased in the retina of diabetic mice and in the kidneys of patients with diabetic nephropathy (6,59). CD40 mRNA levels are upregulated in the retinas of diabetic mice (6). Peripheral blood mononuclear cells from poorly controlled patients with type I diabetes exhibit increased mRNA levels of the functional type I isoform of CD40 (60). It is not known whether changes in micro RNA that control CD40 transcription [i.e., miR-155, miR-424, miR-503 (61, 62)] explain the upregulation of CD40 mRNA. In addition, CD154 protein levels are elevated in the blood from patients with diabetic microangiopathy and mice with diabetes (7,63,64). CD154 upregulation is biologically relevant since serum CD154 from diabetics triggers pro-inflammatory responses in endothelial cells and monocytes (63). It is likely that CD154 levels are also increased in the retina because microthrombosis occurs in diabetic retinopathy and activated platelets express CD154 (65).
CD40 IN MÜLLER GLIA RECRUITS INFLAMMATORY RESPONSES IN BYSTANDER MICROGLIA/MACROPHAGES
Leukocytes are recognized key players in the development of inflammatory disorders. Indeed, expression of NOS2 or poly(ADP-ribosyl) polymerase 1 (PARP1) in bone marrow cells is necessary for the development of early DR (66). Similarly, CD40 in hematopoietic cells has been deemed a central driver of inflammation. However, studies in mice using bone marrow chimeras revealed that CD40 expressed in non-hematopoietic cells is also required for inflammation (5). Absence of CD40 in the retina inhibits ICAM-1 mRNA upregulation, leukocyte recruitment to the retina and neurovascular degeneration after I/R of the retina (5). Importantly, studies using transgenic mice have established that CD40 expression in a nonhematopoietic cell-Müller glia-is sufficient for development of an inflammatory disorder (7).
Müller glia link with neurons and capillaries, and are central to retina homeostasis (67,68). Müller glia become dysfunctional and acquire expression of proinflammatory genes in diabetic and other ischemic retinopathies (69)(70)(71). The fact that Müller glia express CD40 raised the possibility that CD40 present in these cells may be an important activator of inflammation and retinal injury. Studies in transgenic mice that expressed CD40 restricted to Müller glia demonstrated that, after induction of diabetes, the presence of CD40 in these cells was sufficient for upregulation of ICAM-1, NOS2, TNF-α, IL-1β, CCL2 mRNA levels as well as for development of leukostasis and capillary degeneration (7). This work identified CD40 in Müller glia as a central regulator of inflammation and development of early diabetic retinopathy.
Despite the fact that CD40 in Müller glia from diabetic mice drives TNF-α and IL-1β in vivo, work done in vitro revealed that human and rodent Müller glia are unable to secrete these pro-inflammatory cytokines in response to CD40 ligation even though these cells react to CD40 stimulation (CCL2 secretion and ICAM-1 protein upregulation) (7). This apparent discrepancy raised the possibility that CD40 in Müller glia acts on bystander microglia/macrophages to promote expression of TNF-α and IL-1β.
Testing whether Müller glia activated by CD40 induce IL-1β and TNF-α production in bystander monocytes/macrophages was done by adding human CD154 to human CD40 + Müller glia incubated with CD40 − human monocytic cells (to avoid the effects of direct CD40 ligation on these cells), or by adding human CD154 to human CD40-expressing mouse Müller glia incubated with mouse macrophages (human CD154 does not stimulate mouse CD40 expressed in macrophages) (7). While Müller glia and monocyte/macrophages failed to secrete TNF-α and IL-1β in response to CD154, addition of CD154 to the co-culture of these cells triggered TNF-α and IL-1β production (7). The in vitro studies have an in vivo correlate since diabetic mice that express CD40 restricted to Müller glia upregulate TNF-α protein levels in microglia/macrophages but not in Müller glia while the latter cells upregulate CCL2 protein levels (7). Taken together, these studies revealed that Müller glia activated by CD40 induce proinflammatory responses in bystander microglia/macrophages.
THE CD40-ATP-P2X7 PATHWAY AND INFLAMMATORY RESPONSES IN BYSTANDER MICROGLIA/MACROPHAGES
ATP functions not only as a neurotransmitter for neurons but can also be secreted by non-excitable cells (72,73). Moreover, various cell types express P2 purinergic receptors. These receptors are divided into ATP-gated ionotropic P2X receptors and metabotropic, G protein-coupled P2Y receptors (72,73). The seven subtypes of P2X receptors are ligand-gated channels permeable to Ca 2+ , Na + , and K + . P2X 7 receptor is characterized by the ability to form large trans-membrane pores in response to repetitive or prolonged exposure to ATP (72,73). P2X 7 receptor is key for IL-1β and TNF-α secretion by microglia/macrophages stimulated with ATP (74,75). Indeed, secretion of ATP by astrocytes may cause P2X 7 -dependent microglial activation that would drive neuroinflammatory and degenerative disorders (76).
In vitro and in vivo studies were conducted to determine whether CD40 acts through ATP-P2X 7 signaling to induce cytokine production in bystander myeloid cells. These studies showed that CD40 is an inducer of ATP release in Müller glia (7). Moreover, purinergic signaling explains TNF-α and IL-1β secretion in bystander monocytes/macrophages incubated with Müller glia activated by CD40. Blockade of the P2X 7 receptor either by pharmacologic approaches, knockdown of P2X 7 or the use of macrophages from P2X −/− 7 mice results in marked inhibition of TNF-α and IL-1β secretion (7). In addition, a purinergic receptor ligand (Bz-ATP) enhances cytokine production by monocytic cells (7).
As described above, studies in diabetic transgenic mice that express CD40 only in Müller glia revealed that TNF-α is expressed in a distinct compartment-microglia/macrophages (7). Moreover, P2X 7 receptor mRNA levels are enhanced in the retinas of diabetic mice and P2X 7 receptor protein expression is increased in microglia/macrophages from these animals (7). This is relevant since increased levels of P2X 7 receptor facilitate the effects of the receptor (77). Mice treated with the P2X 7 receptor inhibitor BBG as well as P2X −/− 7 mice are protected from diabetes-induced upregulation of IL-1β and TNFα mRNA levels (7). The mice are also protected from increased expression of ICAM-1 and NOS2, molecules that are upregulated by IL-1β and TNF-α (78,79). Taken together, Müller glia activated by CD40 secrete extracellular ATP and drive P2X 7 receptor-dependent pro-inflammatory cytokine expression in bystander microglia/macrophages in vitro and in vivo (Figure 1 and Table 1). These findings support a model whereby CD40 engagement in non-hematopoietic cells triggers inflammatory
CD40 IN MÜLLER GLIA AND PROGRAMMED CELL DEATH OF BYSTANDER RETINAL ENDOTHELIAL CELLS
Retinal endothelial cells undergo programmed cell death (PCD) in the diabetic retina (32,(84)(85)(86). This process would contribute to the development of capillary degeneration, a central feature of early diabetic retinopathy (32). CD40 is necessary for the development of capillary degeneration (6,7) and yet, ligation of CD40 in endothelial cells does not induce PCD likely because CD40 typically triggers pro-survival signals (87). This raised the possibility of CD40 promoting death of retinal endothelial cells by acting through other cells of the retina. Müller cells were a likely culprit since they encircle retinal endothelial cells.
Whereas direct CD40 ligation in retinal endothelial cells does not cause PCD, CD40 stimulation enhances PCD of endothelial cells when they are incubated with CD40 + Müller cells (8). This effect is not driven by NOS2, oxidative stress, TNF-α, IL-1β, or Fas ligand (8). As described above, CD40 ligation in Müller glia increases release of ATP. CD40 ligation in retinal endothelial cells upregulates P2X 7 receptor expression making these cells susceptible to ATP-induced PCD (8). Indeed, pharmacologic inhibition of P2X 7 receptor prevents PCD of the endothelial cells (8). These results are consistent with the ability of the P2X 7 to form trans-membrane pores that are permeable to hydrophilic molecules of up to 900 Da (88) and mediate cell death (89,90). The in vitro studies described above have an in vivo correlate since retinal P2X 7 mRNA levels and P2X 7 receptor expression in retinal endothelial cells are increased in diabetic mice in a CD40-dependent manner (8), CD40 appears to be necessary for PCD of retinal endothelial cells from diabetic mice (8), and CD40 is known to be required for retinal capillary degeneration (6,7). Taken together, CD40 has a dual role in promoting PCD of retinal endothelial cells: it causes release of extracellular ATP by Müller glia and makes retinal endothelial cells susceptible to P2X 7 -driven PCD (Figure 2 and Table 1). The latter effect may be explained by CD40-driven upregulation of the P2X 7 receptor in endothelial cells that would overcome the prosurvival signals activated by CD40 ligation. This mechanism may contribute to increased susceptibility to ATP-mediated PCD that appears to occur in diabetes (91). Other potential mechanisms by which CD40 increases susceptibility to P2X 7 receptor-mediated PCD may include modulation of ATP-gated channel expression, ectoATPase activity, and/or coupling to downstream cell signaling pathways that promote cell death. Finally, while CD40induced activation of ATP-P2X 7 receptor signaling mediates PCD of retinal endothelial cells, CD40 may also promote death of these cells and capillary degeneration through mechanisms that include: enhancement of retinal leukostasis, upregulation of NOS2, TNF-α, and IL-1β in the retinas of diabetic mice (6,7), events linked to PCD of retinal endothelial cells and capillary degeneration (31,37,38,40,42,86).
In summary, the studies discussed here discovered the CD40-ATP-P2X 7 receptor pathway and revealed that this pathway links Frontiers in Immunology | www.frontiersin.org a macroglia to microglia/macrophages and endothelial cells for the induction of inflammatory responses and endothelial cell death, respectively. By enabling myeloid cells to secrete TNFα and IL-1β, this process would circumvent the poor capacity of CD40 to directly trigger secretion of these cytokines in nonhematopoietic cells, thus causing amplification of inflammation. These findings may be operative in I/R-induced retinopathy given its similarity to DR. The CD40-ATP-P2X 7 receptor pathway may also be relevant to neuro-inflammatory and neuro-degenerative brain disorders. For example, astrocytes acquire CD40 expression after incubation with IFN-γ (92), neural tissue injury or in a transgenic mouse model of amyotrophic lateral sclerosis (93), a disease driven by CD40. Thus, the CD40-ATP-P2X 7 receptor pathway may potentiate pro-inflammatory cytokine production by microglia further driving neuro-inflammation.
Finally, this pathway may be functional in other diseases driven by CD40 such as inflammatory bowel disease, atherosclerosis and lupus nephritis. CD40 present in non-hematopoietic cells of the intestine, blood vessels and kidney may induce release of ATP that would bind purinergic receptors present in infiltrating myeloid cells.
The existence of the CD40-ATP-P2X 7 receptor pathway may have therapeutic implications. Pre-clinical data revealed that administration of anti-CD154 mAb to inhibit CD40-CD154 signaling effectively controlled various inflammatory and neurodegenerative disorders (22,23). Unfortunately, anti-CD154 mAbs caused thromboembolic complications in humans that are unrelated to inhibition of CD40 (94). Targeting signaling pathways downstream of CD40 may represent an alternative approach to treat CD40-driven diseases. CD40 functions by recruiting TNF Receptor Associated Factors (TRAF) to its TRAF2,3 or TRAF6 binding sites (95). Blockade of CD40-TRAF2,3 signaling markedly impairs pro-inflammatory responses in non-hematopoietic cells (58,96). Blocking this signaling pathway may also inhibit pro-inflammatory responses in neighboring myeloid cells. Pharmacologic approaches to inhibit CD40-TRAF2,3 signaling (cell penetrating CD40-TRAF2,3 blocking peptide or small molecule CD40-TRAF2,3 inhibitor) may prove an effective approach to treat diabetic and ischemic retinopathies, and potentially other CD40-driven inflammatory disorders. Given that TRAF6 is critical for dendritic cell maturation and development (96,97), CD40mediated IL-12 production by dendritic cells (98) and induction of antimicrobial effector mechanisms in macrophages (99,100), pharmacologic inhibition of CD40-TRAF2,3 signaling would minimize the risk of opportunistic infections by leaving CD40-TRAF6 signaling intact.
AUTHOR CONTRIBUTIONS
The author confirms being the sole contributor of this work and has approved it for publication.
ACKNOWLEDGMENTS
The author thanks all the members of the Subauste lab for their feedback on this manuscript. | 2019-12-17T14:10:01.055Z | 2019-12-17T00:00:00.000 | {
"year": 2019,
"sha1": "4dd6e5b63bea0a75f177264dc6b1857d2f170cf4",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fimmu.2019.02958/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4dd6e5b63bea0a75f177264dc6b1857d2f170cf4",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
20258386 | pes2o/s2orc | v3-fos-license | Effect of Dose-Response of Zinc and Manganese on Siderophores Production
Problem statement: This study was conducted to find and determine whet her the siderophores of the four environmental Pseudomonas spp. isolates possess a sequestering activity towards essential transition metals (Zn and Mn) oth er than iron. Approach: Four fluorescent Pseudomonads isolated from various environments, we re characterized analytically (Isoelectric focusing), biologically (pyoverdine-mediated uptake ) and genetically (16S rDNA sequencing). By means of spectrophotometric measurements, it was po s ible to establish and compare the levels of pyoverdine production, in two different nutrient-po or media. Results: The strains were assigned, by sequencing, to P. fluorescens, P. aeruginosa, P. putida and P. mosselii isolated, respectively from soil, compost, sea water and waste water treatment plant. These bacterial strains were recognized as producing diver’s yellow-green siderophores types, when grown under conditions of iron starvation. The highest metabolite concentration was obtained w ith PsC132 and PsTp171 strains isolated respectively from compost and waste water treatment plant, in CAA medium. Strains grown in CAA medium exhibit a higher PVD level compared to SM me dium. Mn (II) was found to promote pyoverdine biosynthesis, but rather, Zn (II) had no significant effect on siderophore production when compared to control medium. For both strains PsS29 and PsC132, the increase of iron concentration quenched siderophore production especially above 20 μM. Pyoverdine level declined with the high concentration of zinc but increased with Manganese concentration ranging up to 70 μM (in cas e of PsC132) and 300 μM (in case of PsS29). Conclusion/Recommendations: The ability of fluorescent Pseudomonas, isolated from wastewater treatment plant and from c post, to sequester zinc, point to a unique advantage of these species for divers bior emediation applications.
INTRODUCTION
A number of transition metals are needed by bacteria as vital constituents, but their availability in the environment may not suffice to support microbial growth state (Ambrosi et al., 2002;Adarsh et al., 2007). Some authors found that production of pyoverdines contributes to the bio-control capacity of the fluorescent Pseudomonas. Conversely, other studies failed to establish a link between production of pyoverdine and antagonism against phytopathogenic fungi Kumar et al.,2008;Saidi et al., 2009;Yang et al., 2009. Pseudomonas spp. has been shown to produce siderophores able to chelate any available iron (Henry et al., 1991). Due to the critical need for iron in aerobic metabolism, bacteria living in neutral environments are normally faced to the nutritional iron deficit resulting from the low solubility of iron in its oxidized state (Winkelmann et al., 1987;Ambrosi et al., 2002). In order to satisfy their need to iron, microorganisms start to excrete large amounts of specific Fe 3+ scavenging molecules (siderophores), when cells are grown under iron deficiency (Braun and Braun, 2002). The Fe (III)siderophore complex is then transported into bacterial cell via cognate-specific receptor to enzymatic reduction Cornelis and Matthijs, 2002). Pyoverdine (PVD), the fluorescent siderophore produced by the rRNA groupI species of genus Pseudomonas, constitutes a large family of iron chelators (Wahyudi et al., 2011). This Yellow-green fluorescent pigment is composed of three structural parts: dihydroxyquinoline chromophore responsible of the fluorescence, a variable peptide part comprising 6 to 12 amino acids and a side chain, generally a dicarboxylic acid or a dicarboxylic amid Cornelis and Matthijs, 2002). As the peptide part interacts with specific cell surface receptors, pyoverdine type recognition allows for Pseudomonas strain classification (Meyer et al., 2002).
Although essential metals have important biological role, at high levels they can damage cell membranes, alter enzyme specificity, disrupt cellular functions, damage the DNA structure (Bruins et al., 2000;Canovas et al., 2003;Teitzel et al., 2006) and can reduce crop yields and soil fertility (Stuczynski et al., 2003).
The objectif of the research was to evaluate environmental Pseudomonas spp in siderophores sequestering activity towards essential transition metals (Zn and Mn) other than iron. The effect of pyoverdin production was studied in two metal poor mediums (succinate and casamino Acid). The dose-response effect of Fe (III), Zn (II) and Mn (II) was then tested on siderophore production.
Isolation of Pseudomonas fluorescent strains:
In this study the bacterial strains were isolated from various media: soil (PsS29), compost (PsC132), sea water (PsWs140) and waste water treatment plant (PsTp171).
Fluorescent pseudomonad colonies were isolated on King's B medium (Scharlau) and identified under UV light at 366 nm. Purified single colonies were further spread onto KB agar plates to obtain pure cultures. Stock cultures were made in Luria Bertani broth containing 50% (w/v) glycerol and stored at-80°C.
IEF analysis and PVD-mediated iron uptake:
The iron-poor liquid Casamino Acid (CAA) growth medium used for this study was composed as follows (per liter): 5 g of low-iron Bacto Casamino Acid (Difco), 1.54 g of K 2 HPO 4 · 3H 2 O and 0.25 g of MgSO 4 · 7H 2 O. CAA medium was mainly used for PVD-IEF analysis and PVD purification through the Amberlite XAD-4 (XAD) procedure as previously described by Meyer et al. (2002). The cultures were incubated on a rotary shaker (200 rpm) at 25°C. The model 111 mini-IEF cell from Bio-Rad was used. Casting of the gels (5% polyacrylamide containing 2% Bio-Lyte 3/10 ampholytes) and electric focusing were performed according to the manufacturer's recommendations. One-microliter samples of PVDs (aqueous XADpurified solutions (6.5 mg mL −1 )), or of culture supernatants (40-h CAA-grown culture supernatant concentrated 20-fold by lyophilisation) were used in this experiment. PVD bands in the gel were visualized under UV light at 365 nm and photographed immediately after focusing. Their respective isoelectric pH values (pHi values) were determined with "Easy win 32" program as described by Fuchs et al. (2001). This allowed assigning each band to the corresponding pHi value. A mixture of seven known pyoverdines bands (3.95, 4.6, 5.2, 7.25, 7.75, 8.8 and 9.2) was used as internal pHi standard (Meyer et al., 2002).
The PVD-mediated iron uptake analysis was conducted as previously described by Meyer et al. (2002). Iron-starved cells were then incubated in succinate medium under non-proliferating conditions in the presence of a label mix containing 59 Fe-PVD complex. Aliquots of the bacterial suspension were withdrawn at different time intervals and rapidly filtered on 0.45 µm porosity membrane. Cells remaining on the filters were thoroughly washed and their radioactivity, measuring the amount of label iron incorporated during the incubation time, was determined using a Gamma 4000 Beckman radioactivity counter. Control assays without bacteria were performed simultaneously to verify the complete solubility of labelled iron through PVD complexation.
The 16S rRNA gene PCR products were purified using the QIAquick Wizard PCR purification Kit (Promega, USA), according to manufacturer's instructions. Sequences of the PCR products obtained with Ps-for/Ps-rev primers were aligned and corrected manually with Chromas Pro (version 1.34). Similarity matrix of 16S rRNA gene sequences with closest neighbours and identification were achieved using RDP utilities (Ribosomal Database Project II: http://rdp.cme.msu.edu/html).
All these media were prepared with deionized water. To prevent siderophore interaction with other elements, glassware was cleaned in 6 M HCl and repeatedly rinsed with ultrapure water.
The culture broth was inoculated with actively grown culture (16 h in King's B medium) and grown in 60 mL of iron deficient succinate broth and CAA media at 28°C for 48 h under constant shaking of 150 rpm using an incubator shaker (ZHWY-2102 P). Samples of 1 mL were taken at 5, 10, 24, 36 and 48 h interval times respectively. Bacterial growth was estimated by spectrophotometry at 600 nm. The culture broth were then withdrawn and centrifuged at 10,000 rpm for 15 min at 4°C. Decimal dilutions of supernatants were done in deionised water. The amount of siderophores excreted into the culture medium was determined by spectrophotometry at 405 nm (Spectro UVS-2700 Dual BEAM LABOMED, INC) in 1 cm cells against media blank. Pyoverdin levels were expressed as the ratio of A405/A600 (Stintzi et al., 2000). Three repeated experiments were envisaged.
Effect of metal concentration on growth and pyoverdine production: Metal Salts were used in the following forms: ZnSO 4 · 7 H 2 O (iron content, <10 ppm); MnSO 4 · H 2 O (iron content, <0.001%) and FeSO4· 7 H 2 O (iron content, >99.9%). Stocks of 10mM ZnSo 4 and MnSo 4 salts were prepared and sterilized with 0.22 µm filters under aseptic condition. These stock solutions were incorporated in autoclaved CAA media (CAA + Zn and CAA + Mn) and SM media (SM + Zn and SM + Mn) at a final concentration of 60 µM for each metal.
In order to determine the threshold level of metals at which growth and/or siderophore biosynthesis are stimulated or repressed, PsS29 and PsC132 strains in CAA medium were monitored as a function of increasing amounts of Fe (III), Zn (II) and Mn (II) from 0.1-475 µM in 5 mL CAA medium.
Strain characterisation:
In an attempt to assign isolates to bacterial species, the strains referenced as PsS29, PsC132, PsWs140 and PsTp171 and selected in this investigation, showed a 16S rDNA sequence related to four Pseudomonas species, namely: P. fluorescens, P. aeruginosa, P. putida and P. mosselii, respectively. In order to investigate whether the four Pseudomonas species produce different pyoverdins, the siderotyping method was used. Table 1 illustrates the different PVD-IEF patterns upon analyzing the culture supernatants of the four strains grown under irondeficient conditions (CAA medium). PsS29 and PsTp171 were characterized by acidic PVD-IEF profiles, with bands ranging between pHi 4.0 and 5.1. PVD-IEF profiles from strain PsWs140 produced two main bands (pHi 8.9 and 7.3) and a minor band with pHi value of 8.5. The fourth strain (PsC132) showed two bands with pHi values of 8.5 and 6.9. (Table 1 demonstrate a schematic PVD pattern of the four tested strains). For the purpose to confirm the classification reached by PVD-IEF characterization, the four strains were analysed for their capacity to incorporate iron under the form of a PVD-iron complex. The strains PsS29, PsTp171, PsWs140 and PsC132 cross-reacted with their own PVDs and with the type strains PL9, LBSA1, G168 and PAO1, respectively (Table 1). As example of homologous and heterologous 59 Fe incorporation, the strain PsWs140 showed primordially, a strict specificity of recognition toward its own pyoverdine. Figure 1 showed that the strain PsWs140 exclusively incorporated iron bound to its pyoverdine with a 100% efficiency (PVD number 30) indeed to the pyoverdine of P. putida strain G168 with a 78% efficiency ( PVD number 19) (Data not shown for other strains). (Stintzi et al., 2000) Strains growth related to pigment synthesis: Spectrophotometric analysis of the un-diluted bacteria supernatant showed an absorption area between 350 and 450 nm with a sharp peak at about 400 nm (Fig. 2), characterizing the PVD siderophore type. The maximum absorbance obtained for the strain PsS29 was at 400 nm. The other strains saved the maximum absorbance between 405 and 410 nm. The determination of siderophore production by the strains used in this study allowed their separation in three types: PsC132 produced the highest siderophore concentration; followed by strains PsTp171 and PsWs140. The least siderophore production was obtained by PsS29 (Fig. 2a). The cells grown on the CAA medium presented siderophore content nearly 2.5fold higher as compared to cells developed in SM medium. No PVD production was detectable for strain the PsS29 in SM medium (Fig. 2b).
Strains grown in CAA medium (Fig. 4a) exhibit a higher PVD level ranging from 1.98-fold (PsC132) to 15.26-fold (PsWs140) compared to SM medium (Fig. 4b). So, this is due to a significantly higher purity of the CAA medium (lower iron contamination).
Considering the PVD levels related to growth cycle (Fig. 4) the siderophore released by the studied strains in CAA and SM media started after 5h of incubation and increased up to 36-48h then declined afterward. In SM medium, apart PsS29 enable to produce PVD, PsWs140 and PsTp171 strains showed no significant increase in the level of PVD during incubation time (Fig. 4b). The strains were found to produce maximum siderophore quantity during the stationary phase of culture growth (Sharma and Johri, 2003). A 405 /A 600 ratio expressed the pyoverdine level (Stintzi et al., 2000) While studying the influence of heavy metals, it was observed that the presence of Mn 2+ in the extracellular medium (CAA and SM) promoted significantly PVD production (Braud et al., 2009;Sharma and Johri, 2003). After 48h of growth, PsC132 present a slightly increase in siderophore level in CAA and SM media enriched with Mn supplementation (respectively 1.03-fold and 1.06-fold increase). The presence of 60 µM Mn 2+ increased PVD production of PsTp171 strain by nearly 1.2-fold in CAA medium ( Fig. 3 and 4a) and nearly 1.37-fold in SM medium (Fig. 4b). However, supplementing CAA and SM media with exogenous Zn2+, in case of PsTp171 strain, caused a decrease in siderophore levels (respectively 1.05-fold and 1.91-fold decrease) as compared to the control (CAA and SM without metal supplementation) ( Fig. 3-5).
DISCUSSION
The identification based on the 16S rDNA sequencing was reconfirmed by using the siderotyping, an easy and powerful method, giving a rapid discrimination between fluorescent pseudomonads producing a particular pyoverdin (Meyer et al., 2002). Therefore, the four studied strains produced compounds belonging to four different siderovars indicating the presence of various pyoverdin structures. The 59Fe incorporation technique confirms this data. The detection of different siderovars confirms the diversity of the four strains used.
The determination of the absorbance of the clear supernatants obtained from all cultures developed for 48 h in CAA and SM medium was performed to determine whether the maximum absorbance was at 400 nm. As shown in Fig. 2, the maximum absorbance varied from 400 to 410 nm which may indicate the diversity of compounds produced by these strains. This multiplicity may be due to the nature and the number of the aminoacyl residues in the peptide moiety (Carrillo-Castaneda et al., 2005).
Moreover, the level of siderophore production was compared in the two medium. The pyoverdine production was greater in CAA media. Numerous investigations have shown that the synthesis of PVDs by fluorescent pseudomonads was affected by different environmental factors, notably the chemical nature of the organic carbon and energy source, the degree of aeration of the growth medium, pH, light and trace elements (Gouda and Greppin, 1965;Meyer et al., 1978). Although the different media usually have varied levels of iron contamination, Sharma and Johri (2003) suggested that synthetic media are in all cases better than the complex medium for siderophore production.
Moreover, Carrillo-Castaneda et al. (2005) demonstrated that iron concentration in the growth medium is an important nutritional factor which determines siderophore biosynthesis.
Strains PsTp171 and PsC132, producing the largest amounts of pyoverdine are isolated respectively from wastewater treatment plant and from compost. Several authors noted the important role of strains isolated from these origins in bioaccumulation of heavy metals. Isolates from complex sources were faced frequently to compete towards micronutrients such as heavy metals via siderophore production (Lovley et al., 1997;Hassen et al., 2001). Hussein et al. (2005) noted that a group of Pseudomonas sp. isolated from the effluent of wastewater treatment plant in western Alexandria, possess the ability to tolerate and to uptake different heavy metals (Cu (II), Ni (II)).
Metal ions have definite influence on siderophore production. While Mn 2+ increased the siderophore production, Zn 2+ decreased this production. This result is consistent with the work of Sayyed et al. (2005), who carried out a notably decrease in the amount of siderophore produced in the case of SM media supplemented with Zn 2+ . Authors reported that the metal ion can substitute Fe 2+ in the intracellular control of siderophoregenesis. In addition, Braud et al. (2010) indicated that besides pyochelin, PVD is able to sequester metals from the extracellular medium of the bacteria, decreasing metal diffusion into the bacteria. In their study, PVD was able to sequester Al 3+ , Co 2+ , Cu 2+ , Eu 3+ , Ni 2+ , Pb 2+ , Tb 3+ and Zn 2+ from the extracellular medium. Baysse et al. (2000) noted that the repression of pyoverdin production by vanadium and explained that uptake of several metals by siderophores was possible.
The dose-response effect of heavy metals on PVD level by the highest (PsC132) and the lowest (PsS29) siderophore producing strains was investigated using the CAA medium growth. Results show that pyoverdine production was inversely related to Fe 3+ concentration, whereas growth was directly proportional to iron concentration (Meyer et al., 1978;Visca et al., 1992;Villegas et al., 2002;Manwar et al., 2004). Djibaoui and Bensoltane (2005) observed a complete decline of siderophore production with 200 µg/L of iron as threshold level. This fact reflects the iron requirement for microbiological cellular processes (Sayyed et al., 2005).
On the other hand, concentrations higher then 250 µM of zinc declined the siderophore production. Visca et al. (1992) demonstrated that the pyoverdine production was not affected by metals (Zn(II), Mo(VI), Co(II), Ni(II) and Cu(II)) at concentrations up to 10 µM but was repressed at higher concentrations. Therefore, PsC.132 and PsS.29 were highly tolerant to Mn since their pyoverdine production was completely repressed only by high Mn concentrations (> 475 µM).
CONCLUSION
This study revealed by means of the spectrophotometry measurement, classification of environmental strains by levels of siderophore production. Both fluorescent Pseudomonads, P. aeruginosa and P. mosselii were able to provide higher yields of PVDs especially in CAA medium and in media supplemented with Mn 2+ . Moreover, at concentration >20 mM of iron, siderophore biosynthesis is completely quenched.
In culture medium supplemented with zinc, the amount of PVD is lower than that excreted in control medium (no added metal). This result indicates that pyoverdins of studied strains might be able to complex zinc instead of iron. These strains would have the ability to chelae one of essential component and make it inaccessible to other bacteria (competition phenomenon). In addition, this property could be used in the case of decontamination area contaminated with an excess of zinc. Bioremediation using bioaccumulation and chelating of heavy metals contaminating industrial waste may be an alternative processes and/or additives to conventional methods (physical and chemical). Therefore, the ability of fluorescent Pseudomonas, isolated from wastewater treatment plant and from compost, to sequester zinc, point to a unique advantage of these species for divers bioremediation applications. | 2019-03-19T13:13:57.053Z | 2012-03-10T00:00:00.000 | {
"year": 2012,
"sha1": "e89018a26e3ae7f53e31a11deb9a8631950e24fb",
"oa_license": "CCBY",
"oa_url": "http://thescipub.com/pdf/10.3844/ajessp.2012.143.151",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "15bae174ba44df46472a71d47891a5d0b3a9f5d1",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
204836762 | pes2o/s2orc | v3-fos-license | Evaluation of novel cationic gene based liposomes with cyclodextrin prepared by thin film hydration and microfluidic systems
In gene delivery, non-viral vectors have become the preferred carrier system for DNA delivery. They can overcome major viral issues such as immunogenicity and mutagenicity. Cationic lipid-mediated gene transfer is one of the most commonly used non-viral vectors, which have been shown to be a safe and effective carrier. However, their use in gene delivery often exhibits low transfection efficiency and stability. The aim of this study was to examine the effectiveness of novel non-viral gene delivery systems. This study has investigated the encapsulation and transfection efficiency of cationic liposomes prepared from DOTAP and carboxymethyl-β-cyclodextrin (CD). The encapsulation efficiency of the CD-lipoplex complexes were also studied with and without the addition of Pluronic-F127, using both microfluidic and thin film hydration methods. In vitro transfection efficiencies of these complexes were determined in COS7 and SH-SY5Y cell lines. Formulation stability was evaluated using liposomes size, zeta potential and polydispersity index. In addition, the external morphology was studied using transmission electron microcopy (TEM). Results revealed that formulations produced by microfluidic method had smaller, more uniform and homogenious size and zeta-potential as well as higher encapsulation efficiency when compared with liposomes manufactured by thin film hydration method. Overall, the results of this study show that carboxymethyl-β-cyclodextrin increased lipoplexes’ encapsulation efficiency using both NanoAssemblr and rotary evaporator manufacturing processes. However, this increase was reduced slightly following the addition of Pluronic-F127. The addition of carboxymethyl-β-cyclodextrin to cationic liposomes resulted in an increase in transfection efficiency in mammalian cell lines. However, this increase appeared to be cell line specific, COS7 showed higher transfection efficiency compared to SH-SY5Y.
Liposomes are a form of spherical vesicles that consist of either one or many phospholipid bilayers that interacting in an energetically favourable way. Generally, liposomes are vesicles of self-assembled phospholipid molecules. Those molecules composed of hydrophilic head groups (typically tertiary or quaternary amino group) attached to hydrophobic tails (generally long-chain fatty acids) by a linker [1][2][3] . The amphiphilic nature of these lipid molecules causes them to form bilayers spontaneously in aqueous environments. This results in a small spherical structure in which the surface polar heads shield the non-polar interior against water. The positively charged amine groups enhance binding with negative groups in DNA for example 4 . Moreover, liposomes are attractive as a gene vector due to their ability to carry DNA to various target cells. In addition, liposome formulations have been established to be a safe carrier, with such formulations being used worldwide in different therapeutic and vaccinology products. Liposomes have also been used as a drug carrier to control drug delivery to protect the drug payload from rapid degradation, to enhance drug concentration in targeted tissues and to lower doses of the required drug and hence lowering toxicity. The versatile structure and low immunogenicity of liposomes have been shown to be a promising gene transfer system. Liposomes can entrap different molecules such as nucleic acids and may even protect DNA against enzymatic degradation within the cell. Liposomes also can enhance cellular uptake,
Results and Discussion
DNA condensation is a prerequisite for successful gene delivery. Carboxymethyl-β-cyclodextrin has shown to be effective in gene delivery and our previous study 14 demonstrated that carboxymethyl-β-cyclodextrin has the ability to condense DNA at a molar ratio of 1:3 with 22% encapsulation efficiency. To investigate this further carboxymethyl-β-cyclodextrin (CD) was incorporated with cationic lipid (DOTAP), netural lipid (DOPE) and cholesterol to form liposomes to study the effect on gene cationic liposomes transfection and to evaluate the degree of gene encapsulation efficiency.
Liposomes size, zeta potential, poly dispersity (pdi) and morphology. The disappearance of liposomes from blood circulation is primarily due to uptake of the liposomes by the mononuclear phagocytic system. A decrease in liposome size reduces complement recognition when the liposome size is between 70 and 200 nm 15,16 . Several articles published recently have also suggested that the particle size of gene delivery system should not exceed 150 nm. A recent study reported that 135 nm is the optimal size for gene delivery. This current study looked at the effect of liposome preparation methods and liposomes composition on liposome size. Microfluidic method using the NanoAssemblr TM and the hydration method using the rotary evaporator were used to study these methods. In order to optimise results of this study, the NanoAssemblr was run at 12 ml/min, 9 ml/min, 5 ml/min and 2 ml/min and ratio at 1:0.5, 1:1,1:3 and 1:5 of organic:aqueous. During optimisation a significant advantage was observed with the use of NanoAssemblr over the rotary evaporator, this was due to it's the NanoAssemblr's ability to control liposomes size using flow rate, TFR, and flow rate ratio(FRR). It was observed that liposome size was changed when changing from 12 ml/min to 2 ml/min and when changing flow ratio from 1:1 to 1:5 (data not included). Statistical analysis of TFR and FRR results show that the reduction in liposomes size resulted from the change in TFR was not significantly different (p > 0.05) and the change in FRR was only significant between ratio 1:1 and 1:3 or 1:1 and 1:5 and there is no significant difference in between ratio 1:3 and 1:5. The rationale behind choosing ratio 1:3 over 1:1, at 1:3 liposomes will have less cationic lipid, and since high cationic lipid is associated with an increase in cell toxicity 1:3 ratio would be the better choice.This gives a control of liposomes size and hence no need for after production size reduction. However, zeta potential was only affected by change in flow ratio but not by total flow rate. These results confirmed that the nanoAssemblr at flow rate of 2 ml/min and at ratio of 1:3 lipid to aqueous, was able to produce size of less than 165 nm, with homogeneous size distribution. Therefore, these parameters were used to prepare liposomes for further studies such as gene encapsulation efficiency and cell transfection. Also, the NanoAssemblr had the added benefit of a a one-step preparation procedure and also had a shorter preparation time.
The sizes of liposomes prepared by the NanoAssemblr and the rotary evaporator can be seen in Table 1. Liposomes prepared by NanoAssemblr sizes were between 79.51 nm and 161 nm and between 109 nm and 294 nm for the rotary evaporator. These results are consistent with other studies [17][18][19][20] that outline that the rotary evaporator method needs to be followed by ∼20 min sonication, yet size still between 109 and 294 nm. This study has also shown that liposomes composition has major effect on liposomes size.
The addition of carboxymethyl-β-cyclodextrin or carboxymethyl-β-cyclodextrin and Pluronic F-127 (Table 1) resulted in an increase in liposomes size. This increase in size could be due to cyclodextrin displacing cholesterol in the liposome formulation, which decreases liposome rigidity and increase its size 21 . Results in Table 1 have also www.nature.com/scientificreports www.nature.com/scientificreports/ shown an increase in size of liposomes (F1, F3, F5, F7) when DNA is incorporated (lipoplexes, F2, F4, F6, F8), which is in agreement with other published research 22 .
The Polydispersity Index (PDI) or particle distribution has a significant impact on liposomes stability and bioavailability. For liposomes to be stable, safe, and efficient liposomes preparation must be homogenous. An acceptable liposomes formulation for drug delivery should have PDI value below 0.3 23,24 . The liposomes prepared by the NanoAssemblr met this criteria, it was observed that many liposomes prepared by the rotary evaporator method were above this value (Table 1).
A positive zeta-potential not only has the benefit for enhancing pDNA loading efficiency but also for improving the effective accumulation in the target cells. A positive value will also impact liposomes stability, many studies have shown that ζ-potential values that range between +16 and +55 mV are high enough to ensure colloidal stability due to the electrostatic repulsion between particles.
Zeta potential(ζ) results (Table 1) revealed that all formulations were positively charged, before and after the addition of pDNA. Zeta potential values of lipoplexes were similar to the values obtained from liposomes alone, however there was a reduction in the zeta potential following the addition of pDNA. This could be a result of the electrostatic interaction between the cationic lipid and the negatively charged backbone of the pDNA 21 . Different zeta potential reading was reported for formulations prepared by each method (NanoAssemblr or rotary evaporator). NanoAssemblr showed lower zeta potential between 24.5 mv and 44.1 mv, whereas the rotary evaporator showed higher zeta value between 45 mv and 56 mv (Table 1). This can be explained by formulation homogenous suspensions achieved by the NanoAssemblr resulting in majority of the amino group been occupied which induce protonation on the surface of liposomes.
Transmission Electron Microscopy (TEM) was used to investigate the external of the liposomes morphology. Morphology of fresh liposomes prepared by NanoAssemblr and rotary evaporator are illustrated in Figs 1 and 2. TEM images were taken at the same magnification of 40000x for fresh liposomes with and without cyclodextrin. All images showed clear spherical shape of liposomes, with unilamellar as well as very small number of multilamellar liposomes observed in some samples. No change in liposomes morphology was observed after the addition of cyclodextrin.
Encapsulation efficiency. Previous studies have shown that the addition of β-cyclodextrin to liposomes has resulted in increase of encapsulation efficiency of different small drug molecules (e.g. 25,26 ). However, the role of cyclodextrin-liposome complex in gene therapy has not been characterised.
The encapsulation efficiency of pDNA was investigated in this current study by incorporating carboxymethyl-β-cyclodextrin with cationic liposomes. The percentage of encapsulation efficiency was calculated using Eq. 1. Results displayed in Table 2 revealed that all liposomes formulations prepared by both NanoAssemblr and rotary evaporator methods were able to encapsulate pDNA up to 89.6%.
The addition of carboxymethyl-β-cyclodextrin to cationic lipoplexes, F4, has resulted in increase in the encapsulation efficiency by 15% and 9% using the nanoAssemblr and rotary evaporator respectively compared to lipoplexes without (F2), ( Table 2). This increase can be explained by the attraction of the electrostatic binding between the phosphate group of the DNA backbone and the lipophilic inner cavity of the www.nature.com/scientificreports www.nature.com/scientificreports/ cyclodextrin and the amino group in cationic liposomes. The increase in encapsulation efficiency after the addition of carboxymethyl-β-cyclodextrin to cationic liposomes in gene delivery has not been reported previously. Samples containing cyclodextrin and pluronic F-127 (F6) showed lower encapsulation efficiency than formulation F4, but still slightly better compared with formulation F2. This can be explained by that the addition of Pluronic F-127 would compete with pDNA for the cavity of cyclodextrin, resulting in a reduction of plasmid DNA entrapment confirmed with Pluronic F-127 in F8 lipoplexes.
The effectiveness of the microfluidic method was recently outlined in the encapsulation of pDNA for the transfection of COS 7 cells 20 . However, there was no direct comparison between the two methods. Thus, the results of this study allowed direct comparison and outlines an advantage of liposome preparation using microfluidic method rather than thin film hydration process. The improvement in the encapsulation efficiency using microfluidic method could be a result of the production of homogeneous liposomes formulations, and also to the presence of ethanol in liposomes formulations that can also enhance the encapsulation efficiency by making the lipid membrane susceptible to structural rearrangements 27 .
Gel electrophoresis. DNA migrates through an agarose gel matrix by the action of an electric field according to its charge, size, and morphology. pDNA must survive in either supercoiled or open circular form in order to retain optimal gene expression, detection of double strand DNA is not enough to determine if the pDNA still in its active form. Liposomes and liposplexes were prepared as outlined in the methods section. All formulations were centrifuged for 45 min then re-suspended in distilled water. The use of centrifugation and DNase I will be an indication of where pDNA condensed inside the liposomes or on the outside of the liposome vesicles. Chloroform/methanol 2:1 was used in order to break the liposomes shell and release any trapped DNA, this can www.nature.com/scientificreports www.nature.com/scientificreports/ be used as an indication of the quantity of pDNA inside the liposomes. Agarose gel electrophoresis of cationic lipid:DNA complexes was subsequently used to assess the relative amounts of DNA either free or incorporated into the lipid:DNA complex. Table 3. Different liposome formulations prepared by both Rotary evaporator (thin film hydration method) and NanoAssemblr (microfluidic system).
Figures 3 and 4
show that all formulation managed to condense DNA (lanes 3-6) with no DNA migration, as a result of binding and neutralisation of DNA by cationic liposomes. Lanes 9-12 represent lipoplex formulations after liposome shell disruption with chloroform/methanol and subsequent release the pDNA, hence DNA migration seen in lanes 9-12 is as a result of DNA being released from the liposomes following liposome shell disruption.
In In order to assess DNA protection and its degradation in the presence of DNase I, using agarose gel electrophoresis of plasmid DNA as a qualitative measure of DNA stability. As shown in Figs 5 and 6 control plasmid DNA was digested by DNase I (lane 2). However all lipoplex formulations demonstrated protection from DNase I, lanes 4-7. After breaking lipoplexes shell and adding DNase I (see lanes [8][9][10][11], DNase I was able to digest the condensed pDNA. In replace the viral vector. The features of liposomes are strictly related to the chemical properties of the cationic and neutral lipids used for their preparations. It is well established that the transfection efficiency can be affected by the composition of the transfection reagent, liposomes size and zeta potential 15,28 as well as liposomes to DNA ratio 29 . In this study each formulation was tested at four different ratios of DNA: liposomes 1:2, 1:5, 1:10 and 1:20 formulations, this identified the most suitable ratio (1:5) to be used with the aim to reduce any possible toxic www.nature.com/scientificreports www.nature.com/scientificreports/ effect from liposomal compositions. Multiple, independent (n = 3) experiments for each condition to control for biological and methodological variations were used.
The NanoAssemblr method gave the highest encapsulation efficiency, produced homogenised lipoplexes size of about 160 nm, and found to be more practical. Hence, nanoAssemblr produced liposomes were used for transfection optimiation of DNA to liposome ratio. Figure 7 shows, the change in liposomes (empty liposomes) size after the addition of pDNA (lipoplexes) at different ratios. Lipoplexes' size was seen to increase compared to liposomes alone and this is in agreement with many studies. Figure 8 illustrates the zeta potential of liposomes following addition of pDNA to form lipoplexes. In all four different ratios (1:2, 1:5, 1:10 and 1:20 DNA:liposome), at low ratio of 1:2 mostly all formulations became negatively altered. This is due to part neutralisation of the positively charged liposomes by pDNA molecules, which are carrying negative charges. The ratios of 1:5, 1:10 and 1:20 did not alter the original zeta potential of the plain liposomes. Their positively zeta-potential values will not only improve the stablilty of liposomal suspensions by the effect of repulsion but also will enhance the cell uptake, as the cell surface is negatively charged.
Results in Fig. 8 show that zeta potential results were consistent with those of transfection activity (Figs 9 and 10). At low ratio 1:2 there was very low transfection efficiency, this is potentially as a result of the presence of less positively charged lipoplexes. In higher ratios the increased zeta-potential resulted in increased transfection efficiency. Moreover, after ratio of 1:10 (DNA:liposomes) there was increased transfection efficiency with increasing the ratio, as zeta potential became stable (Fig. 8). These results suggested that zeta potential of cationic liposomes had an imporatnat effect on the gene transfection. These results were consistent iwht the findings of Farrow et al. 30 and Wasungu et al. 31 who observed that a positive zeta potential is key in cell transfection.
pDNA must be released into the cytoplasm and transported into the nucleus where transcription takes place. Liposomes formulations with four ratios of DNA:liposomes were tested on two different cell lines COS 7 and SH-SY5Y. Figures 9 and 10 revealed that all four formulations with ratio of 1:2 had the lowest transfection efficiency. As outlined previously this can be related to the zeta potential, where ratio 1:2 had the lowest zeta potential. At ratio of 1:5 transfection efficiency was optimal and increasing the ration did not lead to increased transfection efficiency. Kim et al. 32 stated that a specific cell lines can favor certain lipid compositions, using different ratio of DOTAP:DOPE can help to achieve optimal conditions in gene delivery. It was shown that differences in uptake pathways could affect the intracellular fate of complexes, potentially contributing to the differences in transfection efficiency and this difference is related to the differences in membrane structure. This study has shown that all formulations resulted in higher transfection efficiency in COS 7 than in SH-SY5Y cell lines. This may be explained by Clathrin-dependent endocytosis that accounts for the majority of internalised complexes that penetrated COS7 cells and its limits to particles under 200 nm and hence all prepared formulations are within size of less than 200 nm. www.nature.com/scientificreports www.nature.com/scientificreports/ The expression of GFP following the transfection of COS7 and SH-SY5Y cell lines using different lipoplex formulations was assessed with fluorescence microscopy (Figs 11 and 12), compared with commercially available liposomes and quantified by flow cytometry. The highest level of GFP expression was noticed observed after the addition of carboxymethyl -β-cyclodextrin to cationic liposomes (DOTAP, DOPE and cholesterol, F4). The increase in cell transfection after the addition of cyclodextrin could be as a result of reduction in zeta potential which resulted in reduction of aggregation of cationic lipid with protein present in media. Zidovetzki and Levitan 33 explained that the increase in transfection in the presence of cyclodextrin is assigned to the hydrophobic cavity of cyslodextrin which has the ability to attract cholesterol from the cell membrane. Carboxymethyl -β-cyclodextrin donates cholesterol to cells and by itself causes the efflux of cholesterol from cell membranes, resulting in modulation of the fluidity/rigidity and permeability of the cell membrane. Moreover, the addition of cyclodextrin to cationic lipid has also resulted in an increase of pDNA encapsulation efficiency which might be www.nature.com/scientificreports www.nature.com/scientificreports/ a reason behind the increase in transfection efficiency. The addition of Pluronic F-127 to cationic liposomes (F6, F8) also improved transfection efficiency over F2.
Based on the above results ratio 1:5 pDNA:liposome was used to compare the transfection efficiency of the lipoplexes prepared by NanoAssemblr and Rotary Evaporator. Accordingly, results in Figs 13 and 14 have shown that the use of microfluidic system dramatically improved transfection efficiency over thin film hydration method. These results can be explained by that the nanoAssemblr has shown to increase the pDNA encapsulation efficiency and produced smaller and homogenise particle size this was supported by other studies 20,34 . This showed that liposomes with smaller and homogenise size will result in higher transfection efficiency. cell viability. The use of nanoparticles such as liposomes in drug delivery has resulted in reduction of unwanted adverse effect, such as the use of liposomes to deliver doxorubicin has resulted in reduction of cardiac toxicity 35 . In gene delivery, cationic liposomes are the cause of the toxicity. However, in this study DOPA was used in the manufacture of liposomes to reduce their toxicity.
Cationic Liposomes toxicity is mainly due to the positive charge 36 . The head group comprises of primary, secondary, tertiary amines or quaternary ammonium, these positively charged head group may interact with negatively charged components in the cells.
This interaction results on promoting inflammation, cytotoxicity and genotoxicity. www.nature.com/scientificreports www.nature.com/scientificreports/ It was reported that the cationic head group within liposomes, and not liposomes themselves, can lead to production of reactive oxygen in lung cells of mice and thus initiating inflammation and toxicity 37 . Due to binding to proteins such as apolipoproteins and immunoglobulins, charged cationic liposomes are easily recognised by reticuloendothelial cells 35 . Many studies have demonstrated that shieling the positively charge will result in reduction of cationic liposomes toxicity. Results shown in Figs 15 and 16. Ratios 1:10 and 1:20 which have the higher liposomes ratio have resulted in higher cell toxicity due to higher lipid contents; the cell viability was independent on the addition of CD and PL based on this study. These results therefore are consistnet with the theory of cationic liposomes in that increasing the lipid concentration will lead to increasing liposomes cytotoxicity 38 . Results in Figs 17 and 18 showed that all formulations prepared by microfluidic and thin film hydration methods were stable at 4 °C, as there was insignificant change in particle size, (p > 0.05). This was in agreement with a study by Zuidam and Crommelin 39 .
At 37 °C, all formulations have increased in size (Figs 19 and 20). However, liposomes prepared using microfluidic method showed much smaller change in particle size compared with those prepared by the thin film hydration method. This can be explained by the fact that NanoAssemblr produced smaller and more homogenised size distribution when compare to the thin film hydration method. These results were in line with that by Kastner et al. 40 who compared the stability of liposomes (Egg phosphatidylcholine (PC) and cholesterol) prepared www.nature.com/scientificreports www.nature.com/scientificreports/ using microfluidic and lipid hydration techniques. They reported that liposomes were stable under 4 °C for 60 days. However at 40 °C liposomes lost their structures and contents; also liposomes prepared by microfluidic method were much more stable than liposomes prepared by lipid hydration method. Hence, it is recommended that liposomal colloidal suspensions should be stored at low temperatures, 4 °C or less.
conclusion
In conclusion, the incorporation of carboxymethyl -β-cyclodextrin with cationic lipid has shown to improve encapsulation efficiency of pDNA, as well as transfection efficiency and cell viability, with and without the addition of Pluronic F-127. The NanoAssembler method showed to produce homogenies size, low PDI and increased the pDNA encapsulation efficiency. It also produced smaller liposomes with uniform profiles which are less succeptible to aggregation. Moreover, this work has demonstrated the use of microfluidic hydrodynamic flow focusing (HFF) method and its advantage over the rotary evaporator (thin film hydration method), which HFF has the ability to control size in one step. Change in size during storage at 4°C for liposomes prepared by microfludic Figure 18. The change in liposomes' size over 12 weeks storage at 4 °C. Liposomes were prepared using microfluidic method, for formulations' compostion refer to Table 3 (in experimental section). Figure 19. The change in liposomes' size over 12 weeks storage at 37 °C. Liposomes were prepared using thin film hydration (TFH) method, for formulations' compostion refer to plasmid DnA preparation. In this study, pDNA expressing green fluorescent protein (pc DNA3.1-GFP) was amplified by transformation of E. coli to produce large quantity of the plasmid. Cells were plated into the ampicillin containing agar plates and stored at 37 °C overnight. One colony was picked from the plate and placed into 100 mls of LB (Luria-Bertaini) medium and left for 48 hours on the shaker. Following this, the medium was purified using a Maxiprep kit. following manufacture protocol (Invitrogen, UK). Purity and quantity of the plasmid were checked using NanoDrop lite (themo, UK) purity was 1.9 and quantity was diluted to produce 1 μg/μl using TE buffer. This was also confirmed by taking UV measurement at 260 nm and 280 nm wavelengths.
Liposomes preparation by thin film hydration method. DOTAP, DOPE and cholesterol with a molar ratio of 8:8:2 (Table 3) were dissolved in round flask glass, with 2 mls of ethanol. The solvents were evaporated over two hours at 60 °C using the rotary evaporate pressure set at 465 mbar. Liquid nitrogen applied to dry any left-over solvent. Pluronic F127 and Carboxymethyl-β-cyclodextrin were dissolved in distilled water at concentration of 4 mg/ml.
The lipid was then rehydrated using an aqueous medium (distilled water or carboxymethyl-β-cyclodextrin in distilled water or Pluronic F127 or carboxymethyl-β-cyclodextrin in distilled water or Pluronic in distilled water) to produce final lipid concentration of 10 mg/ml (see Table 3 for more details). The mixture was then vortexed for 2 min and ultrasonic bath sonication for 20 minutes to produce plain liposomes. Lipoplexes (liposomes with pDNA) were prepared by adding the required amount of pDNA (at a concentration of 1 mg/ml) to 1 ml of each liposome formulation (at a lipid concentration of 1 mg/ml). For example to prepare lipoplexes with the used ratio, 1:5 ratio, of pDNA:Liposome, 200 microlitre of pDNAwas added to 1000 microlitre of the prepared liposome.
Liposomes preparation by microfluidic method. DOTAP, DOPE and cholesterol were dissolved in 1 ml ethanol with a molar ratio of 8:8:2 (see Table 3), this ration has been chosen, based on preliminary studies, as it gave a good transfection efficiency. The ethanol-lipid solution was injected into the first inlet. The aqueous phase was injected with 3 ml of distilled water contained carboxymethyl-β-cyclodextrin; Pluronic F-127 and carboxymethyl-β-cyclodextrin; or Pluronic F-127 alone (Table 3). Aqueous dispersions of the liposomes were collected from the outlet, resulting from the mixing of two adjacent streams and centrifuged at 13000 rpm for 40 minutes to remove the ethanol resides. Then, re-suspended in distilled water to make up a concentration of 10 mg/ml.
The formed liposomes were used to prepare the lipoplexes (1:5 ratio of pDNA:Liposome) as above.
In order to optimise liposomes size and zeta-potential, the NanoAssemblr was run at different|: flow rate ratio (FRR) between the lipid and water (at 1:0.5, 1:1,1:3 and 1:5) and the total flow rate (TFR), at 12 ml/min, 9 ml/min, 5 ml/min and 2 ml/min. particle size, zeta potential, polydispersity and transmission electron microscopy. Liposomes size and zeta potential are important characteristics of liposomes formulations especially in gene delivery. Negatively charged particles can be rapidly opsonised and massively cleared by fixed macrophages of the reticuloendothelial system (RES) in the blood stream. Dynamic light scattering (DLS) technique was used to report the intensity mean diameter (z-average) and the polydispersity (PDI) of all liposome formulations (Malvern Zetasizer Nano-ZS (Malvern Instruments, Worcs., UK)). DLS measures the size of liposomes suspended in distilled water. Transmission electron microscopy was applied to study the morphology of the prepared liposomes. www.nature.com/scientificreports www.nature.com/scientificreports/ Encapsulation efficiency. Encapsulation efficiency of pDNA was measured using the NanoDrop lite. 30 μl of pDNA at concentration of 1 μg/μl was added to 60 μl of liposomes at concentration of 10 mg/ml to give ratio of 1:20 DNA:Liposomes of each liposome preparation, vortex for 3 second and left at room temperature (20 °C) for 20 minutes. Samples were then centrifuged for 45 min at 13000 at 4 °C. The supernatant was separated from the pellets. The reading for pDNA concentration in the supernatant was taken and subtract from the total quantity of pDNA (see Eq. 1). The pellets were then broken by a chloroform/methanol 2:1 to extract the pDNA. 400 μl of chloroform/methanol 2:1 was added to the pellet and vortexed until all lipid dissolved. 100 μl of distilled water was added to the mixture and centrifuged for 10 minutes. The aqueous layer was used to quantify the encapsulated pDNA by measuring its UV absorption at 260 nm.
In order to confirm the results. Promega QuantiFluor ® ONE dsDNA System was used by following the manufacturer protocol. Briefly a serial of DNA concentrations was prepared from 25 ng/μl to 400 ng/μl, this will be used as standards. Using 96 plates, 200/μl of the QuantiFluor ® ONE dsDNA Dye was added to each well including standard and blank. 1 μl of each DNA dilution was added to the dye in triplicate. labelled Standards A-G. For the blank, 1 μl of 1X TE buffer were pipetted into row H in triplicate. 1 μl of each formulation (F2, F4, F6 and F8 samples were added to A,B,C and D rows in triplicate. Fluorescence was measured at (504nmEx/531nmEm), using TriStar LB 941 Microplate Reader (Berthold Technology). To calculate pDNA concentration, fluorescence of the blank sample (1X TE Buffer) was subtracted from each standard and sample. Using the data from DNA standards to generate a standard curve of fluorescence versus DNA concentration, concentration of DNA in each formulation was caluclated. Gel-electrophoresis: protection of pDnA against Dnase i. Lipoplexes were assessed against DNase I degradation as follows: 1. adding 2 units of DNase I to each intact lipoplex formulation containing 1 μg pDNA and 20 μg liposomes/100 μl buffer, to give ratio of 1:20 DNA:liposomes, at 37 °C for 30 min. 10 μL from each sample was run in gel electrophoresis to measure the protection that liposomes provide for pDNA. This can also be used to measure if pDNA has been encapsulated inside the liposomes, or is just bound to the liposome membrane.
2. Adding DNase I to the dissociated lipoplexes (20 μl of chloraphorm/methanol 2:1 were used in order to dissociate the lipid/DNA complexes before adding the DNase I). After 30 min, 10 μL from each sample was run in gel electrophoresis 3. 1 μg of pDNA was incubated with 2 units of DNase I in 100 μl buffer (this was used as a reference to compare with intact lipoplex formulations).
All samples run in 1% agarose gel for 60 min at 90 vm. Results were analysed by UV trans-illumiator with digital imaging (BioRad Laboratories, Inc). transfection efficiency. The efficiency of each liposome formulation was measured, by transfecting pDNA3.1-GFP to COS7 and SH-SY5Y cell lines. 24 hrs before transfection, 3 × 10 5 of both cell lines were plated in 6 well plates with 2.5 mls of DMEM, 10% fetal bovine serum (FCS) and 1% L-Glutamine, at 37 °C and 5% CO 2 . Cells were ≥70% confluence. Lipoplexes were prepared by diluting 2 μg of pDNA in 250 μl Opti-MEM I Reduced-Serum Medium. Although lipoplexes 1:5 pDNA:liposome was used for all characterisation but the selection of this ratio was based on testing the effect of different ratios on transfection efficiency. Hence, the required ratios of pDNA to liposomes (1:2,1:5, 1:10, 1:20) were prepared and incubated at room temperature 20 °C for 30 minutes. After incubation lipoplexes were added to each cell well drop-wise to different areas of the wells. Transfected cells were placed back in the incubator at 37 °C and 5% CO 2 , for 48 hours and then processed for flow cytometry, FACS and fluorescence microscopic analysis. TransIT-LT1 liposomal reagent was used as a positive reference and untreated cell as a negative reference. To quantify transfection efficiency, EGFP positive cells were measured using FACS, BD Accuri C6 plus (Becton Dickinson Bioscience, USA). Firstly, cells were washed with phosphate buffer saline (PBS) and in order to detach cells from plates 200 μl trypsin was added, and cells were incubated for 5 minutes. Cells were suspended in 1 ml of media and centrifuged at 400 G for 5 minutes, followed by removing the supernatant. Then celles were re-suspended in 1 ml PBS and taken to FACS for quantification of GFP. Negative Control samples (non-transfected cells) were displayed on a forward scatter (FSC) versus side scatter (SSC) dot plot to establish a collection gate and exclude cells debris. Cells transfected with TransIT-LT1 reagent, were used as a positive control sample. Transfection efficiency was expressed as the percentage of EGFP positive cells at 525 nm (FL1) after excluding dead cells. For each sample 10,000 events were collected. Each formulation was analysed in triplicate. cell viability. Cell viability was evaluated using Propidium iodide (PI) and 3-(4,5-Dimethylthiazol-2-Yl)-2,5-Diphenyltetrazolium Bromide (MTT) test. Different pDNA:liposomes were tried to see their effect on cell viability whether is due to plain liposomes' toxicity.
Propidium iodide (PI), this dye binds to double stranded DNA by intercalating between base pairs. PI is excited at 488 nm, and with a relatively large Stokes shift, emits at a maximum wavelength of 617 nm. 5 μL of PI was added to each sample including positive and negative control samples. The fluorescent signal corresponding to dead cells was measured at 650 nm (FL2).
Cell Proliferation Kit I (MTT) from Sigma-Aldrich, UK, was used: MTT 3-(4,5-dymethyl thiazol 2-y1)-2,5-diphenyl tetrazolium bromide (MTT, mitochondrial respiration analysis) following the manufacture protocol. Briefly, the (2019) 9:15120 | https://doi.org/10.1038/s41598-019-51065-4 www.nature.com/scientificreports www.nature.com/scientificreports/ assay is based on the cleavage of the tetrazolium salt MTT in the presence of an electron-coupling reagent to produce water-insoluble formazan salt 41 . The formazan dye is quantitated using a scanning multi-well spectrophotometer (TriStar LB 941 Multimode Microplate Reader, Berthold Technologies GmbH & Co). The measured absorbance directly correlates to the number of viable cells. Cells were seeded on 96-well plate (20,000 cells/well). 24 hrs after seeding, cells were treated with 1.25 μg of each plain liposome and lipoplex formulation and incubated again for a 48 hrs period. Then, 0.01 ml (from a final concentration of 0.5 mg/ml) MTT was added to each well. Then, after 4 hours of incubation at 37 °C, isopropanol with 0.04 N HCl was added. The isopropanol dissolves the formazan to give a homogeneous blue solution suitable for absorbance measurement. The absorbance of each well was measured at 570 nm using the Microplate Reader (TriStar LB 941, Berthold Technology GmbH & Co). Viability was calculated and expressed as a percentage of the positive control (i.e. untreated cells), Eq. 2: Absorbance of treated cell Absorbance of untreated cell 100 (2) Storage stability. Sizes of all prepared liposomes and lipoplexes were investigated in PBS aqueous medium (pH 7.4) for vesicles' stability at 4 °C (using a fridge) and at 37 °C (using an oven); samples were stored for 12 weeks. The samples were measured, weekly, in triplicate.
Statistical analysis. All measurements were replicated at least three times. The results were evaluated statistically with SPSS software. Univariate analysis of variance was used as statistical analysis. Levene test was used to test the sample has equal variances. Equal variances cross sample is called homogeneity of variance. Tukey test was used for normal distribution. The data are considered significant if P < 0.05. All data were stated as mean ± standard deviation. | 2019-10-23T15:24:32.297Z | 2019-10-22T00:00:00.000 | {
"year": 2019,
"sha1": "f1a0bd293a7504619e78d32304baf4bb2decc508",
"oa_license": "CCBY",
"oa_url": "https://www.nature.com/articles/s41598-019-51065-4.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "f1a0bd293a7504619e78d32304baf4bb2decc508",
"s2fieldsofstudy": [
"Medicine",
"Chemistry",
"Engineering"
],
"extfieldsofstudy": [
"Medicine",
"Chemistry"
]
} |
212676351 | pes2o/s2orc | v3-fos-license | Pin1 Binding to Phosphorylated PSD-95 Regulates the Number of Functional Excitatory Synapses
The post-synaptic density protein 95 (PSD-95) plays a central role in excitatory synapse development and synaptic plasticity. Phosphorylation of the N-terminus of PSD-95 at threonine 19 (T19) and serine 25 (S25) decreases PSD-95 stability at synapses; however, a molecular mechanism linking PSD-95 phosphorylation to altered synaptic stability is lacking. Here, we show that phosphorylation of T19/S25 recruits the phosphorylation-dependent peptidyl-prolyl cis–trans isomerase (Pin1) and reduces the palmitoylation of Cysteine 3 and Cysteine 5 in PSD-95. This reduction in PSD-95 palmitoylation accounts for the observed loss in the number of dendritic PSD-95 clusters, the increased AMPAR mobility, and the decreased number of functional excitatory synapses. We find the effects of Pin1 overexpression were all rescued by manipulations aimed at increasing the levels of PSD-95 palmitoylation. Therefore, Pin1 is a key signaling molecule that regulates the stability of excitatory synapses and may participate in the destabilization of PSD-95 following the induction of synaptic plasticity.
INTRODUCTION
The post-synaptic density (PSD) of excitatory synapses contains multiple scaffolding proteins, many of which belong to the membrane-associated guanylate kinase (MAGUK) family of scaffold proteins (Sheng and Hoogenraad, 2007). Of the MAGUKs, the post-synaptic density protein 95 (PSD-95) contributes between 300 and 400 copies to the PSD, making it one of the most abundant proteins at synapses (Chen et al., 2008). PSD-95 serves a diverse set of roles at excitatory synapses (Sheng and Hoogenraad, 2007).
Recently, PSD-95 has been implicated in the induction and expression of synaptic plasticity in pyramidal neurons (Béïque et al., 2006;Ehrlich et al., 2007;Carlisle et al., 2008;Xu et al., 2008;Sun and Turrigiano, 2011;Levy et al., 2015). For example, overexpression of PSD-95 in pyramidal CA1 neurons results in enhanced excitatory synaptic transmission, occlusion of pairing-induced long-term potentiation (LTP), and enhanced NMDAR-dependent long-term depression (LTD) (Stein et al., 2003;Ehrlich and Malinow, 2004). These findings suggest that PSD-95 is essential for bidirectional synaptic plasticity. For example, synapses with high amounts of PSD-95, i.e. during PSD-95 overexpression, prevents further accumulations of AMPARs following NMDAR-dependent LTP while at these synapses the induction of NMDAR-dependent LTD and removal of synaptic AMPARs is facilitated. On the other hand, removing or knocking down PSD-95 impairs the induction of NMDARdependent LTD (Migaud et al., 1998;Beique and Andrade, 2002;Ehrlich et al., 2007). However, the precise molecular mechanisms regulating PSD-95 stability at synapse are not fully understood.
These newly described findings highlight the loss of PSD-95 following NMDAR activation; but they don't relate to how constitutive phosphorylation of T19 and S25 regulates PSD-95 synaptic accumulation. Moreover, a great fraction of PSD-95 at the PSD is T19/S25 phosphorylated (Morabito et al., 2004), which highlights the importance of understanding the role of this molecular mechanism in the regulation of baseline excitatory synaptic transmission.
A potential regulator of PSD-95 synaptic stability during baseline conditions could be forged by the phosphorylationspecific peptidyl-prolyl cis-trans isomerase (Pin1). Pin1 is a small cytosolic and ubiquitously expressed peptidyl-prolyl isomerase, whose target recognition is independent of the increases in cytosolic Ca 2+ . Pin1 consists of two major domains: an N-terminal WW domain [containing two tryptophan (W) residues] and a C-terminal catalytically active peptidyl-prolyl isomerase (PPIase) domain (Yaffe et al., 1997;Lu et al., 2007). Via its N-terminus WW domain, Pin1 binds to substrates that are phosphorylated at serine/threonine-proline residues (Siepe and Jentsch, 2009;Moretto-Zita et al., 2010;Lonati et al., 2014). The enzymatic function of Pin1 is carried out via its C-terminal peptidyl-prolyl isomerase domain, which mediates the cis-trans peptidyl-prolyl isomerization of the phosphorylated serine/threonine-proline residues (Verdecia et al., 2000). In most targets, the cis-trans isomerization triggers a strong conformational change in the target protein and, in many cases, consequently restores biological function to its target (Lu et al., 2007).
This work tests the hypothesis that Pin1 binding via its WW domain to the phosphorylated T19/S25 in PSD-95 regulates PSD-95 accumulation at the PSD of hippocampal neurons. The association of Pin1 to these sites blocks palmitoylation of C3 and C5 in PSD-95. We find the reduction in PSD-95 palmitoylation correlates well with the decreased amounts of PSD-95 in postsynaptic dendrites, decreased number in post-synaptic spines, and reduced number of functional excitatory synapses. The remaining synapses remain functional with normal amounts of AMPARs and PSD-95 molecules. The decrease amounts of PSD-95 leads to a slight increase in the mobility of surface AMPARs. Lastly, the reduction in number of PSD-95 clusters is restored by manipulations that increased global palmitoylation. This supports the idea that the effects of Pin1 on synaptic PSD-95 clusters are due to a reduction in PSD-95 palmitoylation as opposed to the downregulation of some unknown protein. This data shows how phosphorylation of the N-terminal domain of PSD-95, from normal synaptic physiological processes, regulates the development and maintenance of functional excitatory synapses. These findings support the hypothesis that Pin1 is an important regulator of excitatory synapse function in the hippocampus.
Cloning and cDNA Plasmids
The plasmid encoding PSD-95:EGFP was a gift from S. Okabe (Tokyo University, Japan). The hPF11:EGF was used with permission from Dr. Masaki FUKATA. The triple T19A, S25A, and S35A (N3A-PSD-95) PSD-95:EGFP mutants was generated using site directed mutagenesis following the manufacturers recommendations (Agilent Technologies) and sequence verified. First the T19A and S25A double mutation was introduced using the following primers set: sense -GAAATACCGCTACCAAGATGAAGACGCGCCCCCTCTGG AACACGCGCCGGCCCACC TCCCCAACCAGGCCAATTC and antisense -GAATTGGCCTGGTTGGGGAGGTGGGC CGGCGCGTGTTCCAGAGGGGGCGCGTCTTCATCTTGGT AGCGGTATTTC.
Hippocampal Cultured Neurons
Preparation of cultured neurons was performed by plating neurons at a density of 100 to 200K per well in a 6 well plate. In brief, hippocampal neurons from E18 embryos of either sex were cultured on glass coverslips coated with Poly-lysine as in Borgdorff and Choquet (2002). Neurons were plated in Neurobasal supplemented with B27 and glutamine. The day after plating, neurons were treated with 1 µM Ara-C to stop glia and microglia proliferation. Feedings were done every 4 days using low cysteine containing media (Hogins et al., 2011). At day in vitro 8-10 neurons were transfected using Effectene or Lipofectamine 2000 following the manufacturer's recommendation. Between 1 and 2 µg of the respective cDNA was used per well. Experiments were performed on neurons between 11 and 20 DIV.
PC12 Stable Cell Lines
In brief, PC12 were cultured in NEM supplemented with 10% CS, 5% HS, and 1X PenStrep. Cells were eletroporated using a Lonza electroporator using the neuronal setting following manufacture's recommendation for pulsing and cDNA concentrations. Two days post-transfections cells were started at 1 µg/mL Puromycin which was enough to kill most cells. Surviving cells were left to grow until visually identified clones emerged. Individual clones were picked and transferred into 6 well plates to grow to confluency. Feedings were done every 4 days.
Whole Cell Electrophysiology
Individual coverslips were transferred one at a time to a submerged chamber mounted on a fixed-stage upright microscope. They were continuously perfused with oxygenated artificial CSF at 33 • C flowing (ACSF) at a rate of 2-3 ml/min containing (in mM) 115 NaCl, 3 KCl, 1 NaH 2 PO 4 , 25 NaHCO 3 , 1 MgCl 2 , 2 CaCl 2 , 1 sodium pyruvate, and 10 dextrose. Individual cells were identified at 400X magnification using infrared DIC optics and an infrared-sensitive camera. EGFP expressing cells were identified by fluorescence through an FITC filter set. Whole-cell somatic recordings were obtained with pulled glass micropipettes (somatic 2.5-5 M ). Pipettes were coated with paraffin to reduce the pipette capacitance. Pipettes were filled with intracellular solution containing (in mM) 115 K-gluconate, 10 KCl, 1 HEPES, 10 Na 2 phosphocreatine, 4 MgATP, 0.3 NaGTP, and 0.2 EGTA adjusted with KOH to pH 7.3-7.35 and osmolarity was adjusted to 290 mOsm with K-Gluconate. Miniature excitatory post-synaptic potentials were isolated by adding 0.2 µM tetrodotoxin (TTX) and 100 µM bicuculline to the ACSF. Cells were voltage clamped at −70 mV and recordings were accepted if input resistances were > 100 M , holding currents less than −100 pA and series resistances were < 20 M . No adjustment of offset potential was performed. Cover slips were changed after 30 min in the recording chamber. Electrophysiological recordings were made using a Multiclamp 200B amplifier (Axon Instruments) or a HEKA EPC 10 amplifier (HEKA). Signals were filtered at 2 kHz and digitized at 10 kHz. All data analyses were performed using the MiniAnalysis software for automatic detection of events. Events were visually inspected for correct selection. Between 50 and 200 events were then used to extract peak measurements and events times. The amplitude and single event decay time constants were measured from each mEPSC. Average values are reported for each cell.
PSD-95 Staining
In an alternative immunostaining experiments, 3-5 days posttransfection cells were fixed in 4% Paraformaldehyde at room temperature for 20 min. Cells were then rinsed three times with 1X PBS, then 5 min in 50 mM NH4Cl, and three more quick rinses in 1X PBS. Cells were permeabilized in 0.1% Tx-100 PBS for 5 min followed by three quick PBS rinses. Cells were incubated in freshly made 0.5% Sodium Borohydrate in PBS for 5 min. Cells were quickly rinse in PBS and incubated in 2 mL of 1% BSA in PBS for 45 min followed by incubation in 100 µL of anti-Pin1 (1:500) and anti-PSD-95 (1:500) for 1 h or overnight at 4 • C ( Table 1). Cells were rinsed 3X in PBS and the Alexa fluor 647 anti-mouse (1:500), and Alexa fluor 488 anti-rabbit applied at a dilution of 1:500 for 1 h in 1% BSA in PBS. Cells were rinsed 5X in PBS, post fixed in 4% PFA and mounted in slow fade mounting media (Life technologies).
Spinning Disk Confocal
Cells were imaged using 3-I Marianas live-cell dual-camera Yokogawa CSU-X spinning disk confocal. AxioObserver platform with DualCam and two Evolve EM-CCD cameras, CFP/YFP and R/G cubes using 100X/1.45 oil objective. The solid state 488, 561, and 640 lasers were used with fiber switcher to excite the corresponding fluorophores as needed. The objective was mounted onto a piezo MadCityLabs piezo Z insert which was used to collect Z-stacks or using either a DMI6000 Leica microscope (Leica Microsystems, Wetzlar, Germany) equipped with a confocal Scanner Unit CSU-X1 (Yokogawa Electric Corporation, Tokyo, Japan) using a 100X NA 1.4 oil objective (objective specs) and a QuantEM:512SC (Photometrics, Tucson, AZ, United States) or a Zeiss Axiovert 200M equipped with a confocal Scanner Unit CSU-X1 (Yokogawa Electric Corporation, Tokyo, Japan) using a EC Plan-Neofluar 100X/1.3NA Oil objective and a Photometric Cascade II was used to collect the fluorescence intensities. We used the 473, 532, 561, and 638 nm to excited the corresponding fluorophores as needed. The objective was mounted onto a piezo P721.LLQ [Physik Instrumente (PI), Karlsruhe, Germany] which was used to collect Z stacks.
Single Particle Tracking
The FIONA experiments were performed with a Nikon Ti Eclipse microscope with a Nikon APO 100 X objective (N.A. 1.49). The microscope stabilizes the sample in z-axis with the Perfect Focus System. An Agilent laser system MLC400B with 4 fiber-coupled lasers (405, 488, 561, and 640 nm) was used for illumination. Elements software from Nikon was used for data acquisition. A back illuminated EMCCD (Andor DU897) was used for recording. For 3-D imaging, a cylindrical lens (CVI Melles Griot, SCX-25.4-5000.0-C-425-675) of 10 m focal length was inserted below the back aperture of the objective. A motorized stage from ASI with a Piezo top plate (ASI PZ-2000FT) was used for x-y-z position control. A quad-band dichroic (Chroma, ZT405-488-561-640RPC) was used and bandpass emission filter 525/50, 600/50, 710/40, 641/75 was used for fluorescence imaging. Primary hippocampal cultures, labeling and single particle tracking and analysis experiments were performed as previous described in great detail. In brief, on 12-13 days in vitro (DIV), neurons were co-transfected with control, Pin1 O.E. or KD plasmids, GluA2-AP (1 µg/coverslip), and BirA-ER (1 µg/coverslip) by using Lipofectamine 2000 transfection reagent as in Lee et al. (2017). At 24 ∼ 72 h after transfection, the coverslips were transferred to warm imaging buffer (HBSS supplemented with 10 mM Hepes, 1 mM MgCl 2 , 1.2 mM CaCl 2 , and 2 mM D-glucose) for 5 min incubation and mounted onto an imaging dish (Warner RC-40LP). Neurons were incubated in imaging buffer containing 60 pM Atto647N Stretptavidin (supplemented with 30 pM biotin to help prevent crosslinking) and casein (∼40 times dilution; stock solution purchased from Vector labs, SP-5020) for 5 min at 30 • C and washed with 5 ml of imaging buffer. Finally, 1 mL of Hibernate E (Brain Bits, LLC) was added to the imaging dish that was subsequently mounted on the microscope.
After focusing the sample in bright field, the Perfect Focus System was activated to minimize the sample drift in z direction. The samples were then scanned in the GFP channel (488 excitation, 525/50 emission) to locate transfected cells. A fluorescent image of the cells was taken for reference. To track the SA labeled receptors, 640 nm laser was used for excitation in the hi-low-fluorescence mode with an appropriate band-pass filter for collecting the fluorescence.
For the tracking data, centroids of the all the SAs were localized in all the frames. A Matlab code was used to recover the trajectories of the SAs. In brief, the code finds locations of SAs in time t, and searches for nearby SA in time t + 1 as the next point on the trajectory. In the 3-D single particle tracking experiment, the maximum displacement of a SA in one time step is set to be 1 µm. The diffusion coefficients from the trajectories were calculated in Matlab by fitting the first 4 points of mean-square-displacement curve.
Optical miniNMDARs
SlideBook software was used to control acquisition of the spinning disk hardware. Cover slips were individually mounted on the imaging chamber and provided with 2 mL of solution containing (in mM) 105 NaCl, 3 KCl, 1 NaH 2 PO 4 , 10 HEPES, 25 NaHCO 3 , 0 MgCl 2 , 2 CaCl 2 ,1 sodium pyruvate, and 10 dextrose. AMPAR-mediated transmission was blocked with 10 µM CNQX and action potentials were blocked with 1 µM TTX. pH was regulated via a OKO full enclosure incubator providing moistened CO 2 . Individual cells expressing GCaMP6S and the shRNA (expressing turboRFP) were identified by fluorescence through an FITC filter set using a 40X/1.3 oil objective. Imaging was performed using the 100X/1.45 oil objective. Cells remained healthy for hours in this system as evidence of constant activity and a constant low level of GCaMP6S fluorescence. The time series collected for 3 min with an integration time of 200 ms per frame. Cells were excited with laser intensity of 0.74 mW. Single Ca 2+ spine events were isolated using region measurement tool in FIJI and exported to MiniAnalysis for peak detection using pClamp10.
Image Analysis
Cluster areas were determined from the thresholded EGFP signal by adjusting the filter setting on the thresholding function in FIJI to include exclusively dendritic spines. All areas where included in the measurements and analyzed regions were saved for post hoc verification. Threshold value was kept constant across conditions and was adjusted on a per week basis to accommodate good cluster separation in control cells. To control for week to week variability experiments are normalized on a per week basis and parameters kept constant across conditions. Analyze particles option in FIJI was used to extract features from the image.
Experimental Design and Statistical Analysis
At least two coverslips/condition were used on each data set and a minimum of two coverslips per week. Each experiment was repeated 2-3 weeks. Each coverslip was randomly assigned the group before transfection. Data collection was interleaved and controlled for time and order effects. Coverslips which looked in poor health after transfection were discarded from analysis. Samples from all group were acquired on a weekly basis to reduce variability, otherwise no data was included in final analysis. We tested for outliers on a weekly basis and they were eliminated after testing all groups using Prism online calculator at a significance level of p < 0.05. Normality testing was performed on every group using D'Agostino and Pearson omnibus normality test. Between groups statistical significance was calculated accordingly for each distribution and experiment design. Data was normalized on a weekly basis to compensate for week to week variability. Numerical averages are presented as mean ± SEM or as box plots Statistical analyses were created using GraphPad Prism 5.0. GraphPad Prism 5.0 reports statistics as and quote: "For each pair of columns, Prism reports the p-value as > 0.05, < 0.05, < 0.01, or < 0.001. The calculation of the p-value takes into account the number of comparisons you are making. If the null hypothesis is true (all data are sampled from populations with identical distributions, so all differences between groups are due to random sampling), then there is a 5% chance that at least one of the posttests will have p < 0.05. The 5% chance does not apply to each comparison but rather to the entire family of comparisons." Exact p-values are reported when provided.
Pin1 Regulates PSD-95 N-Terminus Palmitoylation
We have shown that Pin1 can interact with the phosphorylated N-terminus domain of PSD-95 (in review). Furthermore, Ca 2+ /CaM association with these same residues in the N-terminus domain of PSD-95 blocks re-palmitoylation of cysteine 3 and 5 in PSD-95 (Chowdhury et al., 2018). Because palmitoylation of PSD-95 at C3 and C5 is necessary to stabilize PSD-95 within the post-synaptic densities (El-Husseini et al., 2002;Fukata et al., 2013;Park et al., 2013;Zhang et al., 2014) and Pin1 binds T19 and S25 in PSD-95 (under review), we tested if Pin1 can also regulate the levels of PSD-95 palmitoylation. PSD-95 palmitoylation was measured using an imaging approach utilizing a genetically encoded GFP fusion intrabody (hPF11) that specifically recognizes PSD-95 when it is palmitoylated (Fukata et al., 2013). Cells expressing only hPF11:EGFP don't show clustering of the fluorescence and the fluorescence remains cytosolic ( Figure 1A). Moreover, in heterologous cells expressing wt PSD-95, hPF11 recognizes palmitoylated PSD-95 localized to intracellular clusters that increase in size when global palmitoylation is increased (Jeyifous et al., 2016). Thus, hPF11 faithfully track decreases and increases in PSD-95 palmitoylation. To quantify the effects of Pin1 on PSD-95 palmitoylation, HEK cells were transfected with (1) either wt PSD-95 or N3A, the N-terminus mutant of PSD-95, (2) the hPF11 plasmid, and (3) either Pin1 or an empty vector (as control). The effects of Pin1 overexpression was quantified as the percentage of cells containing intracellular clusters (white arrows, Figure 1B). Also we tested the effects of increasing PSD-95 palmitoylation by treating cells with Palmostatin B (Palm B). We use a 6 h pretreatment with Palm B, an inhibitor of acyl protein thioesterase 1 that blocks depalmitoylation, which has been shown to increase the number of cells with visible hPF11 intracellular clusters over DMSO treated cells (Jeyifous et al., 2016). A two-way analysis of variance (ANOVA) design was used to test the interaction between Pin1-overexpressing cells and the increase in PSD-95 palmitoylation (Palm B). In DMSO-treated control there was a significantly higher number of hPF11 intracellular clusters than cells expressing Pin1 (Figure 1B). Similarly, a series of experiments aimed at confirming the relationships between palmitoylation measured using the hPF11 intrabody and the most stablished click chemistry assay (biochemically) show that with both methods Pin1 reduced the total level of PSD-95 palmitoylation down to 30% of control levels (e-mail communication with Antinone S. and Green N.W. Fri, Apr 24, 2015, 5:08 PM). Palm-B increased the number of PF11 clusters [ Figure 1B, F(1,76) = 203.99, p < 0.0001].
The interaction between Pin1 overexpression and Palm B treatment was also significant, suggesting that Pin1 binding dominates over PSD-95 palmitoylation because Palm B failed to increase the number of hPF11 back to control levels, although a statistical significant increase in PSD-95 palmitoylation was observed. A two-way ANOVA experiment design was also used to test the interaction between Pin1 binding to the N-terminus mutant (N3A) of PSD-95 and its palmitoylation. As observed in DMSO-treated control cells, Pin1 overexpression significantly decreased the number of PF11 clusters in the N3A expressing cells (Figure 1C), suggesting that Pin1 regulates global protein palmitoylation as well as palmitoylation of PSD-95 when it binds to the N-terminus domain. The roles of Pin1 in regulating global palmitoylation have not been studied to this time. Palm B treatment significantly increased the number of PF11 clusters in cell expressing the N3A mutant (Figure 1C), thus making it insensitive to the overexpression of Pin1. Taken together, this data suggests that Pin1 binding to the N-terminus domain decreases the rates of PSD-95 palmitoylation under basal conditions and under conditions where the action of the acyl protein thioesterase 1 is blocked with Palm B pretreatment.
To quantify the effects of Pin1 on PSD-95 palmitoylation in neurons, the hPF11 experiment was repeated in cultured neurons transfected with hPF11 and wt Pin1 or an empty vector (as control). The effects of Pin1 on hPF11 immunodistribution were quantified as a change in the ratio of the hPF11:EGFP signal coming from consecutive spines to that of the adjacent dendrite (Figures 1D,E). The level of hPF11 overexpression was fairly high, and it appears as if not all the hPF11 molecules were bound to PSD-95. Therefore some of the hPF11 remained cytosolic, but even under those circumstances, we observe enrichment in dendritic spines. Similar to the effects observed for the palmitoylation-deficient mutant of PSD-95, Pin1 overexpression significantly reduced the enrichment of hPF11 in dendritic spines ( Figure 1D). In Pin1 overexpressing neurons, hPF11 showed equal enrichment between dendrites and post-synaptic spines ( Figure 1E). Furthermore, fewer than twenty percent of spines in neurons overexpressing Pin1 showed some hPF11 enrichment in dendritic spines, while about 90 percent of spines in control neurons showed enrichment of hPF11 to post-synaptic spines ( Figure 1F). Knocking down Pin1 didn't have a strong effect on the accumulation of PF11 in dendritic spines, but a slight trend toward larger PF11 clusters was observed (data not shown).
The loss of PSD-95 palmitoylation could lead to the loss of PSD-95 from dendritic spines and this loss could lead to a reduction in post-synaptic spine maturation (El-Husseini et al., 2000. To examine if Pin1 regulates the amounts of PSD-95 within post-synaptic spines and the number of post-synaptic PSD-95 clusters, cultured hippocampal neurons were transfected with pIRES2:EGFP or Pin1:IRES2:EGFP (expressing EGFP out of an Internal Ribosomal Entry Site, IRES) and immunostained for endogenous PSD-95 (Figure 2A). EGFP is used as a cytosolic marker. Strongly transfected neurons were not included in analysis. Cells overexpressing Pin1 showed a reduced number of PSD-95-positive clusters per 30 µm of dendritic section ( Figure 2B) but did not altered the area or the number of PSD-95 clusters on the surviving spines (Figures 2C,D, respectively). This data suggests that Pin1 regulates PSD-95 palmitoylation, and this association limits the amount of PSD-95 that can become part of a post-synaptic cluster and as a result fewer clusters are formed. Once the PSD-95 clusters are formed, however, they are indistinguishable from the PSD-95 clusters present in control cells.
Pin1 Regulates Post-synaptic Spines
The decrease in PSD-95 palmitoylation and number of PSD-95 clusters suggests that Pin1 could regulate dendritic spines because PSD-95 palmitoylation strongly regulates dendritic spine formation (El-Husseini et al., 2000. Furthermore, conflicting results are observed as to the role of Pin1 in the regulation of post-synaptic spines between the global KO (Antonelli et al., 2016) and the conditional KO (Stallings et al., 2018). For instance, the global Pin1 knockout mice show an increase in spine density (Antonelli et al., 2016) while a recent paper shows that deleting Pin1 from the adult hippocampus decreases it Stallings et al. (2018). These discrepancies in results, led us to reevaluate the role for post-synaptic Pin1 in dendritic spine morphology.
To calrify this controversy, the features of dendritic spines were thus analyzed in neurons overexpressing EGFP, Pin1, or Pin1 C113S (Figure 2E). The C113S mutation on the isomerase domain of Pin1 devoid the protein from its catalytic activity and this mutant form of Pin1 is not transported to the nucleus (Lufei and Cao, 2009). These two features allow us to dissociate the role of cytosolic Pin1 binding from isomerization as well as exclude any potential nuclear effects of Pin1. The number of post-synaptic spines per micron of dendrite was strongly decreased in neurons overexpressing Pin1 or the isomerase mutant C113S, suggesting that binding and not isomerization dominates this effect ( Figure 2F). The width of post-synaptic spine heads was significantly reduced ( Figure 2G). To analyze if Pin1 overexpression alters the diversity of post-synaptic spines, a factorial ANOVA was conducted to compare the main effects of overexpressed protein and the interaction effect between overexpression and the fraction of spines (spine type). The overexpression included three levels of analysis (EGFP, Pin1, and C113S) and the spine type also included three levels (mushroom, stubby, and thin). A 2-way ANOVA revealed that the overexpression was not statistically significant across the groups but the interaction (between the overexpressed protein and the type of spine) was significant. A Bonferroni post-test revealed that the diversity of spines is lost in Pin1 and C113S overexpressing neurons ( Figure 2H). Similar to the results in the global Pin1 KO, knocking down Pin1 with an shRNA strategy increased the number of postsynaptic spines per micron of dendrite (Figures 2H-K). The Pin1 shRNA knocked down Pin1 down to 30% of control levels ( Figure 2I). As for the overexpression experiment, a slight change toward post-synaptic spines with larger headwidth was observed (Figures 2J,L), and most evident in the cumulative distribution curves (data not shown). To analyze if Pin1 knockdown altered the diversity of post-synaptic spines, a factorial ANOVA was conducted to compare the main effects of overexpressed protein and the interaction effect between overexpression and the fraction of spines (spine type). The overexpression included three levels [Control, Kd (1), and Kd (3)] and the spine type included three levels (mushroom, stubby, and thin). The overexpression was not statistically significant and no interaction was observed. Only a trend toward more mushroom spines was observed in cells expressing the knockdown plasmids ( Figure 2M). Taken together, the overexpression and knockdown data indicates that post-synaptic Pin1 is a strong negative regulator of the number of postsynaptic spines.
Post-synaptic Pin1 Regulates the Number of Functional Excitatory Synapses
The reduction in the levels of PSD-95 palmitoylation and the loss of post-synaptic spines and PSD-95 clusters suggested that Pin1 could regulate many aspects of excitatory synaptic transmission. To evaluate if Pin1 affects the nano-organization of PSD-95 in post-synaptic spines, we performed superresolution microscopy of endogenous PSD-95 in cultured neurons transfected with EGFP, Pin1 or the C113S mutant ( Figure 3A). Previous studies have shown that AMPARs and PSD-95 lie in nanodomains, small clusters of about 70 nm in diameter (Nair et al., 2013;Cai et al., 2014;Constals et al., 2015;Li et al., 2016;Lee et al., 2017). It is hypothesized that the nanodomain organization of synaptic proteins plays a key role in excitatory synaptic transmission. We found that Pin1 overexpression did not affect the size of the PSD-95 nanodomains (Figure 3B) or the number of post-synaptic PSD-95 nanodomains per spine (Figures 3C,D). We also measured the size and number of AMPAR nanodomains because it has been hypothesized that these are the minimal unit of AMPARmediated transmission (Nair et al., 2013). We performed FIONA on neurons expressing EGFP or Pin1 and EGFP ( Figure 3E) as in Yildiz et al. (2003), Cai et al. (2014), Lee et al. (2017). Pin1 overexpression did not alter the size of GluA2 nanodomains, the area, or the number of nanodomains (Figures 3F-H).
Given that Pin1 reduced the amounts of PSD-95 and on the remaining spines the amounts of PSD-95 and AMPARs remained unaltered, we reason that maybe Pin1 changed the mobility of AMPARs. The mobility of AMPARs is regulated by the induction of LTP and by the amounts of PSD-95 (Makino and Malinow, 2009;Czöndör et al., 2012), thus we reason that Pin1 may change the mobility of surface expressed AMPARs. To evaluate if Pin1 could affect the dynamics of the AMPAR we performed single particle tracking experiments on neurons transfected with (1) a biotinylated form of GluA2 subunit, (2) the biotin ligase BirA expressing plasmid and (3) either pIRES2:EGFP or Pin1:IRES2:EGFP ( Figure 4A). We observed that Pin1 overexpression increased the mobility of slowly moving particles with instantaneous diffusion coefficient (D inst ) lower than 0.007 µm 2 /s (Figures 4B-D, Mann-Whitney test, p < 0.0001) while knocking down Pin1 had the converse effect of slowing down the mobility of AMPARs (Figures 4E,F, Mann-Whitney test, p < 0.0001). Although, EGFP and mRFP turbo differ somewhat in the mean D inst , we tried to control to the best of our capacities, on a weekly basis, all factors known to regulate AMPAR mobility (i.e. developmental age, effects of transfection of culture viability, labeling density, etc.). Thus, The decrease in post-synaptic spines, PSD-95 amounts, and the change in AMPAR mobility coupled with the normal size of GluA2 nanodomains in spines and dendrites suggests that Pin1 could regulate the number of functional excitatory synapses but not the strength of the remaining ones. To test this hypothesis, AMPA-and NMDA-receptor-mediated transmission was measured. The AMPAR-mediated transmission was recorded by isolating miniature EPSCs (mEPSCs) from cultured neurons transfected with EGFP or wt Pin1. As expected from the loss of post-synaptic spines, Pin1 overexpression strongly decreased the frequency of mEPSCs (Figures 5A,B) but did not alter the amplitude of the mEPSCs (Figures 5C,D).
To confirm the effects of Pin1 on NMDAR currents as in Antonelli et al. (2016), NMDAR-mediated transmission was evaluated on by isolating single spine miniature NMDAR (mNMDARs) events by using a modified ACSF containing 0 mM Mg 2+ , 1 µM TTX, and 10 µM DNQX. Ca 2+ entry through the NMDAR was imaged using the high-affinity Ca 2+ sensor GCaMP6S in a spinning disk microscopy (see Materials and Methods" and Supplementary Movie I). Ca 2+ signals were detected in isolated dendritic spines (regions 1, 2, and 3 Figure 6A) and plotted as in Reese and Kavalali (2016), were characterized by a rapid increase in intracellular Ca 2+ (Figure 6C and single events Figure 6D), and were blocked by APV and MK801 (Figures 6E-G). To test if Pin1 regulates these single spine mNMDARs in neurons, two of the three shRNA constructs were transfected into neurons. Knockdown Pin1 doubled the amplitude of mNMDARs (Figures 7A,B) as well as doubled the frequency of single spine mNMDARs ( Figure 7C). No measurements on dendritic Ca 2+ release were done. The increase in size and frequency of the mNMDARs supports the idea that post-synaptic Pin1 negatively regulates the amount of NMDARs either via binding to the N-terminus domain or plausibly via the hinge domain as implied by Antonelli et al. (2016). Furthermore, this data supports the hypothesis that Pin1 regulates the number of functional excitatory synapses in hippocampal neurons via its association with PSD-95.
Overexpressing PSD-95 Overcomes the Effects of Overexpressing Pin1
The previous results support the hypothesis that Pin1 affects dendritic spines by interfering with the palmitoylation of PSD-95 and the development or maturation of post-synaptic spines. If the decreased number of post-synaptic spines and PSD-95 clusters are due to Pin1's interference with PSD-95 palmitoylation, overexpression of PSD-95 should rescue the morphological effects as more of the PSD-95, from the total pool, will be palmitoylated PSD-95 (Figure 8A scheme). To test this hypothesis, PSD-95 was overexpressed with EGFP, Pin1 or the isomerization mutant C113S. Cultured hippocampal neurons were immunostained for PSD-95 ( Figure 8B). PSD-95 overexpression restored the density of PSD-95 per dendritic area in cells overexpressing Pin1 ( Figure 8C) previously observed in Figure 8B. In this experiment, we were unable to accurately quantify the effects of Pin1 overexpression on endogenous PSD-95 due to the strong labeling intensity of the cells overexpressing PSD-95. PSD-95 overexpression also restored the area of PSD-95 clusters well above the level of non-transfected neurons (which are dimly visible in the image); however, Pin1 binding still exerted a limiting effect on the size of PSD-95 puncta ( Figure 8D). Similar to wt PSD-95, overexpression of the N3A mutant of PSD-95 also restored the density of PSD-95 per dendritic area in cells overexpressing Pin1 ( Figure 8E) and the area of PSD-95 clusters ( Figure 8F). Pin1 overexpression still exerted a limiting effect on the size of N3A PSD-95 puncta (Figure 8G), suggesting that these effects are independent of binding to the N-terminus domain of PSD-95. These findings support the idea that Pin1 binding to the phosphorylated N-terminus domain of PSD-95 regulates how much PSD-95 is added onto nascent or preexisting PSD clusters. This regulation contributes to a steady-state process of synaptic weakening that is downstream of proline-directed kinases known to phosphorylate this region.
DISCUSSION
It is known that the PSD is enriched with proteins containing phosphorylated serine/threonine-proline (S/T-P) motifs clustered within a short region of the proteins (Coba et al., 2009). However, only a handful of papers show that some of these are Pin1 targets (Park et al., 2013;Antonelli et al., 2016). This paper tackles this issue by showing a role for Pin1 in the regulation of PSD-95. Specifically, we show how Pin1 binding to the N-terminus PEST domain of PSD-95 regulates the number of functional excitatory synapses. At the biochemical level, Pin1 binds phosphorylated T19 and S25 within the N-terminus domain of PSD-95, and this association strongly decreases the amount of palmitoylated PSD-95. We measured decreases in PSD-95 palmitoylation by Pin1 using two methods, a data set using the click chemistry palmitoylation assay (data not shown) and the hPF11:EGFP intrabody. And with both methods we observed a 70% reduction in total PSD-95 palmitoylation by Pin1. Moreover, the hPF11 results add a new layer to this picture in which it shows the spatial location of palmitoylated PSD-95 in cells. In HEK 293T cells we observed palmitoylated PSD-95 in organelles, which resemble the Golgi apparatus, and in neurons the hPF11 was enriched within dendritic spines with an enrichment ratio of 1.5. The intracellular distribution in HEK 293T cells resembles the signal from intracellular organelles when DHHC2 or DHHC15 are co-expressed with PSD-95 (Fukata et al., 2004;Jeyifous et al., 2016).
The decrease in PSD-95 palmitoylation limits the capacity of PSD-95 into PSDs, ultimately reducing the number of functional synapses. Although Pin1 overexpression produces a strong loss of the number of PSD-95 clusters along the dendritic tree, some clusters and synapses have normal amounts of PSD-95 and post-synaptic AMPARs. We think that the EGFP 1.00 ± 0.04, n = 42; Pin1 0.14 ± 0.05, n = 66; C113S 1.32 ± 0.06, n = 51, One-way ANOVA F(2,156) = 7.84, p = 0.0006. (G) N3A overexpression restores the area of PSD-95 clusters well above the level of non-transfected. EGFP 1.00 ± 0.03, n = 42; Pin1 0.86 ± 0.03, n = 66; C113S 1.02 ± 0.04, n = 51 55 Kruskal-Wallis chi-square = 16.21, df = 2, p < 0.0003. remaining normal synapses may emerge due to multiple scenarios. One scenario is that there is not enough Pin1 to bind all phosphorylated PSD-95, which is very likely given that we overexpressed Pin1 by just a little (∼2-fold). The second scenario is that not all PSD-95 is phosphorylated at T19 and S25 as shown by Morabito et al. (2004). The fact that some synapses lose PSD-95 while others do not suggests that the Pin1-overexpressing neurons experience a normalization mechanism to compensate for the loss of PSD-95 palmitoylation over time, perhaps in the manner described by the findings of Levy et al. (2015) where preexisting palmitoylated PSD-95 is redistributed to its remaining functional synapses. This phenomena is also seen when PSD-95 is knocked down (Béïque et al., 2006;Ehrlich et al., 2007;Sun and Turrigiano, 2011;Levy et al., 2015).
The reduced number of dendritic clusters of PSD-95 was rescued by two manipulations aimed at increasing the total levels of palmitoylated PSD-95 (i.e. Palm B pretreatments and PSD-95 overexpression). In HEK cells, the effects of Palm B were dependent on whether PSD-95 could be phosphorylated, although Pin1 appeared to affect also global palmitoylation. Pin1 strongly limited the Palm B-mediated increase in intracellular clusters in cells expressing wt PSD-95 but not in cells overexpressing the N3A mutant ( Figure 1C). These results suggest that Pin1 binds this region to prevent PSD-95 palmitoylation. In neurons, the effects of PSD-95 overexpression were less sensitive to the phosphorylation state of PSD-95 because PSD-95 overexpression of either wt or the N3A mutant.
The Pin1 global knockout and our results show an increase in the size and number of post-synaptic spines. We find that overexpression of Pin1 triggered a loss of functional synapses and a reduction in the diversity of the types of post-synaptic spines. Similarly, Stallings et al., 2018 in found a decrease in spine number when the TAT-WW domain protein was applied to neurons. These results are in line with our findings as we observe spine loss when we overexpressed dominant negative forms of Pin1 (including the WW domain, data not shown). This bidirectional regulation (down or upregulation) of the number of spines by Pin1 suggests that Pin1 is a negative regulator of spine development in hippocampal neurons.
Another interesting aspect of our data is that Pin1 binding, and not isomerization, is sufficient for the effects on PSD-95 palmitoylation. Although Pin1 can isomerize PSD-95, no physiological role was observed for the C-terminal isomerase domain in Pin1. This conclusion may be incomplete because the WW domain of Pin1, over the second time scale, can cause shifts in cis-trans equilibrium that are indistinguishable from the conformational changes induced by the isomerase domain (Eichner et al., 2016). This consideration is particularly applicable to this data because the experiments in this work were performed several days after transfecting neurons. These long periods provide enough time for the WW domain to mediate a "quasi isomerization" of the phosphorylated targets that would look indistinguishable from the effects of the isomerase domain. Furthermore, this "quasi isomerization" could be increasingly complicated by the slower turnover rate of PSD-95. Whether or not the isomerase domain is needed for the decrease in PSD-95 palmitoylation is still an open matter for discussion.
Of physiological relevance, Pin1 is strongly upregulated in the striatum and substantia nigra of mice injected with 30mg/kg of MPTP (Ghosh et al., 2013). MPTP administration triggered a six-fold increase in total Pin1 protein levels 12 to 24 h post-stimulation, suggesting that Pin1 protein levels can quickly respond to alterations in neuronal physiology. In our experimental conditions, the observed physiological effects were obtained when Pin1 was overexpressed at levels no more than twice the levels of control cells, suggesting that this mechanism may play a role in striatal physiology as well. In conclusion, this data implicates Pin1 in the regulation of PSD-95 at synapses in normal physiology, synaptic plasticity, or pathological conditions.
DATA AVAILABILITY STATEMENT
The datasets generated for this study are available on request to the corresponding author.
ETHICS STATEMENT
All procedures were in accordance with guidelines of and approved by the Institutional Animal Care and Use Committee at The University of Chicago.
AUTHOR CONTRIBUTIONS
JD conceived the initial idea. JD and DN performed experiments, generated graphs and analyzed data. All authors contributed to all parts of writing and editing the manuscript.
FUNDING
AMPAzeta IIF Marie Curie Fellowship and NIH Grant NS103159 to JD, NIH NS100019 and NSF PHY-1430124 to PS, and T32 NS0072419-28 to Eve Marder. | 2020-03-13T13:13:57.211Z | 2020-03-13T00:00:00.000 | {
"year": 2020,
"sha1": "b9a77f67d80f40a4be406aba83ffe01a39eb31c0",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fnmol.2020.00010/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "b9a77f67d80f40a4be406aba83ffe01a39eb31c0",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
264074506 | pes2o/s2orc | v3-fos-license | Intranasal Delivery of Oncolytic Adenovirus XVir-N-31 via Optimized Shuttle Cells Significantly Extends Survival of Glioblastoma-Bearing Mice
Simple Summary Glioblastomas (GBMs) are difficult-to-treat, deadly brain tumors and may infiltrate the whole brain. Cancer-killing (oncolytic) viruses have been used to treat GBMs. However, oncolytic virotherapy needs surgery, as the viruses have to be injected directly into the tumor. Human hepatic stellate cells were loaded with the oncolytic virus XVir-N-31 and applied into the noses of GBM-bearing mice via a non-surgical method. The virus-loaded cells rapidly migrated towards the brain tumor and invaded GBM cells located far away from the original tumor. In the brain, these shuttle cells released XVir-N-31, which then infected and killed the cancer cells. In consequence, the mice that received XVir-N-31-loaded shuttle cells via the nose showed delayed tumor growth and better survival. In addition, when the intranasal delivery was combined with an intratumoral injection of XVir-N-31, 25% of the mice did not develop any tumors and survived a long time. Abstract A glioblastoma (GBM) is an aggressive and lethal primary brain tumor with restricted treatment options and a dismal prognosis. Oncolytic virotherapy (OVT) has developed as a promising approach for GBM treatment. However, reaching invasive GBM cells may be hindered by tumor-surrounding, non-neoplastic cells when the oncolytic virus (OV) is applied intratumorally. Using two xenograft GBM mouse models and immunofluorescence analyses, we investigated the intranasal delivery of the oncolytic adenovirus (OAV) XVir-N-31 via virus-loaded, optimized shuttle cells. Intranasal administration (INA) was selected due to its non-invasive nature and the potential to bypass the blood–brain barrier (BBB). Our findings demonstrate that the INA of XVir-N-31-loaded shuttle cells successfully delivered OAVs to the core tumor and invasive GBM cells, significantly prolonged the survival of the GBM-bearing mice, induced immunogenic cell death and finally reduced the tumor burden, all this highlighting the therapeutic potential of this innovative approach. Overall, this study provides compelling evidence for the effectiveness of the INA of XVir-N-31 via shuttle cells as a promising therapeutic strategy for GBM. The non-invasive nature of the INA of OV-loaded shuttle cells holds great promise for future clinical translation. However, further research is required to assess the efficacy of this approach to ultimately progress in human clinical trials.
Introduction
Glioblastomas (GBMs) are the most frequent malignant primary brain tumors in adults.The average survival rate of patients diagnosed with this tumor is less than 20 months, albeit with updated therapy options [1].The infiltrative, malignant progression of this tumor and its resistance to chemotherapy and irradiation impact its devastating prognosis.Furthermore, a lack of immune surveillance by means of GBM cells enabling an immunosuppressive microenvironment is a key characteristic of GBM [2].Additionally, a major hurdle for developing efficient anti-GBM therapies is the blood-brain barrier (BBB), which restricts the systemic delivery of many drugs to the tumor.Thus, the development of novel approaches aiming to efficiently deliver new or established therapeutics specifically to the malignant tissue are urgently needed.
A promising approach to treat GBM is oncolytic virotherapy (OVT) [3].Either wildtype or genetically modified oncolytic viruses (OVs) are capable of replicating in neoplastic cells, ultimately spreading within the tumor and destroying it.Simultaneously, OVs leave non-neoplastic cells unharmed (for reviews, see [4,5]).Despite favorable outcomes, OVT has some limitations.Primarily, for the treatment of brain tumors, OVs must be applied intratumorally (IT) because patients often already have developed antibodies against the OVs from prior exposure that will rapidly inactivate the virus if applied intravenously [6,7].Additionally, the entry of OVs into the brain is blocked by the BBB, which protects the brain against pathogens [6].Moreover, OVs developed from viruses of the same origin or subtype will be rendered inactive by the patient's immune system if applied several times [7].Another major hurdle for OVT in GBM is the invasive and malignant fluid phenotype of the tumor [8].Consequently, OVs applied intratumorally will not be able to reach the infiltrative GBM cells separated by non-neoplastic cells from the tumor core where the OV was administered.Therefore, it is essential to optimize OVT to try to capitalize on its full potential to treat GBM.
Oncolytic adenoviruses (OAVs), for instance, can prompt immunogenic cell death (ICD), attested to by the release of danger-associated molecular pattern (DAMP) proteins, like high-mobility group B1 (HMGB1) or heat shock proteins (HSPs) [9,10].Subsequently, the anti-tumor immune response and anti-tumoral effects are substantially induced [11].Furthermore, pathogen-associated molecular pattern (PAMP) molecules, like nucleic acids or viral proteins, are released by OV-infected cells, eventually stimulating the production of pro-inflammatory cytokines, such as interferons [12].Finally, this draws dendritic cells (DCs) and advances the uptake and presentation of tumor cell debris alongside tumorspecific neo-antigens by them, ultimately priming anti-tumoral T-cell responses [13].
The intranasal administration (INA) of cells to the brain, since its groundwork discovery, has effectively proven to be non-invasive, targeted and efficacious [14,15].INA allows a wide variety of therapeutic agents to be transported to the central nervous system (CNS), circumventing the BBB hurdle.For instance, viruses, plasmids, liposomes, cells, nanoparticles and OV-loaded cells can be delivered via INA to the CNS [15][16][17][18].Furthermore, our earlier research confirmed the delivery of mesenchymal stem cells (MSCs) to the tumor site via INA in a GBM mouse model [19].Accordingly, the use of shuttle cells such as MSCs to camouflage and effectively deliver OVs to the tumor has already shown promising results [20].In our study, we aimed to maximize the potential of OVT to primarily target the invasive and infiltrative GBM cells.For this, we used the OAV XVir-N-31 (also named Ad-Delo3RGD) [21], which, in our previous work as well as in other preclinical tumor models, demonstrated extensive therapeutic efficacy, ICD induction capabilities and curative potential when applied intratumorally [10,[22][23][24][25].The deletion of the adenoviral E1A13S protein renders the replication of XVir-N-31 dependent on the nuclear YB-1 expression, which is markedly upregulated in resistant GBM cells [26].We then utilized our optimized, highly motile, mCherry-expressing, hepatic, stellate shuttle cells, LX-2 FR , and applied them intranasally post-XVir-N-31-infection [27].Our newly developed LX-2 FR cells demonstrated, in a delayed-replication cycle, the production of infectious virus particles whilst retaining their superior migratory capabilities [27].In the present study, we showed that a single INA of XVir-N-31-loaded LX-2 FR cells significantly increased the survival and reduced the tumor sizes in an orthotopic mouse model harboring GBMs derived from established LN-229 GBM cells, as well as in a more representative, highly infiltratively growing, R28 glioma stem cell (GSC)-derived GBM mouse model [28].
Cell Lines and Viruses
LN-229 human glioma cells (Cellosaurus ID: CVCL_0393) were a kind gift from N. Tribolet (Geneva, Switzerland) and are described in detail by Ishii et al. [29].HEK293 cells were acquired from Microbix (Mississauga, ON, Canada; Cellosaurus ID: CVCL_0045).Both LN-229 and HEK293 cells were cultured in Dulbecco's modified Eagle's medium (DMEM) containing 10% fetal calf serum (FCS) and 1% penicillin-streptomycin (P/S).The R28 glioma stem cell (GSC) line was kindly provided by C. Beier (University Odense, Denmark) and maintained as tumor spheres in stem cell-permissive DMEM/F12 medium (Sigma Aldrich, Steinheim, Germany) supplemented with human recombinant epidermal growth factor (EGF) (BD Biosciences, Heidelberg, Germany), human recombinant basic fibroblast growth factor (bFGF) (R&D Systems Europe, Ltd., Minneapolis, MN, USA), human leukemia inhibitory factor (Millipore; 20 ng/mL each) and 2% B27 supplement (Thermo Fisher Scientific, Inc., Waltham, MA, USA).The R28 cell line is further described by the group of D. Beier [28].LN-229 and R28 cells expressing the green fluorescence protein (GFP) were produced via infection with Lenti-GFP (Amsbio, Frankfurt/Main, Germany).LX-2 cells, a kind gift from Scott Friedman (the Icahn School of Medicine at Mount Sinai, NY, USA; Cellosaurus ID: CVCL_5792), were cultivated in DMEM containing 2% FCS, 1% glutamine and 1% P/S (all from Sigma Aldrich, Darmstadt, Germany) and are described in detail by Xu et al. [30].LX-2 mCherry-positive "fast running" shuttle cells (LX-2 FR ) were generated via our previously developed and characterized method of the selection of highly migratory subpopulation of cells [19], followed by an infection with Lenti-mCherry, and are described in detail by El-Ayoubi et al. [27].All cells were cultured at 37 • C in a humidified, 5% CO 2 -containing atmosphere.All cell lines underwent a cell line authentication analysis in May 2023 (Eurofins, Ebersberg, Germany; please refer to Supplementary Figure S6) and were regularly tested for absence of mycoplasma using the MycoAlert mycoplasma detection kit (Lonza, Cologne, Germany).
XVir-N-31 was prepared, purified and titrated as previously described by Mantwill et al. and Klawitter et al. [10,25].To load the LX-2 FR cells with XVir-N-31, cells were infected with a multiplicity of infection (MOI) of 200 for 5 h, which is necessary to infect > 95% of the cells but does not influence the cells' motility [27], then were intensively washed with PBS to remove residual OVs that had not been taken up and were then directly used for INA [27].
Tumor Volumetry
Mouse brains were fixed in 4% paraformaldehyde (PFA), dehydrated in 20% and 30% sucrose and cryosectioned.Tissue slices were stained with Mayer's Hematoxylin Solution and 0.5% Eosin Y/ethanol solution (both Sigma-Aldrich, St. Louis, MO, USA) and washed under running tap water.Subsequent dehydration using an alcohol dilution series was followed by Permount mounting (Fisher Chemical; #202282).To calculate the tumor size, the starts and ends of the tumors were determined, and the area of the tumor was measured every 100 µm using ImageJ, as described by Klawitter et al. [10].The surface area multiplied by the thickness of the section (until the next section) gave the partial volume.The sum of all the partial volumes was used to estimate the complete tumor volume.
Animal Experiments
Animal experiments were conducted in accordance with the German Animal Welfare Act and its guidelines (e.g., 3R principle) and were approved by the regional council of Tübingen (approval N02/20G).NOD.Cg-Prkdcscid Il2rgtm1Wjl/SzJ mice (NSG mice; the Jackson Lab, Bar Harbor, ME, USA) were bred in IVC cages in the animal facility of the Hertie Institute under pathogen-free conditions.The stereotactic implantation of GBM cells has been described in detail by Klawitter et al. and Czolk et al. [10,24].Mice of both genders aged 2-6 months were randomized into the treatment groups.In summary, post-anesthesia and -analgesia, 1 × 10 5 R28 GFP or LN-229 GFP cells were stereotactically implanted into the right striatum.The mice were extensively monitored to avoid and reduce pain.INA was performed as described by El-Ayoubi et al. and Yu-Taeger et al. [27,31], using either PBS, 4 × 10 6 unloaded or XVir-N-31-loaded LX-2 FR shuttle cells.The intratumoral injection of XVir-N-31 (3 × 10 8 IFU) was performed as described by Klawitter et al. [10].Time points for INA were 28 days and, for intratumoral (IT) application, 21 days after tumor cell implantation in R28 GFP GBM-bearing mice, and 7 days for INA in LN-229 GFP GBM-bearing mice.
Statistical Analysis
For all in vivo experiments, the group and sample sizes are indicated in the figure legends.Kaplan-Meier survival studies were analyzed using the log-rank (Mantel-Cox) test.Further statistical analyses were conducted with a two-tailed Student's t-test or oneway ANOVA using GraphPad Prism 9.5.1 (GraphPad Inc., San Diego, CA, USA).The results are represented as the mean ± standard error mean (SEM), and p-values of <0.05 were considered statistically significant (n.s.: not significant; * p < 0.05; ** p < 0.01; *** p < 0.001; **** p < 0.0001).
XVir-N-31 Reaches the GBM after INA of OV-Loaded LX-2 FR Cells
As adenovirus subtype 5, the origin of XVir-N-31, does not efficiently replicate in mouse cells, human GBM xenograft models in mice were used in this study.As previous in vivo experiments showed that the INA of optimized, "fast running", highly motile, hepatic, stellate shuttle cells (LX-2 FR ) reached LN-229-derived GBMs in mice, hitting both the tumor core as well as its infiltration zones [27], we wanted to investigate whether loading these cells with XVir-N-31 displayed a similar outcome, as well as whether INA can be used as a therapeutic to cargo OVs to the tumor.Therefore, we performed a single INA of Cancers 2023, 15, 4912 5 of 16 4 × 10 6 LX-2 FR shuttle cells that had been infected with a 200 MOI of XVir-N-31 (LX-2/XVir) 5 h prior to the INA to LN-229 GFP GBM-bearing mice at a time point at which the tumor developed a size of approximately 2 mm in diameter.We performed immunofluorescence analyses to identify the shuttle cells, XVir-N-31 and its replication in the tumor region at several time points post-treatment.A strong colocalization of the XVir-N-31 and LX-2 FR cells in close proximity to the tumor was observed at day 3 post-INA (Figure 1).At this time point, the presence of XVir-N-31, indicated by the adenoviral hexon protein, was exclusively detected in the LX-2 FR cells.At later time points (days 12 and 18 post-INA), no LX-2 FR cells were detectable anymore, most likely due to the OV-mediated cell lysis and the release of OV progeny.At these later time points, XVir-N-31 had spread throughout the tumor, with its replication now confined to GBM cells.
As adenovirus subtype 5, the origin of XVir-N-31, does not efficiently replicate in mouse cells, human GBM xenograft models in mice were used in this study.As previous in vivo experiments showed that the INA of optimized, "fast running", highly motile, hepatic, stellate shuttle cells (LX-2 FR ) reached LN-229-derived GBMs in mice, hitting both the tumor core as well as its infiltration zones [27], we wanted to investigate whether loading these cells with XVir-N-31 displayed a similar outcome, as well as whether INA can be used as a therapeutic to cargo OVs to the tumor.Therefore, we performed a single INA of 4 × 10 6 LX-2 FR shuttle cells that had been infected with a 200 MOI of XVir-N-31 (LX-2/XVir) 5 h prior to the INA to LN-229 GFP GBM-bearing mice at a time point at which the tumor developed a size of approximately 2 mm in diameter.We performed immunofluorescence analyses to identify the shuttle cells, XVir-N-31 and its replication in the tumor region at several time points post-treatment.A strong colocalization of the XVir-N-31 and LX-2 FR cells in close proximity to the tumor was observed at day 3 post-INA (Figure 1).At this time point, the presence of XVir-N-31, indicated by the adenoviral hexon protein, was exclusively detected in the LX-2 FR cells.At later time points (days 12 and 18 post-INA), no LX-2 FR cells were detectable anymore, most likely due to the OV-mediated cell lysis and the release of OV progeny.At these later time points, XVir-N-31 had spread throughout the tumor, with its replication now confined to GBM cells.Furthermore, in a satellite tumor, which is often seen in multifocal GBMs and is suggested to be derived from infiltrating GBM cells [32,33], the hexon protein, which indicates XVir-N-31 replication, was also evident 18 days after treatment (Figure 2).No hexon staining outside the tumor areas was observed (Figure 1, day 18).The prominent hexon staining within the core tumor and its satellite not only indicates the presence of XVir-N-31, but also its infectious chain reaction in GBM cells.Furthermore, in a satellite tumor, which is often seen in multifocal GBMs and is suggested to be derived from infiltrating GBM cells [32,33], the hexon protein, which indicates XVir-N-31 replication, was also evident 18 days after treatment (Figure 2).No hexon staining outside the tumor areas was observed (Figure 1, day 18).The prominent hexon staining within the core tumor and its satellite not only indicates the presence of XVir-N-31, but also its infectious chain reaction in GBM cells.
In LN-229-Derived GBMs, the INA of LX-2/XVir Induced Immunogenic Cell Death, Reduced Tumor Growth and Extended the Survival of Tumor-Bearing Mice
To investigate the therapeutic potential of the INA of LX-2/XVir, we firstly examined its capability to induce immunogenic cell death (ICD).ICD induction in tumor cells through OVs is a warrant of their therapeutic efficiency and an important driver in the induction of a specific anti-tumoral immune response [6,10].A substantial indication of ICD is the release of DAMPs.We examined the DAMPs HMGB1 and HSP70 in addition to the immunogenic protein YB-1 [34,35].In accordance with the spreading of XVir-N-31 in the LN-229 GBMs after the INA of LX-2/XVir (Figure 1), HMGB1 was clearly evident within the tumor area 12 days after the INA of LX-2/XVir, but not if unloaded shuttle cells were intranasally applied.HMGB1 staining clearly manifested at later time points after the INA.Interestingly,
In LN-229-Derived GBMs, the INA of LX-2/XVir Induced Immunogenic Cell Death, Reduced Tumor Growth and Extended the Survival of Tumor-Bearing Mice
To investigate the therapeutic potential of the INA of LX-2/XVir, we firstly examined its capability to induce immunogenic cell death (ICD).ICD induction in tumor cells through OVs is a warrant of their therapeutic efficiency and an important driver in the induction of a specific anti-tumoral immune response [6,10].A substantial indication of ICD is the release of DAMPs.We examined the DAMPs HMGB1 and HSP70 in addition to the immunogenic protein YB-1 [34,35].In accordance with the spreading of XVir-N-31 in the LN-229 GBMs after the INA of LX-2/XVir (Figure 1), HMGB1 was clearly evident within the tumor area 12 days after the INA of LX-2/XVir, but not if unloaded shuttle cells were intranasally applied.HMGB1 staining clearly manifested at later time points after the INA.Interestingly, 18 days after the INA, some hot spots of the HMGB1 staining were visible, whilst in most of the tumor area, HMGB1 was uniformly distributed (Figure 3, lower two panels).Comparable results were observed for HSP70 and YB-1.All three proteins colocalized with the adenoviral hexon protein, indicating the presence of XVir-N-31-infected cells in this area (Figure 4).Furthermore, in the infiltration zone adjacent to the tumor core, we also detected HMGB1, HSP70 and YB1, consistently colocalizing with the adeno- Comparable results were observed for HSP70 and YB-1.All three proteins colocalized with the adenoviral hexon protein, indicating the presence of XVir-N-31-infected cells in this area (Figure 4).Furthermore, in the infiltration zone adjacent to the tumor core, we also detected HMGB1, HSP70 and YB1, consistently colocalizing with the adenoviral hexon protein (Supplementary Figure S1), indicating that ICD was induced exclusively in the XVir-N-31-infected GBM cells.These promising results prompted us to examine the therapeutic impact of LX-2/XVir-based INA in LN-229 GFP GBM-bearing mice by determining the survival as well as the tumor growth.Mice that received a single INA of LX-2/XVir showed significantly smaller tumors than the sham-treated mice, in which the tumor cells spread in nearly half of the hemisphere (Figure 5A,B).Additionally, the INA of LX-2/XVir significantly extended the survival of the mice compared to those animals that received either PBS (sham) or unloaded shuttle cells (Figure 5C,D).Moreover, the treatment was not harmful to the animals, as none of the mice showed any treatment-related symptoms in behavior or even weight loss (Figure 5E).These promising results prompted us to examine the therapeutic impact of LX-2/XVirbased INA in LN-229 GFP GBM-bearing mice by determining the survival as well as the tumor growth.Mice that received a single INA of LX-2/XVir showed significantly smaller tumors than the sham-treated mice, in which the tumor cells spread in nearly half of the hemisphere (Figure 5A,B).Additionally, the INA of LX-2/XVir significantly extended the survival of the mice compared to those animals that received either PBS (sham) or unloaded shuttle cells (Figure 5C,D).Moreover, the treatment was not harmful to the animals, as none of the mice showed any treatment-related symptoms in behavior or even weight loss (Figure 5E).
INA of LX-2/XVir Provides Therapeutical Impact in Mice Bearing Highly Infiltrating, Glioma Stem Cell-Derived GBMs
LN-229 GBMs present tumors that show tumor cell infiltration in mice and can be used as a tumor model for rapidly growing tumors.Unfortunately, LN-229 GFP -derived GBMs do not show the high level of infiltrative growth that is observed in most human patients.In contrast, glioma stem cells (GSCs), like the R28 cells that were derived from primary human glioblastomas and cultured as neurospheres in a stem cell medium, more closely mirror the phenotype and genotype of primary tumors than do established GBM cell lines that were cultured in a high-glucose and FCS-containing medium.It has been described that fetal calf serum induces the differentiation of stem cells and therefore influences the phenotype of serum-cultured cells [36].In the brains of immunocompromised mice, GSCs developed tumors with an elevated infiltrative growth capacity compared to LN-229-derived GBMs [36,37].Therefore, GSC-derived GBMs might present a more clinically relevant model of experimental GBM.As our goal was to reach, via the INA of LX-2/XVir, tumor cells not only in the original tumor area, but also the infiltrated tumor cells not eradicated via intratumorally injected OVs, we used NSG mice bearing R28 GFP GSCderived tumors, which grow slowly [25] but highly infiltrate the surrounding brain parenchyma and are even able to invade the contralateral hemisphere (Supplementary Figure S2).Even if the volumes of the R28 GFP tumors in the sham-treated animals were highly variable, the tumor sizes were significantly reduced after the INA of LX-2/XVir (Figure
INA of LX-2/XVir Provides Therapeutical Impact in Mice Bearing Highly Infiltrating, Glioma Stem Cell-Derived GBMs
LN-229 GBMs present tumors that show tumor cell infiltration in mice and can be used as a tumor model for rapidly growing tumors.Unfortunately, LN-229 GFP -derived GBMs do not show the high level of infiltrative growth that is observed in most human patients.In contrast, glioma stem cells (GSCs), like the R28 cells that were derived from primary human glioblastomas and cultured as neurospheres in a stem cell medium, more closely mirror the phenotype and genotype of primary tumors than do established GBM cell lines that were cultured in a high-glucose and FCS-containing medium.It has been described that fetal calf serum induces the differentiation of stem cells and therefore influences the phenotype of serum-cultured cells [36].In the brains of immunocompromised mice, GSCs developed tumors with an elevated infiltrative growth capacity compared to LN-229-derived GBMs [36,37].Therefore, GSC-derived GBMs might present a more clinically relevant model of experimental GBM.As our goal was to reach, via the INA of LX-2/XVir, tumor cells not only in the original tumor area, but also the infiltrated tumor cells not eradicated via intratumorally injected OVs, we used NSG mice bearing R28 GFP GSC-derived tumors, which grow slowly [25] but highly infiltrate the surrounding brain parenchyma and are even able to invade the contralateral hemisphere (Supplementary Figure S2).Even if the volumes of the R28 GFP tumors in the sham-treated animals were highly variable, the Cancers 2023, 15, 4912 9 of 16 tumor sizes were significantly reduced after the INA of LX-2/XVir (Figure 6A).At the time point at which we measured the tumor volumes, which was the day that the first mouse displayed tumor burden symptoms, some tumors were tiny and nearly not detectable (Figure 6B).
Cancers 2023, 15, x FOR PEER REVIEW 10 of 17 6A).At the time point at which we measured the tumor volumes, which was the day that the mouse displayed tumor burden symptoms, some tumors were tiny and nearly not detectable (Figure 6B).As we successfully used intratumoral (IT) injections of XVir-N-31 to treat R28 GBMs [25], we were interested in whether the therapeutic impact of LX-2/XVir-based INA is comparable to that of intratumorally (IT) applied XVir-N-31, and whether a combination of INA and IT might further enhance the therapeutic impact of this OVT.As shown in Figure 6C, both the LX-2/XVir-based INA and the intratumoral injection of XVir-N-31 prolonged the survival of the R28 GFP GBM-bearing mice.The median survival of the shamtreated mice that received only the vehicle (PBS) was 151 days, the median survival for the mice that intratumorally received XVir-N-31 was 214 days and the median survival for the group of animals we treated via LX-2/XVir-based INA was 224 days, signifying the superior effect of the INA:LX-2/XVir-based OVT (Figure 6D).Again, the mice did not suffer from the therapy, and they did not lose weight (Figure 6E).Notably, the combination As we successfully used intratumoral (IT) injections of XVir-N-31 to treat R28 GBMs [25], we were interested in whether the therapeutic impact of LX-2/XVir-based INA is comparable to that of intratumorally (IT) applied XVir-N-31, and whether a combination of INA and IT might further enhance the therapeutic impact of this OVT.As shown in Figure 6C, both the LX-2/XVir-based INA and the intratumoral injection of XVir-N-31 prolonged the survival of the R28 GFP GBM-bearing mice.The median survival of the sham-treated mice that received only the vehicle (PBS) was 151 days, the median survival for the mice that intratumorally received XVir-N-31 was 214 days and the median survival for the group of animals we treated via LX-2/XVir-based INA was 224 days, signifying the superior effect of the INA:LX-2/XVir-based OVT (Figure 6D).Again, the mice did not suffer from the therapy, and they did not lose weight (Figure 6E).Notably, the combination of IT and INA further prolonged the animals' survival, and in 2/8 mice at the end of the experiment (day 350 post-GBM-cell-implantation), no tumors were detected (Supplementary Figure S3).Finally, the INA of LX-2/XVir gave a significant survival advantage, and as we observed no tumors in 25% of the mice in the combination group, we investigated whether XVir-N-31 is still actively present for a long time after treatment.Therefore, hexon staining was performed in the tumor areas of the R28 GFP GBM-bearing mice that had to be sacrificed after the display of tumor symptoms.Surprisingly, with no trace of the LX-2 FR shuttle cells, the hexon staining was specifically colocalized with the R28 GFP cells, independent of whether the mice received the OV intratumorally or via INA (Figure 7).However, we identified slightly more hexon staining in the INA and the combination of INA and IT than in the single-IT-treated animals.In an animal of the INA group, we also observed hexon staining in an infiltration area of an R28 GFP GBM (Figure 7B). of IT and INA further prolonged the animals' survival, and in 2/8 mice at the end of the experiment (day 350 post-GBM-cell-implantation), no tumors were detected (Supplementary Figure S3).
XVir-N-31 Is Present Long Term after INA of LX-2/XVir
Finally, as the INA of LX-2/XVir gave a significant survival advantage, and as we observed no tumors in 25% of the mice in the combination group, we investigated whether XVir-N-31 is still actively present for a long time after treatment.Therefore, hexon staining was performed in the tumor areas of the R28 GFP GBM-bearing mice that had to be sacrificed after the display of tumor symptoms.Surprisingly, with no trace of the LX-2 FR shuttle cells, the hexon staining was specifically colocalized with the R28 GFP cells, independent of whether the mice received the OV intratumorally or via INA (Figure 7).However, we identified slightly more hexon staining in the INA and the combination of INA and IT than in the single-IT-treated animals.In an animal of the INA group, we also observed hexon staining in an infiltration area of an R28 GFP GBM (Figure 7B).We also examined whether the capability of XVir-N-31 to induce ICD was maintained at this late time point after treatment.Indeed, in the mice that received XVir-N-31 We also examined whether the capability of XVir-N-31 to induce ICD was maintained at this late time point after treatment.Indeed, in the mice that received XVir-N-31 intratumorally, HMGB1, HSP70 and YB-1 were evident in the core tumor for a long time post-treatment.Similarly, the INA:LX-2/XVir (140 d post-treatment)-and INA/IT combination (182 d post-treatment)-treated mice also showed HMGB1, HSP70 and YB-1 staining in the tumor, which, in most tumor areas, was more or less equally distributed, whereas in a few areas, like in LN-229 GFP tumors at day 18 (Figure 3), hot spots of HMGB1 staining were visible (Figure 8, Supplementary Figure S4).Interestingly, whilst R28 GFP GSCs spread from the core tumor throughout the brain, demonstrating the strong infiltrative ability of these cells, neither HMGB1, HSP70, YB-1 nor hexon stains were uniformly evident throughout the brain or the complete tumor.Even in animals that received the combination treatment, there were tumor areas in which none of these proteins was detected (Supplementary Figure S5).Collectively, these observations indicate that after more than 6 months post-INA-based-OVT, XVir-N-31 is able to replicate and additionally induces ICD in infected tumor cells.3), hot spots of HMGB1 staining were visible (Figure 8, Supplementary Figure S4).Interestingly, whilst R28 GFP GSCs spread from the core tumor throughout the brain, demonstrating the strong infiltrative ability of these cells, neither HMGB1, HSP70, YB-1 nor hexon stains were uniformly evident throughout the brain or the complete tumor.Even in animals that received the combination treatment, there were tumor areas in which none of these proteins was detected (Supplementary Figure S5).Collectively, these observations indicate that after more than 6 months post-INA-based-OVT, XVir-N-31 is able to replicate and additionally induces ICD in infected tumor cells.
Discussion
Even with recent advances, and in spite of promising clinical and preclinical data [38][39][40], OVT has some limitations in the treatment of GBM.Firstly, most patients possess antibodies against the therapeutically available OVs, especially against adenoviruses, leading to their fast inactivation.Therefore, OVs targeting solid tumors cannot be applied systemically but rather have to be injected intratumorally.However, when applied intratumorally, non-neoplastic cells surrounding the injection site or the tumor core hamper the spreading of OVs to invasively growing GBM cells that are often located distantly from the intratumoral virus injection site.Yet, these invasive GBM cells frequently harbor stem cell characteristics and are mainly therapy-resistant [36,37].Therefore, it is crucial to eliminate the invaded GBM cells to provide a "longer term" survival benefit to the patient.Secondly, an intact BBB, which is present in the tumor infiltration zone and healthy brain where invaded GBM cells are localized, restricts the delivery of therapeutics, including OVs.The intact BBB is an additional hurdle for an effective GBM cell-targeted therapy.In our recent study, we investigated the potential of the intranasal delivery of optimized
Discussion
Even with recent advances, and in spite of promising clinical and preclinical data [38][39][40], OVT has some limitations in the treatment of GBM.Firstly, most patients possess antibodies against the therapeutically available OVs, especially against adenoviruses, leading to their fast inactivation.Therefore, OVs targeting solid tumors cannot be applied systemically but rather have to be injected intratumorally.However, when applied intratumorally, nonneoplastic cells surrounding the injection site or the tumor core hamper the spreading of OVs to invasively growing GBM cells that are often located distantly from the intratumoral virus injection site.Yet, these invasive GBM cells frequently harbor stem cell characteristics and are mainly therapy-resistant [36,37].Therefore, it is crucial to eliminate the invaded GBM cells to provide a "longer term" survival benefit to the patient.Secondly, an intact BBB, which is present in the tumor infiltration zone and healthy brain where invaded GBM cells are localized, restricts the delivery of therapeutics, including OVs.The intact BBB is an additional hurdle for an effective GBM cell-targeted therapy.In our recent study, we investigated the potential of the intranasal delivery of optimized shuttle cells loaded with the OAV XVir-N-31 to not only reach GBM cells in the tumor core, which are hit by the intratumoral delivery of OVs, but also to reach and ultimately destroy invaded GBM cells distant from the original core tumor.In several foregoing studies, it has been shown that XVir-N-31 competently replicates in glioma cells, eventually inducing oncolysis and prolonging the survival of GBM-bearing mice [10,24,25].Compared to the intratumoral injection or convection-enhanced delivery of OVs to brain tumors that need surgery [41], INA is non-invasive and a more targeted method to transport drugs like OVs into the brain than their systemic delivery.
To confirm that the INA of LX-2/XVir is an effective method to deliver XVir-N-31 to (invaded) GBM cells, that it replicates in and eliminates GBM cells and that it might provide a survival benefit that is equal to or even better than its intratumoral delivery counterpart, we used two orthotopic xenograft GBM mouse models.Whilst LN-229 GFP tumors show rapid growth and minor invasive potential [42], R28 GFP GSC-derived tumors grow slowly but invasively [25] (Supplementary Figure S2).Three days after the INA, the XVir-N-31loaded shuttle cells were present in close proximity to the tumor area (Figure 1), indicating sufficient transport into the brain and towards the tumor.A time period of 72-96 h has been shown to cover one replication cycle of XVir-N-31 in LX-2 cells and defines the time point at which the cells will stop migration, as they will be lysed via OV replication and virus release.This might cast doubt on whether, in humans, this period is long enough for the shuttle cells to travel from the nose to the brain and towards invaded GBM cells.However, LX-2 FR shuttle cells delivered via INA were observed in all the brain areas of mice at about 6 h after INA, with no effect on the cells' migratory capacity after loading them with XVir-N-31 [27].We believe that the observed rapid transfer into the brain might give OV-loaded LX-2 FR shuttle cells enough time, even in humans, to travel from the nose to the brain and, once there, reach GBM cells.Shortly after, XVir-N-31 spread to the GBM cells in the tumor core, the infiltration zones as well as the microsatellite tumors, where it further replicated, as shown by the hexon staining (Figures 1, 2 and 4, Supplementary Figure S1).In the R28 GFP tumors, XVir-N-31 replication was detectable even a long time (up to more than 180 d) post-treatment, indicating a sufficient transport of XVir-N-31 and its infection of GBM cells via the INA of OV-loaded optimized shuttle cells, as well as a sufficient OV amount to start the chain reaction of virus replication in tumor cells.The successful cargo of XVir-N-31 to GBM cells was confirmed by the prolonged survival of both the LN-229 GFP and R28 GFP GBM-bearing mice that received the INA of LX-2/XVir.Additionally, these mice harbored smaller tumors than the mice that intranasally received either PBS or unloaded shuttle cells (Figures 5 and 6).
We were interested in determining whether INA, as a non-invasive method for delivering cargo OVs into the brain, is equally effective in its therapeutic impact compared to the intratumoral delivery of OVs.To investigate this, we used the highly invasive R28 GFP GBM mouse model, from which we knew from preliminary studies that a single intratumoral injection of XVir-N-31 significantly prolonged survival [25], and we applied the OAV either via the INA of LX-2/XVir or via an IT injection of XVir-N-31.In addition, we combined both application methods.For the combination, we first performed the IT injection of XVir-N-31, and seven days later we performed the INA of LX-2/XVir.As shown in Figure 6, the therapeutic effects of the INA and IT delivery of XVir-N-31 were comparable based on the survival analysis, suggesting that the INA-of-LX-2/XVir approach is a feasible option to treat GBMs.Notably, the combination of the INA/IT-based delivery of XVir-N-31 further extended the mice's survival, and in 2 out of 8 mice, no tumors were detectable one year after therapy.However, in those mice of the combination group that developed tumors and had to be sacrificed, areas in the tumor presented no virus detection, indicating that, in this highly infiltrative tumor, further hurdles, like the partial encapsulation of GBM cells by non-neoplastic cells, or the very rapid growth of the GBM cells in some regions, might limit the therapeutic impact.Therefore, in future studies, the IT/INA combination should be further optimized, for example, by using the recurrent INA of LX-2/XVir or shuttle cell loading with different OV species.The former bears the putative potential that, even if XVir-N-31 will be applied to the immunoprivileged brain, and the OAV is disguised by the INA of virus-loaded shuttle cells, neutralizing antibodies might be generated after the first cycle of OVT.
As the therapeutic impact of an OV-based therapy is not only evoked by the oncolysismediated killing of tumor cells, but also, and probably largely, by the induction of ICD and the subsequent enabling of an anti-tumoral immune response [10,43,44], we also determined whether the XVir-N-31-based OVT induces ICD and the duration after treatment that the induction of the ICD lasts.Indeed, ICD, identified via the detection of DAMPs such as HMGB1 and HSP70, or via the immunogenic protein YB-1 [9,12,45], was induced in both the core as well the infiltration zones of the LN-229 GFP and R28 GFP GBMs (Figures 3, 4 and 8, Supplementary Figure S1), where it colocalized with the adenoviral hexon protein, indicating that ICD is exclusively induced in XVir-N-31-infected cells.In the GBM-bearing mice that received the LX-2/XVir-based OVT, the massive induction of ICD, as indicated by HMGB1, was observed at later time points after treatment.Interestingly, in some tumor areas, but independently of whether INA or IT was performed, hot spots of HMGB1 staining were observed (Figure 3, second-last panel).As ICD induction is strictly dependent on XVir-N-31 replication [10], we believe that these hot spots indicate ongoing OV replication in these areas.In addition, the detection of HMGB1 and HSP70 months after treatment that we observed in the R28 GFP GBM-bearing mice that received XVir-N-31-based OVT suggests that ICD induction lasts as long as virus replication occurs (Figure 8, Supplementary Figure S4).Regrettably, there were also tumor areas where no virus replication was detectable and therefore no ICD induction was visible (Supplementary Figure S5), indicating that even using INA-based OVT or a combination of INA and IT induces a uniform distribution of the OV in the tumor area.
Unfortunately, in our xenograft models using immunocompromised mice, it was not possible to measure the immunostimulating effects of XVir-N-31 on the tumor growth and survival.Nevertheless, the therapeutic benefit of ICD induction via XVir-N-31 was demonstrated in our previous study, in which we used immunohumanized mice and conducted an XVir-N-31-based OVT [10].We believe that in an immunocompetent system, particularly the IT:XVir plus INA:LX-2/XVir combination might further enhance the therapeutic impact we still observed in immunocompromised mice (Figures 5 and 6, Supplementary Figure S3).
Recently, INA succeeded in an FIH clinical trial and was proven safe in pediatric patients after the INA of allogeneic stem cells [46].To this end, our study provides the first direct evidence and the translational background for the establishment of intranasal, OV-loaded, somatic differentiated cells (such as LX-2FR) for targeting invaded GBM cells alone or in combination with the local OV therapy of the tumor core.
Conclusions
In conclusion, the INA of LX-2/XVir represents a promising therapeutic approach for GBM.This study demonstrated that this non-invasive, safe and effective delivery method significantly extends the survival of GBM-bearing mice, offering new hope in the fight against this aggressive brain tumor.The utilization of XVir-N-31 harnesses the tumor-selective replication and subsequent destruction of GBM cells, while the optimized LX-2 FR shuttle cells efficiently deliver XVir-N-31 into the tumor and also invade GBM cells, which are often located at a far distance from the original tumor.This non-invasive, intranasal route not only enhances drug delivery but also minimizes systemic side effects and provides the potential for repeated treatments.The findings accentuate the potential clinical significance of INA in human patients, providing a less invasive and safer approach than conventional treatments.Further translational research is warranted to validate these results in human clinical trials, but this study represents a crucial step towards a
17 Figure 3 .
Figure 3. HMGB1 is present in LN-229 GFP GBMs post-INA of LX-2/XVir.Upper panel presents immunofluorescence (IF) stains of mice that received INA of unloaded shuttle cells, whereas lower panels present IF analyses of mice that received INA of LX-2/XVir.IF analyses were performed at the indicated time points after INA (n = 3 mice per group; representative pictures are shown; bars = 20 µm).
Figure 3 .
Figure 3. HMGB1 is present in LN-229 GFP GBMs post-INA of LX-2/XVir.Upper panel presents immunofluorescence (IF) stains of mice that received INA of unloaded shuttle cells, whereas lower panels present IF analyses of mice that received INA of LX-2/XVir.IF analyses were performed at the indicated time points after INA (n = 3 mice per group; representative pictures are shown; bars = 20 µm).
Figure 5 .
Figure 5. INA of LX-2/XVir significantly prolongs survival and reduces tumor volume in LN-229 GFP GBM-bearing mice.(A) INA of LX-2/XVir significantly reduced the tumor volume compared to sham-treated mice.All mice were sacrificed at the time point at which the first mouse developed tumor-associated symptoms.Brains were stained with HE and tumor volumetry was performed as indicated in the Materials and Methods section (n = 7-8 per group, means and SEMs, t-test, * p < 0.05).(B) Representative images of the tumors.The arrow indicates a small tumor we detected in one animal of the treatment group.Pictures were taken at 10 x magnification and were combined using a tile scan function.(C) Kaplan-Meier survival analysis of mice bearing LN-229 GFP GBMs that received either INA of PBS (sham), INA of 4 × 10 6 unloaded LX-2 FR shuttle cells or INA of 4 × 10 6 LX-2 FR cells infected with 200 MOI of XVir-N-31 (n = 8 mice per group, ANOVA; * p < 0.05; *** p < 0.001).(D) Mean and median survival.(E) Weight curves of the mice of the experimental groups (n = 7-8 mice per group, means and SEMs).
Figure 5 .
Figure 5. INA of LX-2/XVir significantly prolongs survival and reduces tumor volume in LN-229 GFP GBM-bearing mice.(A) INA of LX-2/XVir significantly reduced the tumor volume compared to sham-treated mice.All mice were sacrificed at the time point at which the first mouse developed tumor-associated symptoms.Brains were stained with HE and tumor volumetry was performed as indicated in the Materials and Methods section (n = 7-8 mice per group, means and SEMs, t-test, * p < 0.05).(B) Representative images of the tumors.The arrow indicates a small tumor we detected in one animal of the treatment group.Pictures were taken at 10 x magnification and were combined using a tile scan function.(C) Kaplan-Meier survival analysis of mice bearing LN-229 GFP GBMs that received either INA of PBS (sham), INA of 4 × 10 6 unloaded LX-2 FR shuttle cells or INA of 4 × 10 6 LX-2 FR cells infected with 200 MOI of XVir-N-31 (n = 8 mice per group, ANOVA; * p < 0.05; *** p < 0.001).(D) Mean and median survival.(E) Weight curves of the mice of the experimental groups (n = 7-8 mice per group, means and SEMs).
Figure 6 .
Figure 6.INA of LX-2/XVir significantly prolongs survival and reduces tumor volume in mice bearing R28 GFP -derived GBMs.(A) Tumor volumetry showed significantly smaller tumors in mice that received INA of LX-2/XVir compared to sham-treated mice that received INA of PBS.Mice were sacrificed at the time point at which the first mouse presented tumor burden symptoms (n = 5-6 mice per group; means and SEMs; ANOVA; * p < 0.05).(B) Representative HE images of the tumors.(C) Kaplan-Meier survival analysis of mice bearing R28 GFP GBMs that received either INA of PBS plus a single IT injection of PBS (sham), INA of LX-2/XVir plus an IT injection of PBS, INA of PBS plus an IT injection of 3 × 10 8 IFU XVir-N-31 (IT:XVir) or the combination of INA of LX-2/XVir plus IT of XVir (n = 7-8 mice per group; ANOVA; ** p < 0.01).(D) Mean and median survival.The mean for the combination group was calculated presuming the experiment had to be terminated at 350 days post-tumor-implantation (*: Mean survival calculation at experiment termination at day 350 post tumor-implantation.(E) Weight curves of the mice of the experimental groups (n = 7-8 mice per group, means and SEMs).
Figure 6 .
Figure 6.INA of LX-2/XVir significantly prolongs survival and reduces tumor volume in mice bearing R28 GFP -derived GBMs.(A) Tumor volumetry showed significantly smaller tumors in mice that received INA of LX-2/XVir compared to sham-treated mice that received INA of PBS.Mice were sacrificed at the time point at which the first mouse presented tumor burden symptoms (n = 5-6 mice per group; means and SEMs; ANOVA; * p < 0.05).(B) Representative HE images of the tumors.(C) Kaplan-Meier survival analysis of mice bearing R28 GFP GBMs that received either INA of PBS plus a single IT injection of PBS (sham), INA of LX-2/XVir plus an IT injection of PBS, INA of PBS plus an IT injection of 3 × 10 8 IFU XVir-N-31 (IT:XVir) or the combination of INA of LX-2/XVir plus IT of XVir (n = 7-8 mice per group; ANOVA; ** p < 0.01).(D) Mean and median survival.The mean for the combination group was calculated presuming the experiment had to be terminated at 350 days post-tumor-implantation (*: Mean survival calculation at experiment termination at day 350 post tumor-implantation.(E) Weight curves of the mice of the experimental groups (n = 7-8 mice per group, means and SEMs).
Figure 7 .
Figure 7. XVir-N-31 still replicates in R28 GFP -derived GBMs a long time after OVT.The mice were treated as indicated in Figure 6.(sham: INA:PBS + IT:PBS; IT: INA:PBS + IT:XVir; INA: INA:LX-2/XVir + IT:PBS; Combi: INA:LX-2/XVir + IT:XVir).Days indicate the time points at which the animals had to be sacrificed due to tumor-associated symptoms, and, by this time, the immunofluorescence analysis had been performed.(A) Absence of hexon staining in the tumor region of shamtreated mice, whilst hexon staining was detectable in all OVT-treated animals.(B) Hexon staining in the infiltration zone of an R28 GFP -derived GBM 140 days after INA:LX-2/XVir plus IT:PBS (n = 3 mice per group; representative pictures are shown; bars = 20 µm).
Figure 7 .
Figure 7. XVir-N-31 still replicates in R28 GFP -derived GBMs a long time after OVT.The mice were treated as indicated in Figure 6.(sham: INA:PBS + IT:PBS; IT: INA:PBS + IT:XVir; INA: INA:LX-2/XVir + IT:PBS; Combi: INA:LX-2/XVir + IT:XVir).Days indicate the time points at which the animals had to be sacrificed due to tumor-associated symptoms, and, by this time, the immunofluorescence analysis had been performed.(A) Absence of hexon staining in the tumor region of sham-treated mice, whilst hexon staining was detectable in all OVT-treated animals.(B) Hexon staining in the infiltration zone of an R28 GFP -derived GBM 140 days after INA:LX-2/XVir plus IT:PBS (n = 3 mice per group; representative pictures are shown; bars = 20 µm).
Cancers 2023 ,
15, x FOR PEER REVIEW 12 of 17 intratumorally, HMGB1, HSP70 and YB-1 were evident in the core tumor for a long time post-treatment.Similarly, the INA:LX-2/XVir (140 d post-treatment)-and INA/IT combination (182 d post-treatment)-treated mice also showed HMGB1, HSP70 and YB-1 staining in the tumor, which, in most tumor areas, was more or less equally distributed, whereas in a few areas, like in LN-229 GFP tumors at day 18 (Figure
Figure 8 .
Figure 8. XVir-N-31 still induces ICD in R28 GFP -derived GBMs a long time after OVT.HMGB1 is detected in 3 different mice of each group harboring R28 GFP tumors a long time after treatment.Absence of HMGB1 in sham (INA:PBS + IT:PBS)-treated mice, but detection of HMGB1 in all treatment groups for a long time post-XVir-N-31-administration, regardless of the administration method, indicates the induction of ICD in XVir-N-31-infected GBM cells.The time points indicate the period post-treatment that the mice had to be sacrificed due to tumor-associated symptoms (n = 3 mice per group; representative pictures are shown; bars = 20 µm).
Figure 8 .
Figure 8. XVir-N-31 still induces ICD in R28 GFP -derived GBMs a long time after OVT.HMGB1 is detected in 3 different mice of each group harboring R28 GFP tumors a long time after treatment.Absence of HMGB1 in sham (INA:PBS + IT:PBS)-treated mice, but detection of HMGB1 in all treatment groups for a long time post-XVir-N-31-administration, regardless of the administration method, indicates the induction of ICD in XVir-N-31-infected GBM cells.The time points indicate the period post-treatment that the mice had to be sacrificed due to tumor-associated symptoms (n = 3 mice per group; representative pictures are shown; bars = 20 µm). | 2023-10-14T16:00:45.313Z | 2023-10-01T00:00:00.000 | {
"year": 2023,
"sha1": "3b874b7a90612002199d06d9e308c00a2293d802",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-6694/15/20/4912/pdf?version=1696918570",
"oa_status": "CLOSED",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f2066c60fa95e17400db2fed56001c977faad29",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
118646548 | pes2o/s2orc | v3-fos-license | Urban Energy Flux: Human Mobility as a Predictor for Spatial Changes
As a key energy challenge, we urgently require a better understanding of how growing urban populations interact with municipal energy systems and the resulting impact on energy demand across city neighborhoods, which are dense hubs of both consumer population and CO2 emissions. Currently, the physical characteristics of urban infrastructure are the main determinants in predictive modeling of the demand side of energy in our rapidly growing urban areas; overlooking influence related to fluctuating human activities. Here, we show how applying intra-urban human mobility as an indicator for interactions of the population with local energy systems can be translated into spatial imprints to predict the spatial distribution of energy use in urban settings. Our findings establish human mobility as an important element in explaining the spatial structure underlying urban energy flux and demonstrate the utility of a human mobility driven approach for predicting future urban energy demand with implications for CO2 emission strategies.
As a key energy challenge, we urgently require a better understanding of how growing urban populations interact with municipal energy systems and the resulting impact on energy demand across city neighborhoods, which are dense hubs of both consumer population and CO2 emissions. Currently, the physical characteristics of urban infrastructure are the main determinants in predictive modeling of the demand side of energy in our rapidly growing urban areas; overlooking influence related to fluctuating human activities. Here, we show how applying intra-urban human mobility as an indicator for interactions of the population with local energy systems can be translated into spatial imprints to predict the spatial distribution of energy use in urban settings. Our findings establish human mobility as an important element in explaining the spatial structure underlying urban energy flux and demonstrate the utility of a human mobility driven approach for predicting future urban energy demand with implications for CO2 emission strategies.
The earth's rapidly expanding urban spaces are growing in terms of both technology and population at a rapid rate, creating the most complex built environments in human history. A 2014 United Nations report announced that 54% of the world's population now resides in urban areas 1 . It was not until 1950 that New York became the world's first megacity with a population of 10 million or more inhabitants 2 , but over the following decades others joined the category and today's 28 megacities are projected to increase to 41 by 2030 1 . A growth of this magnitude has significant implications for global energy, as urban areas are major consumers (up to 80%) of the world's total energy production, and an increase of up to 56% in global energy requirements has been predicted between 2010 and 2040 3 . Managing and allocating resources and generating credible predictions of future energy demand requires a clear understanding of the spatial distribution and patterns of urban energy consumption by identifying the factors and indicators that determine and influence the demand side of energy.
The spatial distribution of energy use in urban areas depends on human activities and people's daily routines. Certain types of energy use behavior are clustered in specific spatial and temporal locations 4 . These include work, home and leisure activities, all of which have an impact on future energy demand in distinct areas of the city. For example, individuals may practice low consumption habits at work but then consume disproportionate amounts of energy later in the day when they arrive home and they may be consuming energy from either exclusive or shared resources. It is thus important to identify the drivers of this consumption in different regions and explore the patterns and predictors of urban energy use. Unreliable predictions and poor management decisions about future patterns of energy consumption and demand due to non-quantified human dimensions of energy use may adversely affect cities' energy resilience, leading to enormous waste in the financial resources municipalities invest in energy distribution and infrastructure.
Urban Energy Spatial Flux and Demand Prediction
In predicting future energy demand, spatial patterns currently tend to be primarily characterized in terms of physical determinants of the urban infrastructure such building types 5,6 , city location and district features 7 ; and building age and function 8 , generally also taking into account external conditions such as weather and geographic location 9 . However, given that the planet's urban population is predicted to rise by an additional 2.5 billion inhabitants in urban environments by 2050 1 , the scale and diversity of the human activities driving energy consumption continues to expand 10 . The spatial distribution of energy use thus remains in a continuous state of flux and the resulting intricate interdependencies between infrastructure, services, and individuals presage an ambiguous future in which we will face challenges of which we are not yet aware. This means that existing approaches, which are principally based on the physical characteristics of urban infrastructure, will fail to reliably explain patterns of urban energy, and lead to widely inaccurate predictions of energy demand. This raises important questions regarding our ability to create and maintain adequate energy resources to meet demand in our large and growing population centers.
Although several studies have recognized that different human activity patterns may be responsible for fluctuations in energy consumption 11,12 , researchers have only captured this effect within limited areas, such as individual buildings, which cannot adequately represent the global patterns and structures of energy consumption at an urban level. Much of our current understanding of future patterns of energy use comes from decades of research focusing specifically on the physical properties of cities, omitting any consideration of quantified measures of human activities. Despite the importance of the role urban populations play in the transformation of energy systems 13 , reflections of their fluctuating activities are largely absent from urban energy studies.
Treating urban populations as "agents of change" 14 , with rapidly fluctuating patterns of activities, makes it possible to express higher levels of dynamics than the simple physical locations identified in current master plans 15 . To achieve reliable energy demand predictive models, an approach that incorporates patterns of human activities when quantifying spatial fluctuations of energy use is required. Recent advances in both sensing technologies and urban computing methods have greatly increased the availability of relevant data for urban spaces and supported new discoveries related to these challenges [16][17][18][19] . A significant body of work has begun to focus on ways to quantify human activity patterns 17,20,21 . In particular, one of the most popular of the new indicators, human mobility, is now being widely studied. The growing use of humans as sensors has facilitated the collection of city-wide human mobility data 16 via individuals' mobile phone signals, which include GPS data 18,22,23 , as has their smart card commuting data 24 , and locationembedded information from online social networks 17,25,26 , all of which can be used to infer information based on the mobility behavior of urban populations. Here we review statistically significant indications related to the spatial interdependencies between human mobility and urban energy consumption.
Findings
We used human mobility as a possible indicator for the induced fluctuations in the spatial distribution of energy use in Greater London, examining human mobility patterns of individuals using 18,810,222 individual positional records from an online social networking platform (Twitter) across 4,835 spatial divisions measuring radius of gyration (see Methods). Data from 3,438,939 electricity meters, and 3,007,392 gas meters in the same areas across 33 Greater London boroughs, over the course of 2014 was also used (Supplementary Table 1 -2). In order to assess the energy use attributable to individuals' urban mobility and thus evaluate the potential utility of human mobility as a predictor of future energy demand, we first examined how human mobility and energy consumption are spatially distributed, including whether there are underlying processes that impose structure on these distributions that can be used to quantify these patterns or are they merely characterized by spatial heterogeneity and randomness (see Methods). Figure 1 illustrates these distributions across the 4,835 spatial divisions (referred to in Greater London as Lower Layer Super Output Areas, or LSOAs) for human mobility, electricity and gas consumption. We found that the spatial distribution of human mobility is not random; an underlying spatial structure governs the mobility of urban population (Supplementary Results 2.1). This structure was present throughout the year with only insignificant deviations from the mean ( Figure 2). We thus reject the null hypothesis of spatial randomness in favor of structure (i.e., spatial autocorrelation), meaning that the spatial fluctuations of human mobility are relevant and provide additional insights into the structure beyond simply values. Observations of human mobility at one location correlate with those for neighboring locations, with a possible effect on the neighboring values (i.e., values for one division depend on the values at other neighboring locations). Interestingly, this correlation appears to be particularly strong (increased spatial dependency) in September, August, December, and January compared to other months (Supplementary Table 4, Supplementary Figures 5-6). The presence of spatial structure suggests that the locations of the individual mobilities' centers of mass (see Methods: Eq.1) will be significant, likely as a result of where and how individuals arrange their daily trips to home, work, school, shopping, leisure, and so on. Similar results were obtained for energy (electricity and gas) consumption (Supplementary Table 3, Supplementary Figure 4). These spatial autocorrelations suggest predictive models that relate observations of human mobility or energy use at one location to those at other locations can be used to define their particular spatial correlation structure more effectively. Once the existence of a spatial structure for both human mobility and energy consumption was confirmed we asked: Is it likely that people's mobility (representing their daily activity patterns) is the cause of the spatial processes (diffusion, interaction, etc.) driving particular energy use patterns in particular locations? If so, does our data support this? Given the spatial autocorrelations, we conducted spatial regression analysis (See Methods) to visually ( Supplementary Figures 7-10) and statistically (Supplementary Tables 5-12) explore this hypothesis and determine precisely how the strength of the association between human mobility and energy consumption varies by area. The results of the spatial regression analysis between human mobility and energy use performed to evaluate the contributions of human activities to energy use (electricity and gas) at the urban level revealed that the spatial distribution of energy use was not independent of human mobility; rather the spatial imprints of human mobility localized energy demand distribution. These spatial dependencies were intermittent across the year, reinforcing the finding of an underlying spatial structure for human mobility patterns. Interestingly, the monthly difference was almost unnoticeable, reinforcing the utility of human mobility as a predictor for urban energy consumption (Figures 3-4). The spatial regression analysis for electricity and gas across Greater London's 4,835 spatial divisions confirmed the existence of statistically significant relationships between human mobility and energy consumption for two spatial autoregressive models (simultaneous autoregressive models consisting of both lag (SAR) and error (SEM) models), with the SAR models predominantly providing the best representations of global dependency conditions (Supplementary Tables 5-12). Table 1 depicts the statistical significance and parameters of the predictive SAR models with the lowest Akaike information criterion (AIC) for both electricity (AIC =77,037) and gas (AIC =90,936), which were achieved during the month of February. The results of the spatial regression analysis indicate that the strength of the association between human mobility and energy consumption depends on spatial location, which can further be contextualized more locally based on Points of Interest (POIs). This means that human mobility across different areas in Greater London can indeed be regarded as a proxy indicator of spatial fluctuations in energy consumption behavior, with changes in human mobility explaining shifts in the pattern of energy consumption that can then be used to quantify and predict spatial flux for energy use of the urban population.
Discussion and Implications
Human mobility in urban areas has an undeniable impact on the spatial distribution of energy consumption and can thus serve as a quantitative representation of how an urban population interacts with local energy systems. The results presented in this paper suggest that human mobility can be applied to translate the location-based activities of an urban population into collective energy consumption, thus accounting for the urban energy spatial flux. This study elevates our understanding of the human dimensions of energy use beyond occupants' behavior at the building level 11,27 , quantifying a measure of this effect at an urban scale. Human mobility patterns at this wider scale can reveal important information about the way citizens interact with their surroundings, driving energy use. By quantifying these effects, we can measure the strength of relationships, understand interdependencies, and make more reliable predictions of future energy demands.
Our findings regarding the almost invariable spatial dependencies of human mobility over the year as a result of spatial autocorrelation reveal a predictable activity pattern for urban populations and thus energy use within various urban spatial units. Given that the decentralization and efficient allocation of resources is highly dependent on how an urban population's activity patterns are distributed in space, human mobility can be used to infer location choices 28 , anticipate future energy demand and strategize optimal decentralization and resource allocation to different amenities under the influence of human mobility, although further research is needed to contextualize this relationship. Knowing how individuals move around urban open spaces and across the physical infrastructure (both communal and private) of an urban landscape will enable us to build a comprehensive understanding of how certain types of energy behavior are clustered in geographical spaces and temporal locations within urban areas. It should be noted that our results do not exclude the possibility that the physical characteristics of the buildings in their spatial context 5-9 also significantly affect urban building energy use. However, developing a comprehensive model of location-based human activity patterns of urban populations by applying the concept of human mobility will enable us to extend studies of urban energy demand beyond the simple physical determinants of energy use. As they continue to grow ever larger, our urban areas will inevitably encounter serious challenges as they strive to meet the energy demands of their expanding populations for which our current knowledge is likely to prove insufficient. To cope with the continuing growth in population and the corresponding increase in urban activities, we need to develop a deeper understanding of the root causes of societally significant phenomena such as energy consumption. The relationship between energy use and human mobility is a key factor for creating effective policies for urban areas. Accurate information on the spatial dependence between fluctuating patterns of human mobility and energy use, the result of an underlying social/behavioral process, can help define a predictable structure for urban energy demand. Spatial dependence is the product of an underlying location specific activity process that leads to clusters of mobility patterns. These patterns can potentially be explained by groupings of particular populations with similar activity patterns or daily routines 29 ; diffusion processes 30 , where individuals in the same spatial divisions influence, acquire information, and adopt specific energy use patterns; spatial interactions 30 , where individuals tend to interact with those who are spatially closer to them; dispersal processes, where individuals travel short distances (e.g., home to work) and transfer their knowledge and energy use patterns with them; externalities and spatial spillover effects 31 These patterns will also enable us to identify the interdependencies between energy consumption, individual activities, and specific urban spatiotemporal features. Incorporating the spatial imprints into models will advance our understanding and knowledge of the underlying processes and how they propagate across space, shedding new light on the interconnected challenge of theory and analysis.
Perhaps the most striking example of the power of human mobility's impact on urban energy use, and a significant implication of these interdependencies, is the possible spatial spillover effect 31 that determines whether fluctuations in energy use due to human mobility in one spatial unit (i.e., an individual LSOA) have any diffusive impact on its neighboring locations, and if so, whether there is a significant difference in the diffusive effects of these populations. The SAR (simultaneously autoregressive) models, which were found to be the most representative predictive models in this study, permit the magnitude and significance of direct spillover effects to be assessed, showing how changes in human mobility at a particular location will be transmitted to all other locations and thus how they will affect the energy consumption at the corresponding locations.
The availability of such information will allow city managers and policy makers to identify hotspots and develop effective strategies to create bigger energy efficiency spillover effects, or to restrict unwanted or excessive energy use spillover effects. When creating such strategies, individual energy consumption hotspots can be targeted based on the spatial attributes of those locations. Alternatively, particular human mobility networks can become the focus of attention. Diffusing desired effects by introducing changes in the spatial structure (for example by targeting specific buildings or areas to create bigger spillover effects), or instigating contagion by introducing changes in the flow based on contagion (changing the flow, or mobility, by targeting specific clusters of population), will bring urban planners a step closer to achieving better management and allocation of scarce energy resources. The results of this research will also be of value to business practitioners, policy-makers, and research communities by enhancing their future efforts and eliminating overlooked or poorly specified components of urban energy resilience. In particular, by creating a clear picture of the demand-side concentration and diversity, this research will facilitate the appropriate decentralization of the urban energy distribution infrastructure to reduce the vulnerabilities that lead to service disruptions. The main goal of this study has been to contribute to our emerging understanding of how energy use is changing, especially in urban environments. Our ongoing research seeks to understand urban activity patterns across different functional locations using human mobility data to develop integrated predictive models that incorporate temporal elements of activity patterns (for example, recreation, nightlife, shopping, or education) and the resulting fluctuations in the patterns of energy use. Identifying spatial regions with similar temporal activities should allow us to more accurately assess their likely energy use flux and thus optimize the distribution of energy provision. This study contributes to efforts to understand how the urban population interacts with local energy systems by linking human mobility patterns to spatial fluctuations of energy use. Knowing individuals' movements around urban open spaces and across the physical infrastructure of our urban environments will enable us to build a comprehensive understanding of how certain types of energy behavior are clustered in specific geographical spaces and temporal locations within urban areas. In addition, it will enable us to identify the interdependencies between energy consumption, individual activities, and specific urban spatiotemporal features. The ability to understand how humans interact with urban energy systems 14 and identify evolving patterns and features in intra-urban mobility routines is important for predicting future patterns of energy demand and protecting energy resilience. Attaining global reductions in energy use and CO2 emissions will demand a paradigm shift in the way we treat energy demand. A clear picture of demand-side diversity that extends beyond the merely physical characteristics of our urban infrastructure will facilitate a more appropriate decentralization of urban energy distribution, thus reducing both, and the vulnerabilities that lead to service disruptions in our ever more complex urban settings.
Methods
We sought to investigate the interdependencies that may exist between the human activities of an urban population and energy consumption, and, if so, determine whether the distribution of urban energy consumption can be predicted by patterns of human mobility. Using radius of gyration as an indicator for human mobility, we examined the spatial distribution of human mobility and energy use and assessed possible models that could explain the present spatial structure.
Radius of gyration.
In order to obtain a better understanding of human mobility patterns, the radius of gyration (Eq. 2) was selected from among the three most widely accepted indicators used to describe large-scale human mobility patterns: the radius of gyration rg(t), the trip distance distribution p(r), and the number of visited locations S(t) 22,32,33 . Of these, the radius of gyration was deemed the most appropriate for capturing individuals' characteristic travel distance within the areas where they habitually carry out their daily activities (i.e., rgi (t)), as described below: (2) Here, N equals the total number of positional records per individual. The radius of gyration in this study is calculated at two spatial and two temporal levels ( Supplementary Methods 1.2; Supplementary Figure 3). Spatial autocorrelation. Spatial autocorrelation 34 was used to assess of the extent to which the spatial distribution of the data is compatible with spatial randomness and thus determine whether human mobility and energy consumption do indeed have spatial imprints. Spatial autocorrelation tested the spatial independence of human mobility and energy consumption across 4,835 spatial divisions in Greater London ( Supplementary Results 2.1). Moran's I 35 (Eq. 5), which ranges from -1 (most dispersed) to 1 (most clustered), was used to describe the degree of spatial concentration or dispersion for these variables, with large values for I showing clusters of large values that are surrounded by other large values, namely (I+)-spatial clustering, and (I-)-spatial dispersion, indicating large values that are spatially enclosed by smaller values. It is also a test of independence to determine whether values of human mobility or energy consumption observed in one location depend on the values observed at neighboring locations. While Moran's I represents the global spatial autocorrelation for our data, Geary's C 36 (Eq. 6) was also used based on the deviations in the responses of each observation with one another, ranging from 0 (maximum positive autocorrelation) to 2 (maximum negative autocorrelation), with 1 indicating an absence of correlation. Moran's I here serves as a measure of sensitivity to extreme values of energy consumption and human mobility, with Geary's C being used to evaluate the sensitivity to differences in smaller neighborhood LSOAs.
Here, n represents observations on variable x at locations i, j where is the mean of the x variable, and wij are the elements of the weight matrix. Spatial randomness is undesirable, so to ensure that it is not in effect, we reject the situation of spatial randomness in favor of structure (i.e., spatial autocorrelation). Spatial autocorrelation analysis exactly quantifies this, providing a measure of uncertainty (p-value) by which we can reject the null hypothesis (i.e., spatial randomness). A positive spatial autocorrelation indicates that similar values are clusters in neighboring locations, which would be a structure compatible with diffusion 37 .
Spatial regression.
In view of the spatial autocorrelation for human mobility and energy consumption, we investigated the nature of this structure through spatial regression ( Supplementary Results 2.2). Spatial regression models 38 are used to examine the relationships between variables and their neighboring values and offer a useful way to examine the impact that one observation has on other proximate observations. Starting with an ordinary least square model (Eq. 7), with the null hypothesis of a linear regression governing the structure of energy consumption by human mobility as a covariance. u + = x y (7) The expression describes the relationship between a vector of observations on the dependent variable y, a matrix of observations on the explanatory variable x (i.e., human mobility), a vector of regression coefficients β, and an error term u. The error term is required to have constant variance and must be uncorrelated (i.e., to possess homoscedasticity). While correlations explore the relationships between or among different variables, autocorrelations can be regarded as a special case, as they, explore correlations within variables across space 39 . In the search for an appropriate autocorrelation structure for our data, we tested for deviations that would violate the null hypothesis such as a non-constant variance for error terms (i.e., heteroscedasticity), correlations for the error terms induced by Spatial lag (SAR) (Eq. 8), or Spatial Error (SEM) models (Eq. 9). The results are shown in Supplementary Tables 5-8 for electricity consumption, and Supplementary Tables 9-12 for gas consumption.
x Wy y (8) where, y is the dependent variable (i.e., energy consumption: electricity and gas); x is the independent (explanatory) variable (i.e., human mobility); β is the regression coefficient; ε is the random error term; and ρ is the spatial autoregressive coefficient; in the term ρW, which represents the spatially lagged dependent variable. W X y (9) where, y is the dependent variable (i.e., energy consumption: electricity and gas); X is the independent (explanatory) variable (i.e., human mobility); β is the regression coefficient; ε is the random error term; λ is the autoregressive coefficient and ξ represents the normal distribution (0, σ2I) in the term λWξ, which represents the spatial lag for the errors. | 2019-04-13T15:53:03.210Z | 2016-09-05T00:00:00.000 | {
"year": 2016,
"sha1": "8e2a0ba68e5cedcf7a686ac234bec9732941b816",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "fafe2ca847c67cefe3e8b4c889726df87c4637a7",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Physics",
"Mathematics"
]
} |
159394227 | pes2o/s2orc | v3-fos-license | How media shape political trust: News coverage of immigration and its effects on trust in the European Union
Attitudes towards immigration are among the core predictors of attitudes toward the European Union. However, even though most citizens learn about immigration through the media, we lack a comprehensive account of how media coverage of immigration influences support for the European Union. In this study, we use a combination of European Social Survey and Media Claims data to investigate the effects of the visibility and valence of immigration and refugee media coverage on political trust in the European Union in 18 countries between 2012 and 2016. Our results show that media coverage of immigration and refugees influences trust in the European Union; however, the effects depend on citizens’ ideological leaning and content characteristics. Furthermore, we find that the impact of immigration attitudes on trust in the European Union becomes more important over the course of the refugee crisis.
Introduction
'A democratic political system cannot survive for long without the support of a majority of its citizens' (Miller, 1974: 951).In the European Union (EU), support for democracy and political trust in the EU have fluctuated considerably over the last decade (Armingeon and Guthmann, 2014).Even though levels of trust have recently recovered, still less than half of the European citizenry trust the EU (European Commission, 2018).EU trust, like political trust in general, is a form of evaluation (Kasperson et al., 1992).Extant literature suggests that trust in the EU is based on rational evaluations, identity considerations, and cues from national politics (Harteveld et al., 2013).The recent electoral successes of anti-European political actors are often attributed to a particular subset of identity considerations: anti-immigration stances (Hobolt, 2016).Even though these are usually conceptualized as a part of citizens' identity, these attitudes might also relate to policy and performance evaluations, particularly in times of increased immigration to the EU and shared European responsibility for immigrants.
Most citizens learn about political developments through the media; this applies to information about immigration flows and policies as well as to broader EU politics.However, there is no comprehensive account of the kind of media content that may change trust in political institutions, and particularly the EU.Following the European refugee crisis, Eurobarometer (European Commission, 2018) trends show that immigration has become the citizens' leading concern at the EU level.Media reports about the EU's important role for issues surrounding immigration, such as general border control, the Dublin Regulation, or the 2015 Refugee Relocation Scheme, have arguably made immigration more salient and thus a central issue for evaluations of the EU.Our study sets out to investigate the impact of media coverage of immigration on political trust in the EU, studying changes in the information environment across 18 countries and over three time points (2012)(2013)(2014)(2015)(2016).By combining European Social Survey (ESS) data with the ESS Media Claims dataset, we are able to explore the effects of media coverage over the period of the European refugee crisis, when immigration to the EU changed considerably.This set-up allows us to make three major contributions: First, we distinguish between the effects of media coverage of general immigration and the particular effects of the coverage of refugees and asylum seekers.This is an important, yet overlooked distinction, given that citizens can have vastly different attitudes towards different types of immigration.Second, we distinguish between sheer visibility of immigration media coverage and its valence in order to assess which features of immigration coverage are of consequence.Third, we consider the differential effects of media coverage for different groups of citizens, as these effects are likely to be conditional upon pre-existing ideological stances.
Overall, this study shows that the visibility and tone of immigration in the media can impact trust in the EU.Yet, the nature of this impact depends on the content of the coverage -and particularly whether it is about general immigration or about refugees in particular -as well as on the recipient's pre-existing attitudes.The implications of these findings are addressed in the discussion.
Immigration attitudes and trust in the EU
Trust in the EU has been conceptualized as an attitude directed at the existing system of political institutions.It is situated between the ideal types of diffuse (i.e.directed at the principles of the regime and political community) and specific (i.e.directed at the incumbents or specific policies) political support (Easton, 1965; see also Norris, 1999 for a more fine-grained conceptualization).Trust in the EU is driven by three main factors: cues from national politics, rational considerations of EU performance, and identity-based considerations (e.g.Harteveld et al., 2013).The latter two are often referred to as 'hard' and 'soft' factors (Hooghe and Marks, 2005;McLaren, 2002).
Within the identity-based (or 'soft') explanatory model, attitudes towards immigration are often considered a key variable, as they reflect a negative out-group bias.According to this model, citizens distrust the EU because they identify exclusively with their nation-state.The EU facilitates immigration, which in turn is perceived as a threat to the national identity.Yet, immigration could also play a role in rational performance evaluations, to the extent that citizens evaluate the EU in terms of how well it succeeds in handling immigration-related issues.Research has repeatedly shown that anti-immigration attitudes are related to Euroscepticism (de Vreese et al., 2008;Lubbers and Scheepers, 2007;McLaren, 2007) and to attitudes towards EU enlargement (de Vreese and Boomgaarden, 2005).In addition, some recent studies show that not only attitudes but also real-life events related to immigration have an impact on citizens' opinions about the EU: an influx of refugees (Harteveld et al., 2018) or immigrants from newer to older EU member states (Toshkov and Kortenska, 2015) increases Euroscepticism.
Media effects on trust in the EU
The present study focuses on trust as a comparatively stable measure of support for political institutions (Wessels, 2009).Trust in the EU, as a supranational institution, is a particular case of political trust.Since the EU is more distant and removed from its citizens' everyday lives, their opinions should depend on information from the media to a greater extent, with positive EU coverage making citizens less Eurosceptic (van Spanje and de Vreese, 2014).Media coverage of the EU also impacts political knowledge and blame attributions; however, these effects do not apply to all citizens and depend on medium characteristics (Hobolt and Tilley, 2014).In principle, general political trust can be influenced by media use and content; however, findings are mixed with regard to the direction of this effect (Avery, 2009;Ceron, 2015;Gross et al., 2004).One possible reason for this is the extent to which differential media effects on trust are conditional upon preexisting attitudes (Ceron and Memoli, 2015) and media content and tone (Kleinnijenhuis et al., 2006;Mutz and Reeves, 2005).
Even though citizens can experience increasing immigration directly, they receive a large part of their information on immigration issues from the media.However, media coverage of immigration does not always reflect actual developments (Jacobs et al., 2018); furthermore, the tone of the coverage is typically rather negative (Jacobs et al., 2018;Schlueter and Davidov, 2013).That means that media coverage of immigration may have an influence on opinions independent of actual immigration numbers.For example, increased visibility of immigration in the news media can increase anti-immigration sentiments (Boomgaarden and Vliegenthart, 2007).Since higher numbers of refugees (Harteveld et al., 2018) and increased immigration to a country (Toshkov and Kortenska, 2015) are related to amplified Euroscepticism, we hypothesize that increased visibility of immigration and refugees in the media could have an analogous effect.Multiple studies which have compared the effect of media coverage and real-world immigration metrics have found stronger effects of media coverage, and particularly the tone of the coverage, on immigration attitudes (e.g.Schlueter and Davidov, 2013).Regarding EU attitudes, Harteveld et al. (2018) found that the negative effect of the number of asylum applications -i.e.real-world developments -is fully mediated by media coverage of refugees.Even though this study only considered visibility, and not the tone of the coverage, it emphasizes the important role of media coverage of immigration on EU attitudes.
Alongside tone and visibility, another under-explored aspect of immigration coverage concerns the distinctions between general immigration and asylum seekers.Experimental evidence shows that simply interchanging the words 'refugees' and 'immigrants' does not affect people's perception of them (Hoewe, 2018).However, in real-world media coverage, the two expressions are likely tied to different narratives and content.Anthropological research argues that framing an individual as a 'refugee' rather than a 'migrant' makes that individual seem more deserving of various rights, since the term 'refuge' stresses the involuntary displacement (Holmes and Castañeda, 2016).In line with this, media exposure to refugees drowning in the Mediterranean Sea during the phasing out of the 'Mare Nostrum' operation lead to reduced xenophobic attitudes (De Poli et al., 2017).Europeans are also more willing to accept asylum seekers with severe vulnerabilities (Bansak et al., 2016).
On the other hand, anti-immigration attitudes are strongly driven by concerns about the impact of immigration on a country's overall economy, the welfare state (Fietkau and Hansen, 2018;Hainmueller and Hiscox, 2010), and culture (Hainmueller and Hopkins, 2014).Consequently, the public typically prefers welleducated, highly skilled immigrants with good chances of integration into the labor market (Fietkau and Hansen, 2018;Hainmueller and Hiscox, 2010;Turper et al., 2015). 1 The potentially higher 'expected economic costs' (Turper et al., 2015: 254) of integrating refugees, as opposed to labor migrants, might lead to more negative perceptions.There is little research on how the coverage of either refugees or immigrants emphasizes these aspects; therefore, it is also unclear how the coverage of immigration and refugees may affect attitudes towards the EU differently.However, based on the outlined economic considerations, we might expect a similar, albeit more pronounced effect, for the coverage of refugees on EU trust as for general immigration coverage.Refugee coverage may also have a stronger effect because it is a newer development particularly connected to a 'crisis', whereas citizens may be more used to the coverage of 'regular' immigration.
Generally speaking, media information can affect knowledge and attitudes through two different mechanisms: through individual media consumption (van Spanje and de Vreese, 2014), but also through changes in the general media environment (Azrout et al., 2012).The latter takes place because large-scale shifts in public opinion are reflected in most media coverage and citizens are thus very likely to encounter it, regardless of their individual media use.For example, it is not necessary to be exposed to a specific medium or type of content to learn about an important and impactful event such as the refugee crisis.The amount and the tone of coverage that is generally available in a certain place at a certain time can thus influence citizens' opinions.
In sum, tone and visibility of political media coverage can have differential and complementary effects on political attitudes and voting behavior (Geiß and Sch€ afer, 2017).While Harteveld et al. (2018) have found an effect of media visibility of immigration on Euroscepticism, there is no research on media effects on EU trust in particular.Furthermore, previous studies have not distinguished between tone and visibility, or different kinds of immigration, namely 'regular' immigration and refugees.Synthesizing the different strands of literature, we hypothesize the following: Citizens can process the same media information differently depending on their pre-existing attitudes (e.g.Geiß and Sch€ afer, 2017).Particularly in the context of immigration issues, ideological stances may lead to rather different outlooks on new information about such issues, as they structure different dimensions of immigration attitudes.For example, humanitarian considerations are more important for left-wing citizens, whereas religious concerns are stronger for right-wing citizens (Bansak et al., 2016).Furthermore, political-cultural aspects of the EU, such as the loss of national identity through the inclusion of people from different countries, are more important for right-wing citizens (Hooghe and Marks, 2009;van Elsas and van der Brug, 2015).Therefore, the hypothesized negative effect of media coverage may be enhanced for right-wing citizens, as they consider these aspects more important and could therefore be more susceptible to media effects.
H3: The negative effect of a higher media visibility of (a) immigration and (b) refugees media coverage on EU trust is stronger for right-wing citizens than for leftwing citizens.
H4:
The negative effect of more negative coverage of (a) immigration and (b) refugees on EU trust is stronger for right-wing citizens than for left-wing citizens.
An issue that is more visible in the media may be taken into consideration more when forming a political opinion.Specifically, if the media pay more attention to immigration, attitudes towards it may also become a more important factor in explaining general EU trust.Previous research found that 'soft' factors did not become more important for explaining Euroscepticism between 1994and 2005(van Klingeren et al., 2013).However, the 2015 refugee crisis may have had a more disruptive effect on the importance of immigration issues.We hypothesize that immigration attitudes will have a stronger effect on trust in the EU in contexts in which the topic of immigration is more salient in the media or evaluated more negatively.
H5:
The effect of anti-immigration attitudes on trust in the EU becomes stronger (a) over the course of the refugee crisis, and (b) in countries with more coverage of immigration and refugees or (c) more negative coverage of immigration and refugees.
Method
The study combines data from rounds 6 (2012-2013), 7 (2014-2015), and 8 (2016-2017) of the ESS and its corresponding Media Claims dataset in 18 countries.Table 1 shows the distribution of respondents and newspaper articles per country and time point.Table 2 shows descriptive statistics for all variables used in the analysis.
Media data
The Media Claims dataset includes data about the media context in each country during the time of the survey fieldwork, but for a minimum of 10 weeks if fieldwork was shorter than 10 weeks.It does not include information about a respondent's individual news consumption, but rather systematic changes in their larger media context.In each country, two 2 national quality newspapers are selected, if possible one left-and one right-leaning.This approach makes it more likely to capture different types of immigration news coverage, which can vary for newspapers of different ideological leanings (Fryberg et al., 2012;Roggeband and Vliegenthart, 2007).We use Edition 4.0 of the Media Claims dataset for round 6 and Edition 1.0 for round 7 and round 8.
The most important and most salient news -typically on the front page and the domestic news section -were coded.Country and newspaper-specific characteristics were taken into account when identifying the most important news.The unit of analysis was so-called 'claims' -'the expression of a political opinion' (European Social Survey, 2016: 8).An article could contain multiple claims.Each claim was analyzed individually with regard to its main topic and its direction or valence.Three of the topics are related to immigration: general immigration, the economic impact of immigration, and the impact of immigration on cultural diversity.For each, a positive value (1) for the 'direction' variable connotes a positive evaluation of immigration, i.e. that the statement or action in the claim is in favor of immigration.A negative value (-1), on the other hand, connotes that the statement is against immigration.A neutral value (0) stands for 'neither for nor against'.The claim is thus interpreted based on the position taken towards immigration, and not on the tone of the statement.For instance, a claim that 'policy X has been successful in reducing immigration' is coded as 'against immigration', even if the word 'successful' suggests a positive tone.For the present study, the visibility of immigration and EU issues was computed as the share of claims relating to one of the immigration topics relative to the total amount of claims coded.When there were no claims relating to these issues in a country during the period of data collection, visibility was coded as zero and valence was coded as neutral.
However, this variable captures any topic related to immigration, whereas we are also specifically interested in the effect of the refugee crisis.Therefore, in addition to the variable coded by human coders, we also included an automatically coded variable indicating whether words relating to refugees or asylum seekers were mentioned in the claim as well as a variable indicating the human-coded valence of each statement. 3Finally, we also included the coverage of the EU itself as a control variable.Visibility of the EU is the share of articles that relate to EU integration or enlargement (as categorized by the coders); a positive value for the 'direction' variable indicates that the claim is in favor of stronger EU integration.
Since the ESS does not provide measures of intercoder reliability, we replicated their coding procedure for the variables and categories we use with a subsample of N ¼ 102 random articles.Inter-coder reliability between our results and the original codes was calculated using Krippendorf's alpha.For the main topic (whether it was about immigration, the EU, or a different topic), reliability was a ¼ .90(with a percentage agreement of 98%) and for the direction of the claim, based on the ESS coding instructions for the given main topics, it was a ¼ .77(with a percentage agreement of 85.3%).We deem the measurement instrument reliable for both the visibility estimate and the tone estimate.
Survey data
We used data from Edition 2.3 of ESS round 6, Edition 2.1 of round 7, and Edition 1.0 of round 8.The survey was conducted as face-to-face interviews among representative samples of the population in more than 30, mostly European countries.Variables used in this study include age in years, education in years, and gender.Trust in the EU is operationalized as trust in the European Parliament (EP), which is measured on an 11-point scale ranging from 'no trust at all' to 'complete trust'.Trust in the EU and the EP correlate highly, and trust in the EP has been used as a proxy for EU trust in previous studies (e.g.Muñoz et al., 2011).Government satisfaction ('extremely dissatisfied' to 'extremely satisfied') and left-right selfplacement were measured on 11-point scales.Immigration attitudes are operationalized as the mean answer to three items measured on an 11-point scale: 'Immigration is good or bad for the country's economy', 'The country's cultural life is undermined or enriched by immigrants', and 'Immigrants make the country a worse or better place to live' (Cronbach's a ¼ 0.86).A higher value stands for more positive immigration attitudes.
Asylum application numbers were obtained from Eurostat.We connected the survey data to the average number of asylum applications per month during the period that respondents were surveyed (i.e. a period of approximately two years).This way, the numbers can be directly compared to the media environment.The absolute numbers of applications were divided by 1000 in order to make the scale easier to interpret.
We only included EU countries for which media context data are available for at least two points in time.The 18 countries that meet these criteria are Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Great Britain, Hungary, Ireland, Lithuania, the Netherlands, Poland, Portugal, Slovenia, Spain, and Sweden.Our sample consists of N ¼ 92,385 respondents.However, some respondents did not answer all questions.Therefore, the total number of respondents included in the analyses is somewhat reduced (Table 2 shows the number of valid respondents per variable; Table 3 shows the number of respondents per analytical model).The media environment estimates are based on a total of N ¼ 13,732 newspaper articles.
Analysis
The data are nested in 18 countries and three survey rounds.Following Shehata and Str€ omb€ ack (2011), who work with the same data structure and a similar research question, we analyzed the data using a multilevel model in R (Bates et al. (2015); Hlavac (2015); Solt and Hu (2015) and Wickham (2009) R Core Team,
2016)
. Individuals are at the first level; countries are at the second level.In addition to the random intercept, we also include random slopes for the variables that are used in cross-level interaction: immigration attitudes and left-right orientation.These variables were group-mean centered (Enders and Tofighi, 2007;Kreft et al., 1995).Dummies account for the different ESS waves that respondents were interviewed in.However, we acknowledge that our sample of a low number of non-randomly selected countries does not fulfill all assumptions of multi-level modeling (Bryan and Jenkins, 2016).As a robustness check, we thus also estimated the model as a fixed effects analysis.This model leads to highly similar substantial conclusions.However, a conservative model specification including country-wave clusters at the second level in combination with country and wave dummies in the same model shows much fewer significant effects, even though the effects sizes and directions generally remain consistent.This highly demanding approach, however, exhausts much of the available degrees of freedom and runs the risk of creating an overdetermined model, especially in combination with multiple second-level explanatory variables.The fact that we could not replicate the results in this strict specification limits generalizability to countries and time points that are not included in our sample.We report both alternative models in the Online appendix.We control for the effects of gender, age, education, government satisfaction, and the visibility and valence of EU coverage in all models.-158,105.90 -158,091.20 -157,744.40 -157,728.30 -157,935.60 -157,920.10 -157,900.00Akaike Inf.Crit. 316,243.80 316,214.50 315,528.80 315,496.50 315,911.20 315,880.30 315,844.00 Bayesian Inf. Crit. 316,391.10 316,361.80 315,713.00 315,680.70 316,095.30 316,064.40
Results
Figure 1 shows how visible the issue of immigration was in different countries and whether the coverage was pro-or anti-immigration.In most countries, visibility of immigration increased between 2012 and 2016.The pattern for valence, however, is more mixed, with some media environments becoming more positive about immigration and others becoming more negative.The results for the coverage of refugees are similar; the corresponding data are visualized in the Online appendix.
Table 3 shows the results of the analysis.The control variables exhibit the anticipated effects.Government satisfaction and more positive attitudes towards immigration are strong predictors of trust in the EU.Right leaning, younger, more educated citizens, and women trust the EU more.EU coverage also influences trust in the EU: when the EU is more visible and covered more positively, citizens trust it more.
H1 and H2 predict main effects; therefore, we interpret the results of Models 1 and 2, which do not yet include any interaction effects.H1 predicted that citizens would show lower levels of trust in the EU when immigration and refugees were covered more often.Refugee coverage has the expected negative effect on EU trust, in support of H1b; general immigration coverage, however, has no significant effect on EU trust, not supporting H1a.Concerning H2, the results show that there is no significant main effect of the valence of the coverage of immigration on trust in the EU; however, there is a negative effect of the tone of refugee coverage: when refugees are covered more positively, citizens trust the EU less.This does not support our expectations for H2.
H3 and H4 state that these effects of media coverage would be different for citizens with different ideological stances.Models 3 and 4 show the interaction effects of immigration and refugee coverage with left-right ideology.As Figure 2 illustrates, the negative effect of media visibility of refugees on EU trust is strongest for right-wing citizens and becomes weaker to non-existent for left-wing citizens; this confirms H3b.However, we find no support for H3a: While the coefficient shows the same pattern for immigration coverage, it is not statistically significant and the effect is considerably smaller.
Political positions also moderate the effect of the valence of immigration coverage.As displayed in Figure 3, positive valence of immigration news is not associated with changes in EU trust for left-wing citizens, whereas the effect is negative for right-wing citizens.While the interaction effect of the valence of refugee coverage follows an almost identical pattern, it is not statistically significant.
Our last hypothesis stated that attitudes towards immigration would become more important for EU trust over the course of the refugee crisis, as the topic becomes more visible in media coverage and its evaluation becomes more negative.Indeed, as Model 5 shows, the effect of immigration attitudes becomes more important in 2014 compared to 2012, and even more so in 2016, indicating support for H5a.However, we find no support that the predictive importance of immigration attitudes on EU attitudes increases when coverage of refugees and immigration becomes more frequent or more negative.
Robustness checks
As laid out before, we conducted two alternative analyses: First, a simple regression model (see the Online appendix) confirms all effects that we found in the reported model; directions and effect sizes are similar, and all reported results remain significant.The second model, a multi-level model with country-wave combinations at the second level and country-and wave-fixed effects (see the Online appendix), barely contains any significant effects.However, most relevant effects remain very similar in size and direction.One exception is the interaction effect of immigration attitudes and the valence of refugee coverage.Aside from that, the main effects of refugee coverage remain negative, while the main effects of EU coverage remain positive.Also, in line with the main model, the interaction effects of valence of immigration coverage and visibility of refugee coverage with left-right ideology are negative and the interactions of ESS rounds 7 and 8 with immigration attitudes are positive (and statistically significant in the case of ESS round 8).
Discussion
The present study investigated the effects of media coverage of refugees and immigration on trust in the EU in 18 different European countries between 2012 and 2017.Even though immigration attitudes are among the most important predictors of EU attitudes, there is very little previous research on the role of the media -the main source of information about immigration -in this mechanism.We find that both the visibility and valence of refugee coverage have effects on EU trust.These effects are dependent on citizens' political ideology.Furthermore, immigration attitudes become a more important predictor of EU trust over the course of the European refugee and migrant crisis.
We found partial support for our first two hypotheses, as only increased coverage of refugees, but not coverage of general immigration, was associated with reduced trust in the EU.This is in line with the results of Harteveld et al. (2018), which show that increased media attention to refugees increases Euroscepticism.Furthermore, favorable coverage about refugees was associated with reduced trust as well.This may suggest that EU citizens are not satisfied with the way in which the Union handled the refugee crisis, rather than a simple association between antiimmigration and anti-EU attitudes.At the same time, coverage of general immigration did not affect EU evaluations overall.This is an important insight for the literature on the relationship between immigration and attitudes towards the EU and suggests that it may be necessary to differentiate between different types of immigration.
In addition to these general effects, we also take into account how citizens of different ideologies respond to media coverage of immigration.Left-wing citizenswho are, generally speaking, more in favor of immigration and granting asylumdo not show remarkable changes in their evaluation of the EU when immigration is covered more often or more positively.For right-wing citizens, on the other hand, coverage that is in favor of immigration may spark a reactance effect and decreases their trust in the Union.Increased coverage of refugees also has a stronger negative effect on right-wing citizens' EU trust.Overall, the analyses show that trust in the EU is more dependent on the coverage of immigration and asylum issues for right-wing citizens than for left-wing citizens.This is in line with previous research that found political-cultural aspects of the EU to be more important for right-wing citizens (van Elsas and van der Brug, 2015).Typically, such reactions on the right are conceptualized as a consequence of cultural threats to national identities (McLaren, 2002).However, since we find that changes in attitudes towards the EU may in part be caused by media coverage, i.e. new information on immigration issues, this opens up an interesting new perspective on the conceptualization of immigration attitudes as a 'soft' predictor of EU attitudes.If the changes in fact occur as a reaction to information, they may not solely be a response to identity-concerns but could also be conceptualized as an evaluation of the EU's policy performance.For example, right-wing citizens may be unsatisfied with policies such as those implementing refugee relocation quotas, which obligate member states to accept the relocation and resettlement of a certain number of refugees.The trust-as-evaluation approach relies on the idea that trust in political institutions is dependent on perceptions of performance of the institution, but has mostly focused on economic performance (see van der Meer and Hakhverdian, 2017).Our results suggest that the performance evaluations relevant to political trust might extend to other policy domains as well.These questions warrant further investigation to disentangle the evaluation criteria for political trust.
Our results also show that, as expected, immigration attitudes became a more important predictor of trust in the EU over the course of the migrant and refugee crisis; however, this effect was not dependent on media coverage.This implies that at least the type of media coverage that we considered in this study is not the sole source of information for citizens.
Finally, a pattern that emerged from the analysis is the influence of our control variable 'EU coverage' on EU trust.Visibility of European integration in the media coverage exhibits a positive influence on trust.One speculative explanation is the 'mere exposure' effect (Zajonc, 2001): increased exposure to an object, when not connected to negative cues, leads to more favorable evaluations of said object.In the case of political institutions, news about them may also lead to increased transparency (Moy and Hussain, 2011) and result in increased political trust as well (Norris, 2001).However, recent findings (Wojcieszak et al., 2018) show that simple exposure to EU news can polarize citizens further, rendering the positive more positive and the skeptical more skeptical.Further research is needed to disentangle the effects of media visibility of an institution on political trust in it.Finally, our results show that the valence of EU coverage also matters for EU trust.When the coverage is more favorable for European integration, citizens trust the EU more.This is in line with previous studies showing that citizens, that are exposed to more positive media content about the EU, are less likely to vote for Eurosceptic parties (van Spanje and de Vreese, 2014).
Limitations
The data used in this study are publicly available.While the ESS facilitates answering our research question in a cross-national setting with high data quality and a large number of respondents, the data were not collected specifically for the purpose of this study and therefore have several limitations.First, the data, even though collected over a period of more than six years, are cross-sectional in nature and therefore, strictly speaking, do not allow for causal inferences.This concerns media effects in particular.Even though there are strong reasons to assume that changes in the media environment precede changes in public opinion, the opposite causal mechanism is also possible.Furthermore, we assume a causal effect of anti-immigration attitudes on trust in the EU.Anti-immigration attitudes have been conceptualized as a predictor of other EU attitudes in multiple previous studies (e.g.Lubbers and Scheepers, 2007;McLaren, 2007).However, given that most of these studies are based on survey data, there is no clear evidence for this causal assumption.On the other hand, our finding that the significance of immigration attitudes for shaping trust in the EU increases over the course of the migrant and refugee crisis provides some cautious confidence in this causal mechanism.Finally, unlike the media content measures, the immigration attitude items of the ESS do not allow for a distinction between refugees and other kinds of immigrants; future research could extend the findings by using more refined measures of immigration attitudes (see Kentmen-Cin and Erisen, 2017).
Our estimates for developments of the media environment are also based on data provided by the ESS.This implies that in some countries, the media environment may not always be perfectly reflected in the choice of newspapers.However, this is outweighed by the benefit of being able to conduct a retrospective analysis of a time in which the importance and magnitude of immigration to the EU changed considerably.
We rely on measures of the media environment, as previous studies showed that the media environment in and of itself can change attitudes, particularly in the context of public opinion about the EU (Azrout et al., 2012).Due to data limitations, we could not include media exposure at the individual level -which is, however, a likely moderator of these effects.For instance, a previous study found that political events influence EU opinions to a higher extent if citizens are more attentive to political news (Semetko et al., 2003).Furthermore, the associations, that we found in this study, might be exacerbated for individuals who use more partisan media sources.Previous research indicates that simple media use measures have different, contradictory relationships with political trust (Avery, 2009;Ceron, 2015;Gross et al., 2004).To disentangle these mechanisms, an encompassing design would need to consider both pre-existing attitudes (Avery, 2009;Ceron and Memoli, 2015) and media content features, such as the ones investigated in this study, and incivility (Mutz and Reeves, 2005) or negativity (Kleinnijenhuis et al., 2006).
Notwithstanding these limitations, the present study makes an important contribution to the literature on the impact of immigration issues for EU public opinion.It highlights how the information environment can affect trust in the EU, or political institutions more generally, especially when considering citizens' pre-existing ideological stances.Future research should disentangle the mechanisms that may explain why left-and right-leaning citizens respond differently to the coverage of refugees and immigration, for example by investigating whether this relation is in fact mediated by negative evaluations of EU policies during the refugee crisis, which then decrease EU trust.Most importantly, our research emphasizes the significance of media coverage for EU trust.It shows that not only coverage of the EU itself but also of specific policy areas like immigration can have an impact on how much citizens of different ideological leanings trust the Union.
Notes
1.The traditional labor-market competition hypothesis, according to which particularly low-skilled citizens are opposed to immigrants with similar skill-levels as themselves has been disputed recently, see, e.g.Hainmueller et al. (2015).2. Exceptions are Belgium, in which four newspapers were analyzed to reflect the media climate in Wallonia and Flanders equally, and Finland, in which only one newspaper was analyzed.In some countries, some newspapers are not consistent over time but are replaced with newspapers of similar political leaning.See the Online appendix for a detailed overview of all newspapers.3. The valence coding for violence-related topics had to be reversed before the analyses, as a positive value indicates a negative evaluation -i.e.too much violence in society.
Supplemental Material
The Supplemental Material for this article is available online.
H1:
In media environments in which (a) immigration and (b) refugees are covered more often, citizens trust the EU less.H2: In media environments in which (a) immigration and (b) refugees are covered more negatively, citizens trust the EU less.
Figure 1 .
Figure 1.Development of media visibility and valence of immigration related issues.Note.The x-axis shows the ESS waves 6 (2012), 7 (2014), and 8 (2016).The right y-axis shows the visibility of immigration coverage as the percentage of all claims relating to immigration (see dotted line).The left y-axis shows the average valence of the immigration-related claims (see solid line) on a scale ranging from -1 (negative or against immigration) to 1 (positive or in favor of immigration).
Figure 2 .
Figure 2. Effect of media visibility of refugees on EU trust for different ideological groups.
Figure 3 .
Figure 3.Effect of the valence of immigration coverage on EU trust for different ideological groups.
Table 1 .
Respondents and articles per country and ESS round.
Note: The monthly number of asylum applications is averaged over the period between survey rounds, i.e. approximately two years.The absolute number of monthly asylum applications ranged between 0 and 94,350. | 2019-05-21T13:05:59.765Z | 2019-04-12T00:00:00.000 | {
"year": 2019,
"sha1": "56a710403da9c96e17087b25c1f04c5f402d90e3",
"oa_license": "CCBYNC",
"oa_url": "https://journals.sagepub.com/doi/pdf/10.1177/1465116519841706",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "08ba1d473ebcd4e41adad150b9c5be4abe540ae5",
"s2fieldsofstudy": [
"Political Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
119222802 | pes2o/s2orc | v3-fos-license | Group structures and representations of graph states
A special configuration of graph state stabilizers, which contains only Pauli $\sigma_X$ operators, is studied. The vertex sets $\xi$ associated with such configurations are defined as what we call X-chains of graph states. The X-chains of a general graph state can be determined efficiently. They form a group structure such that one can obtain the explicit representation of graph states in the X-basis via the so-called X-chain factorization diagram. We show that graph states with different X-chain groups can have different probability distributions of X-measurement outcomes, which allows one to distinguish certain graph states with X-measurements. We provide an approach to find the Schmidt decomposition of graph states in the X-basis. The existence of X-chains in a subsystem facilitates error correction in the entanglement localization of graph states. In all of these applications, the difficulty of the task decreases with increasing number of X-chains. Furthermore, we show that the overlap of two graph states can be efficiently determined via X-chains, while its computational complexity with other known methods increases exponentially.
I. INTRODUCTION
Graph states [1][2][3][4][5][6][7] represent specific multipartite entangled quantum systems. They are an important resource for measurement-based quantum computation: there, the multipartite entanglement of cluster states (a special class of graph states) is consumed by local measurements on subsystems. Depending on the measurement outcomes, local unitary transformations of the remaining systems are performed. In this way, certain quantum operations can be implemented.
Graph states can be represented in the stabilizer formalism as eigenstates of certain tensor products of Pauli σ X -and σ Z -operators (the graph state stabilizers). The explicit structure of the stabilizer operators depends on the structure of the underlying graph. The stabilizers form a group (under multiplication), which is generated from n generators, where n is the number of vertices of the graph.
In this paper we will introduce the concept of X-chains. X-chains are subsets of vertices of a given graph which correspond to graph state stabilizers that consist only of Pauli σ X -operators. We will show that these X-chains form a group. Not every graph contains an X-chain. However, it will be shown that if a graph does contain X-chains, this fact can be used as an efficient tool to determine essential properties of the corresponding graph state, such as its overlap with other graph states, its entanglement characteristics, and the existence of error correcting code words in subsystems of graph states. Note that the overlap of two graph states cannot be determined efficiently up to date. The X-chains provide an efficient method to solve this problem.
While usually graph states are given in the Z-basis, the concepts and methods developed in this paper show that it is often favorable to represent graph states in the Xbasis, in particular when one wants to study overlaps of graph states or determine their entanglement properties. The reason for this fact is that for all graph states originating from the same number of vertices, the probability distribution of outcomes of local Z-measurements are uniform, while they are non-uniform for outcomes of local X-measurements. Different X-measurement outcomes of two graph states reflect their difference in the X-chain groups, as the existence of an X-chain in a graph state implies vanishing probability of certain X-measurement outcomes. Reversely, X-chain groups of graph states determine their representation in the X-basis.
In the present paper we will focus on introducing the concept of X-chains, illustrating it with examples, and presenting some applications. The X-chain group of a given graph state can be efficiently determined, the search of X-chains in a given graph state will be studied in detail elsewhere [8] and a Mathematica package is available in [9]. This paper is organized as follows. In section I A, we review the essential concepts of graph theory and graph states. In section II, we review the representation of graph states in the Z-basis and point out its disadvantage in distinguishing graph states. Then we introduce X-chains and study their properties in section III. The representation of graph states in the X-basis is derived via the so-called X-chain factorization in section IV, where we show how X-chain groups feature the Xmeasurement outcomes on graph states. In section V, we discuss several applications of X-chains, namely the calculation of the overlap of two graphs states (section V A), the Schmidt decomposition of graph states in the X-basis (section V B) and the entanglement localization [10] of graph states against errors (section V B 1). The proofs are presented in Appendix A, and a list of notations and symbols is given in Appendix B. vertices. A symmetric relation between two vertices v 1 and v 2 , e.g. a two-way bridge between two islands, can be represented by the vertex set e = {v 1 , v 2 }, which is called undirected edge. Let ξ a , ξ b ⊆ V G be two subsets of V G , then the edges between ξ a and ξ b are the edges e = {v a , v b }, which have one vertex v a ∈ ξ a and the other vertex v b ∈ ξ b . The set of these edges is denoted by E G (ξ a : ξ b ). A vertex v 1 is a neighbor of v 2 , if they are connected by an edge. The set of all neighbors of v, called the neighborhood of v, is denoted as N v . In Table I we list two of the relevant types of graphs, which will be considered in the main text.
A graph F is a subgraph of G, if its vertices and edges are subsets of the vertex set and the edge set of G, respectively, i.e., V F ⊆ V G and E F ⊆ E G . A subgraph induced by a vertex set ξ ⊆ V G is defined as the graph G[ξ] := (ξ, E G (ξ : ξ)), which has the edge set E G (ξ : ξ) consisting of edges between vertices inside the set ξ. b. Binary notation: In this paper, we use binary numbers to denote a subset of vertices of graphs. Let G be a graph with vertices V G = {1, ..., n} and ξ ⊆ V G be a vertex subset. We denote the binary number of ξ as with E.g. in a 4-vertex graph, 0110 = i {2,3} . The tensor product of Pauli-operators σ α with α ∈ {x, y, z} is denoted as with σ 0 α = 1, σ 1 α = σ α . E.g. for n = 4, σ {2,3} α := 1 ⊗ σ α ⊗ σ α ⊗ 1.
II. REPRESENTATION OF GRAPH STATES
We review the representation of graph states [6,7]. A given graph with n vertices has a corresponding quantum state by associating each vertex v i with a graph state stabilizer generator g i , Here, N i is the neighborhood of the vertex v i . A graph state |G is the n-qubit state stabilized by all g i , i.e., The n graph state stabilizer generators, g i , generate the whole stabilizer group (S G , ·) of |G with multiplication as its group operation. The group S G is Abelian and contains 2 n elements. These 2 n stabilizers uniquely represent a graph state on n vertices. Let us define the"induced stabilizer", which is uniquely associated to a given vertex subset.
Definition 1 (Induced stabilizer).
Let G be a graph on vertices the ξ-induced stabilizer of the graph state |G . Here, g i is the graph state stabilizer generator of |G associated with i-th vertex.
Proposition 2 (Isomorphism of ξ-induction).
Let (S G , ·) be the stabilizer group of a graph state |G , P(V G ) be the power set of the vertex set of G. The vertexinduction operation s (ξ) G is a group isomorphism between (P(V G ), ∆) and (S G , ·), i.e.
where ∆ is the symmetric difference operation.
Proof. See Appendix A.
The summation operation maps the stabilizer group S G to its stabilized space, i.e. the density matrix of the graph state |G [7], Hence there exists also an operation mapping the group P(V G ) to graph states This is a well-known representation of graph states [7]. The representation of a graph state in the computational Z-basis |i Z [12] is given by where |i| is the Hamming weight of i. A G is the adjacency matrix of the graph G, and i, i A G = (i 1 , ..., i n )A G (i 1 , ..., i n ) T . For all graph states with n vertices, the probability amplitudes of Zbasis states i Z |G are homogenously distributed for all |i Z up to a phase −1, i.e. | i Z |G | = 1/2 n/2 . Therefore graph states with the same vertex set all have equivalent probability distribution of local σ Z -measurement outcomes. This means that the Z-basis representation conceals the inner structure of graph states.
Different from the Z-basis, the representation of graph states in the computational X-basis |i X ( i.e. σ ⊗n X |i X = (−1) |i| |i X ) reveals the structure of graph states to a certain degree. One aim in this paper is to find an efficient algorithm, i.e. a mapping from P(V G ) to |G , to represent graph states in the computational X-basis: In the rest of the paper, we denote the X-basis |i X as |i . I.e. |0 = |+ Z = (|0
III. X-CHAINS AND THEIR PROPERTIES
The commutativity of the measurement setting with graph state stabilizers determines whether one can obtain information about a graph state in the laboratory. Graph state stabilizers that commute with σ X -measurements are the stabilizers consisting of solely σ X operators. They are the key ingredient in the representation of graph states in the X-basis. We will call the vertex sets ξ inducing such configurations X-chains of graph states. In this section, the concept of X-chains will be introduced and their properties will be investigated.
The number of σ Z -operators in the graph state stabilizer s (ξ) G depends on the neighborhoods within the vertex set ξ. If a vertex v has an even number of neighbors within ξ, then the Pauli operator σ (v) Z appears an even number of times in s (ξ) G , such that the product becomes the identity. Therefore to find the X-chain configurations of graph states, one needs to study the symmetric difference of neighborhoods within the vertex set ξ, which we define as the correlation index of ξ as follows.
Definition 3 (Correlation index).
Let ξ be a vertex subset of a graph G. Its correlation index is defined as the symmetric difference of neighbourhoods within ξ, where N vi is the neighbourhood of v i and ξ = {v 1 , ..., v k }.
The name "correlation index" will become clearer in Theorem 13 and refers to the fact that for vanishing correlation index the corresponding stabilized state is factorized.
(These states are called X-chain states in Def.9.) Note that the set c ξ occurs as an "index" for the σ Z operator of the induced stabilizer s (ξ) G (see Proposition 5). Besides the correlation index, due to the anticommutativity of σ X and σ Z , the graph state stabilizers depend also on the so-called stabilizer parity of ξ.
Definition 4 (Stabilizer parity).
Let ξ be a vertex subset of a graph G. Its stabilizer parity in |G is defined as the parity of the edge number The stabilizer parity of ξ, π G (ξ) is positive if the edge number E(G[ξ]) is even, otherwise negative. The explicit form of the induced stabilizers is given in the following proposition.
Proposition 5 (Form of the induced stabilizer).
Let ξ be a vertex subset of a graph G. The ξ-induced stabilizer (see Def. 1) of a graph state |G is given by where c ξ is the correlation index of ξ and π G (ξ) is the stabilizer parity of ξ. can be represented in the following binary matrix in which each row represents a stabilizer: the bit strings on the left hand side of the divider are the possible vertex sets ξ occuring as a superscript for the Pauli σ X operators in Eq. A2, while the right hand side are their correlation indices c ξ occurring as superscript for the Pauli σ Z operators. This is the so-called binary representation of graph states [14][15][16]. We interpret this binary representation as an incidence structure [13] in Fig. 1b, in which the vertex sets ξ are depicted as the nodes in the lower row, while the upper row interprets the correlation indices c ξ . In the example of |S 3 , one observes that the correlation indices c ξ do not cover all possible 3-bit binary numbers. The vertex subsets are regrouped according to their correlation indices in Fig. 1c. The concept of regrouping is introduced via the definition of the so-called X-resources as follows.
Definition 6 (X-resources of correlation indices). We denote the set of correlation indices of a graph G as If a vertex set ξ has correlation index c, i.e. c G (ξ) = c, then we call ξ an (X-)resource of c-correlation in G. The (X-)resource set of c-correlation is written as Since in the example of |S 3 the correlation index of {2, 3} is ∅, each correlation index c ∈ C S3 has two X-resources ξ (c1) and ξ (c2) with ξ (c1) = ξ (c2) ∆{2, 3}. The number of X-resources of |S 3 is 2 3 . Therefore the graph state |S 3 generates 4 correlation indices corresponding to 4 binary numbers. The other 4 correlation indices are excluded due to the existence of the non-trivial ∅-correlation resource {2, 3}. This non-trivial ∅-correlation resource decreases the correlations of the graph state in the X-basis. Explicitly a non-trivial ∅-correlation resource induces a stabilizer consisting of solely σ X operators as follows.
G . We will call such vertex sets X-chains.
Definition 7 (X-chains).
Let |G be a graph state. An X-resource of ∅-correlation in G is called an X-chain of G. The set of all X-chains is denoted as X The X-chains of the graph states |S 3 , |S 4 and |C 3 are given as examples in Table II. The X-chains for certain types of graph states (i.e. linear graph states |L n , cycle graph states |C n , complete graph states |K n and star graph states |S n ) are studied in [8]. A Mathematica package is provided for finding X-chains in general graph states [9].
We point out that the X-chains form a group with the symmetric difference operation.
Lemma 8 (X-chain groups and correlation groups).
Let |G be a graph state. The set of X-chains together with the symmetric difference (X which we call call the correlation group of |G . Let Γ G and K G denote the generating sets of (X (∅) G , ∆) and (P(V G )/X (∅) G , ∆), respectively. The stabilizer group (S G , ·) is isomorphic to the direct product of the X-chain group and the correlation group, Graph G X-chains ΓG X-chain generators ΓG TABLE II: X-chain groups of simple graphs: The directed graphs shown under the X-chains illustrate the criterion of X-chains. Once a vertex is selected in a vertex subset ξ then one draws arrows from it to its neighbors. A vertex subset ξ is an X-chain if and only if all vertices of the graph are incident by even number of arrows. The X-chain groups ΓG are generated by their generating sets ΓG.
As a result, the graph state |G is the product of the Xchain group and correlation group inducing stabilizers, i.e.
Proof. See Appendix A.
Note that the brackets Γ G and K G denote the group generated by Γ G and K G , respectively The correlation group represents the partition of the powerset of vertex set P(V G ) regarding the correlation index of the vertex subsets ξ ∈ P(V G ). The members in the correlation group ξ ∈ K G possess distinct correlation indices. All the members in the c-correlation resource set ξ ∈ X (c) are connected by X-chains. Let ξ (c) 1 ∈ X (c) and ξ (c) 2 ∈ X (c) be two X-resources for the same correlation index c, then there must exist an X-chain γ ∈ Γ G , such that For instance, in the example of |S 3 (Fig. 1b), the resources of correlation i (c) = 111 (i.e. c = {1, 2, 3}) are connected by the X-chain {2, 3}, i.e. {1, 3} = {1, 2}∆{2, 3}. Therefore one can choose one member in X (c) to represent the whole resource set X (c) . Hence after the X-chain factorization the group (P ( In Eq. (A8) the Hilbert space H G of the graph state |G is first projected onto the subspace stabilized by the stabilizers s (γ) G with γ ∈ Γ G . It is the subspace, span (Ψ ∅ ), spanned by the stabilized states |ψ ∅ with In this projection, |ψ ∅ are all product states, since every X-chain stabilizer s (γ) G commutes with the σ ×n X operator. After the first projection, the graph state is then obtained via projecting the subspace span (Ψ ∅ ) into the state that is stabilized by the stabilizers s This approach will be employed in the next section to derive the representation of graph states in the X-basis.
IV. X-CHAIN FACTORIZATION OF GRAPH STATES
We express |G in the X-basis as |G = Since s G |i X = ±|i X . In order to fulfill Eq. (22), however, it follows that only the plus sign is possible, i.e.
That means that the possible X-measurement outcomes are solely those X-basis states |i G , which are stabilized by all X-chain stabilizers s (γ) G . A graph state |G is hence a superposition of such particular X-basis states.
E.g. the star graph state |S 3 in Fig. 1 is stabilized by the X-chain stabilizer s In this section, we will derive a general mapping from the X-chain group and correlation group to graph states in the X-basis. This is the question we raised in section II. We first introduce X-chain states and K-correlation states (Definition 9), which span the subspace stabilized by X-chain stabilizers and K-correlation stabilizers, respectively. Given the explicit form of the X-chain states and correlation states in the X-basis (Proposition 10 and 11), one arrives at the X-chain factorization representation of graph states in Theorem 13.
Definition 9 (X-chain states and correlation states). Let |G be a graph state with the X-chain group Γ G and the correlation group K G . We define the X-basis state |i (xΓ G ) (shortly |i (xΓ) ) as the state stabilized by the Pauli σ X operators such that The local unitary transformed states are called X-chain states. Let K ⊆ K G be a correlation subgroup, then a K-correlation state of graph state |G , |ψ K (ξ) , is defined as with ξ ∈ K G / K . Let K ⊆ K ⊆ K G , a set of Kcorrelation states are denoted as In this notation, the set of X-chain states is then written as Ψ Note that |ψ ∅ (∅) = |i (xΓ) , and X-chain states |ψ ∅ (ξ) are ∅-correlation states. The X-basis |i (xΓ) is the fundamental state from which the non-vanishing X-basis components in graph states can be derived. According to its definition, |i (xΓ) depends on the generating set of the Xchain group and the correlation group of a given graph state. One can employ the following approach to obtain the fundamental X-chain state |i (xΓ) .
Proposition 10 (X-chain states in X-basis). Let |G be a graph state with the X-chain group Γ G and the correlation group K G . Let Γ G = {γ 1 , γ 2 , ...}, and γ i = {v i1 , v i2 , · · · }. The generating set Γ G and K G can be chosen as Here, the first element of , with Proof. See Appendix A.
The vertices v i1 are the key for the determination of |x Γ . First of all, we choose the X-chain generators Γ G , such that γ i ⊆ γ j , for all γ i , γ j ∈ Γ G . That means each X-chain generator possesses at least a vertex v i1 as its own vertex exclusively, i.e. v i1 ∈ γ i \ (∪ j =i γ j ). In other words, the vertex v i1 represents the X-chain generator γ i uniquely. The correlation group generators are then chosen as the single vertex V G \ i {v i1 }. At the end, the corresponding vertex set x Γ of the fundamental X-chain state |i (xΓ) is the set of v i1 , whose X-chain generator γ i possesses a negative stabilizer-parity. Note that in general the choice of the X-chain generators Γ G is not unique, therefore the fundamental X-chain states |i (xΓ) are neither. However, the above mentioned approach still arrives to the same set Ψ (∅) of X-chain states, since the X-chain group is unique. Let us illustrate these concepts by an example, the graph state |K ¬1 4 ( Fig. 2a), which corresponds to the graph with one edge missing from the complete graph K 4 . Its X-chain generators can be chosen as Fig. 2c). The exclusive vertex v 1 for γ 1 can be chosen as 1, while v 2 for γ 2 is 4. Since only γ 1 has negative parity, therefore x Γ = {1} and the fundamental X-chain state is |i (xΓ) = |1000 .
From the fundamental X-chain state |i (xΓ) one can derive all the X-chain states and correlation states with the following proposition.
Proposition 11 (Form of X-chain states, K-correlation states). Let ξ ∈ K G be an X-resource and K ⊆ K G . An X-chain state is given as where π G (ξ) is the stabilizer parity of ξ (see Eq. (13)), and c ξ is the correlation index of ξ.
A K-correlation state is the superposition of X-chain states, Proof. See Appendix A.
According to this proposition, the X-chain states of |K ¬1 4 derived from |i (xγ ) = |1000 are given in the table in Fig. 2c. Alternatively, one can also choose the X-chain gener- Fig. 2d).
In this case v 1 = 2 and v 2 = 4. The parities of γ 1 and γ 2 are both negative, hence |i (xΓ) = |0101 . However, the sets of obtained X-chain states Ψ (∅) are identical in both cases. The correlation states |ψ K (ξ) are then the superposition of their corresponding X-chain states. E.g. in Fig. 2c The correlation states have the following properties.
Corollary 12 (Properties of K-correlation states). Let K ⊆ K G be a correlation index subgroup, then Therefore also the space span(Ψ (K) ), see Eq. (26), is stabilized by s 3. For κ ∈ K G and κ ∈ K, the K ∪ {κ}-correlation state can be obtained by Proof. See Appendix A.
With these properties one can derive the representation of graph states in the X-basis.
Theorem 13 (X-chain state representation of graph states). Let |G be a graph state. Then |G is a K G -correlation state, which is a superposition of X-chain states |ψ ∅ (ξ) , i.e. Proof. According to property 1 in Corollary 12, one can infer that |ψ K G is stabilized by all graph state stabilizers s (ξ) G with ξ ∈ Γ G × K G . As a result of Lemma 8, |ψ K G is stabilized by the whole graph state stabilizer group S G . According to the definition of graph states in the stabilizer formalism, one can infer that |G = |ψ K G . The explicit form of |ψ K G in Eq. (33) is obtained by Proposition 11. Note that the graph state obtained by this theorem may differ from the real one by a global phase −1, i.e. |G = − |ψ K G [31]. We summarize the approach of Xchain factorization of a graph state representation in a so-called factorization diagram.
Algorithm 14 (Factorization diagram).
The X-chain factorization of graph states can be described in the factorization diagram shown in Fig. 3. 1. One decomposes the group P(V G ) into the direct product of the X-chain group Γ G and the correlation group K G (Lemma 8).
2. From the X-chain group Γ G , one obtains the set of X-chain states Ψ ∅ K G (Proposition 10).
3. From the correlation group K G , one obtains graph states via the superposition of the X-chain states in Ψ ∅ K G (Theorem 13).
(P(VG), ∆) The arrows in the factorization diagram can be interpreted as a mapping from the sets of X-resources to their corresponding stabilized Hilbert subspaces. As we already discussed at the end of the section II, a graph state is mapped from the powerset of vertices by stabilizer induction, which is depicted in the left hand side of the equality in the diagram. The equation in the first row is the X-chain factorization of the group (P(V G ), ∆) (Lemma 8). The arrow from the X-chain group Γ G to the X-chain states Ψ (∅) interprets the mapping from the X-chain group to the stabilized subspace spanned by Ψ (∅) (Definition 9 and Proposition 10). The arrow from the correlation group K G through the X-chain states Ψ (∅) to the K G -correlation state is a mapping from the subspace span(Ψ (∅) ) to the K G -correlation state |ψ K G , which is stabilized by the K G -stabilizers. This arrowrepresented mapping is the summation (superposition) of the X-chain states over the correlation group K G (Proposition 11). Since the graph state |G is the only stabilized state of the stabilizers induced by the group Γ G × K G , it is identical to the K G -correlation state |ψ K G (Theorem 13), which is represented by the equality of the last line in the factorization diagram. With the help of the factorization diagram in Fig. 2b, the graph state |K Hence the representation of graph states in the Z-basis in Eq. (10) can be reformulated as Comparing this Z-representation with the representation of a graph state in the X-basis given in Eq. (33), the number of terms in the representation is reduced from 2 |V G | to 2 |K G | . The correlation group K G can be directly obtained if one knows the X-chain group. The X-chain group can be searched by a criterion that the cardinality of the intersection of the vertex neighborhood with the X-chain |N v ∩ ξ| should be even for all v ∈ V G [8]. The search of the X-chains of a graph state |G is equivalent to finding the 2-modulus-kernel of the adjacency matrix of the graph G. As this is efficient, the representation of graph states in the X-basis is feasible. The larger the X-chain group that a graph state possesses, the smaller is its correlation group and hence the more efficient is its X-chain factorization. Note that not every graph state has non-trivial Xchains (non-trivial means not the empty set). For graph states without non-trivial X-chains, their X-chain factorization contains all X-basis states, and thus has the same difficulty as their Z-representation.
Besides, the X-chain factorization of graph states in Theorem 13 implies that the possible outcomes of Xmeasurements are only the X-chain states, |ψ ∅ (ξ) . Consequently two graph states with different X-chains can have different X-chain states, and hence are distinguishable via the X-measurement outcomes. In Table III, we list the X-chain generators and X-chain states of graph states with 3 vertices. Since the X-chain states of these graph states are different from each other, one can therefore distinguish these 8 graph states via local X-measurements with non-zero probability of success.
V. APPLICATION OF THE X-CHAIN FACTOR-IZATION
The representation of graph states in the X-chain factorization reveals certain substructures of graph states. In this section, we discuss its usefulness for the calculation of graph state overlaps, the Schmidt decomposition and unilateral projections in bipartite systems.
A. Graph state overlaps
In [17], the overlaps of graph states are the basis for genuine multipartite entanglement detection of randomized graph states with projector-based witnesses 1/2 − |G G|, see [18,19], where G is a connected graph.
An expectation value tr(|H H|G G|) > 1/2 indicates the presence of genuine multipartite entanglement of the graph state |H .
In general, a graph state Since the operators U According to Eq. (10), where G∆H is the symmetric difference of the graphs G and H. G∆H is the graph (V G∆H , E G∆H ), whose vertices and edges are V G∆H = V G = V H and E G∆H = E G ∪E H \E G ∩E H , respectively. However, the complexity of this calculation increases exponentially with the size of the system. The quantity obtained from Eq. (10), corresponds to the difference of the positive and negative amplitudes of |G in the Z-basis. We can define for each graph state |G a Boolean function f G := i Z , i Z A (mod 2) with A being the adjacency matrix. The function f G is balanced, if and only if 0 ⊗n X |G = 0, otherwise it is biased. We introduce the bias degree of a graph state and define its Z-balance as follows.
Definition 15 (Bias degree and Z-balanced graph states).
The (Z-)bias degree β of a graph state |G with n vertices is defined as the overlap where |0 X = (|0 Z + |1 Z ) / √ 2. A graph state with zero bias degree is called Z-balanced.
The bias degree is related to the weight of a graph state, ω − (G) := |{i Z : i Z |G / | i Z |G | = −1}|, which is equal to the number of minus amplitudes in |G in the Z-basis [20]. The probability of finding a negative amplitude in the Z-basis is 1/2 − β(|G )/2, which is equal to ω − (G) /2 n . Note that as a result of Eq. (36), the bias degree of a graph state is equal to the sum of its stabilizer parities.
As a result of Theorem 13, the bias degree 0 x | G , depends only on the number of X-chain generators and the parity of their corresponding X-resources.
Corollary 16 (Graph state overlaps and bias degrees). The overlap of two graph states |G and |H is equal to the bias degree of the graph state |G∆H , i.e.
G|H = β(|G∆H ). (43)
The bias degree of a graph state |G is equal to where Γ G is the X-chain generating set of |G , δ is the Kronecker-delta and π G (γ) is the stabilizer-parity of Xchain generators γ.
In [20], the authors relate the weight ω − (G) to the binary rank of the adjacency matrix of graphs. Our Corollary 16 is a similar result showing that the bias degree depends on the binary rank of the adjacency matrix, which is equal to n − |Γ G |.
Here, we focus on the bias degree and Z-balance of graph states. Since the X-chain group of a graph state can be efficiently determined, instead of Eq. (39), Corollary 16 provides an efficient method to calculate the graph state overlap. As a result of Corollary 16, we arrive at the following corollary.
Corollary 17 (Z-balanced graph states).
A graph state is Z-balanced, if and only if it has at least one X-chain generator γ − with negative stabilizer-parity, i.e. |E (G [γ − ])| is odd. Two graph states are orthogonal, if and only if |G∆H is Z-balanced.
Knowing all the Z-balanced graph states with vertex number n allows to identify all pairs of orthogonal graph states with n vertices. Note that relabeling a graph state (graph isomorphism) does not change its bias degree, since the structure of the X-chain group does not change under graph isomorphism.
In Fig. 4, the Z-balanced graph states up to five vertices are listed. Every graph in the figure represents an isomorphic class. From these balanced graph states one can obtain orthogonal graph states via the graph symmetric difference. Examples of orthogonal graph states derived from the Z-balanced graph states |C 3 and |C 5 are shown in Fig. 5 and 6 , respectively, (C 3 and C 5 are the first and fifth graph in Fig. 4 ).
B. Schmidt decomposition
In this section, we discuss the Schmidt decomposition of graph states represented in the X-basis, which is derived via the X-chain factorization. The Schmidt decomposition of a graph state for an A|B-bipartition reads where φ Here r S is the Schmidt rank of the graph state |G with respect to the partition A versus B. Its value is studied in the section III.B of [6] via the Schmidt decomposition of graph states in the Z-basis, where supp(s We derive the Schmidt decomposition of graph states in the X-basis in the following steps. First, we generalize the X-chain factorization of graph states (Theorem 13) to the X-chain factorization of arbitrary correlation states (Theorem 18). Second, we introduce three correlation subgroups, whose correlation states are A|B-biseparable (Lemma 20). Third, we prove the orthonormality of these correlation states (Lemma 21). At the end, we arrive at the Schmidt decomposition in Theorem 22.
The X-chain factorization of graph states in Theorem 13 can be generalized to correlation states (introduced in Eq. (25) and (A15)) as follows.
Theorem 18 (X-chain factorization of K-correlation states). Let K 1 , K 2 ⊆ K G be two disjoint correlation subgroups of a graph state |G , and K = K 1 ∪ K 2 . Then the K-correlation state is a superposition of K 1 -correlation states, with ξ ∈ K G / K being an element in their quotient group. Theorem 13 is a special case of this theorem related by K = K 1 × K 2 = ∅ × K G . Proof. According to the definition in Eq. (25) it holds (49) Due to the commutativity of the graph state stabilizers it follows According to Proposition 2, s G , the product of (1 + s (κ) G ) with κ ∈ K 2 becomes the sum of the stabilizers s where the second equality is a result of property 2 in Corollary 12.
Algorithm 19 (Factorization diagram of correlation states).
Theorem 18 can be interpreted by the factorization diagram in Fig. 7.
1. One decomposes the group P(V G ) into the direct product of the X-chain group Γ G and the correlation group K G .
2. From the X-chain group Γ G , one obtains the set of X-chain states Ψ ∅ K G . 3. From the correlation group K 1 , one obtains graph states via the superposition of the X-chain states in Ψ ∅ K G within K 1 .
At the end the correlation state |ψ K1∪K2 (ξ)
is the superposition of the K 1 -correlation states K G inside the correlation group ξ ∈ K 2 (Theorem 18).
The subspace of X-chain states span(Ψ (∅) K G ) are projected via K 1 -stabilizers to the space spanned by the K 1 -correlation states |ψ K1 (ξ) . Further, the subspace span(Ψ (K1) K G ) are then projected via K 2 -stabilizers to the K 1 ∪ K 2 -correlation states |ψ K1∪K2 (ξ) . With this theorem, one can obtain the Schmidt decomposition of graph states, by appropriate selection of the correlation subgroup K 1 , such that its corresponding K 1 -correlation states are A|B-separable and mutually orthonormal.
Let |G be a graph state with the correlation group Their symmetric difference is identical to the cycle graph C5, where C5 is the fifth graph in Fig. 4. K G and A|B be a bipartition of its vertices. In order to find the Schmidt decomposition, we select K 1 as the disjoint union of three correlation subgroups specified as follows.
1. The correlation subgroup, whose elements possess a correlation index only in B: 2. The correlation subgroup, whose elements possess a correlation index only in A and only consists of vertices in A: 3. The correlation subgroup, whose elements possess a correlation index only in A, consists of vertices in B and has even number of edges between all β ∈ K (B) : These three groups form a special group called A B-correlation group.
(The notation "A B" is used, as the group is not symmetric with respect to exchanging A and B.) We will show in Lemma 20 that all A B-correlation states |ψ K A B (ξ) with ξ ∈ K A B , shortly |ψ A B (ξ) , are A|B-separable. The corresponding quotient group is denoted as and called (A B)-correlation group. (The notation A B is introduced, as there is again no symmetry under exchange of A and B, as the correlation index c ξ of ξ ∈ K A B is always inside A.) We will show in Theorem 22 that the Schmidt rank of |G is equal to the cardinality | K A B |. That means that the correlation subgroup K A B generates the A|B correlation in the graph state |G . Note that we investigated many graphs and found their correlation subgroups K In this A B-factorization, the correlation group K G is divided into four subgroups. Let us take the graph of "St. Nicholas's house" in Fig. 8a as an example. This "house" state |G House is divided into the bipartition A = {1, 2, 3} versus B = {4, 5}. The correlation group factorization is shown in Fig. 8b. The X-chain group of |G House is {{1, 2, 3}}. The X-resources are factorized by the X-chain group, P(V G ) = Γ G × K G , see the upper row in Fig. 8b. The array is the binary representation of the stabilizers induced by the X-chain generators Γ = {{1, 2, 3}} and correlation group generators K G = {{2}, {3}, {4}, {5}}, it corresponds to the incidence structure on its right hand side. In the second row of Fig. 8b, the X-resources, whose correlation indices lie in the system B, are first grouped together into K (B) = {{4, 5}, {2, 3, 4}} . Second, the X-resources ξ, whose correlation indices c ξ and itself ξ are both contained by V A , are grouped into K Note that |ψ A B (ξ) will be shown to be the Schmidt basis in Theorem 22. There, one will also see that the global phase π G (ξ) ensures positive Schmidt coefficients.
Let us continue to consider the "St. Nicholas's house" state as an example. According to Proposition 10, the fundamental X-chain state of |G House is |i xΓ = |10000 . Then from the K
(A)
A -correlation states, one can read off According to Lemma 20, A B-correlation states are and since π G ({2}) = 1 Orthonormality of the states within the subspaces still needs to be verified. This holds for the explicit example |G House in Eq. (66) and (67). In the general case, the orthonormality is shown in the following lemma. Proof. See Appendix A.
We can now construct the Schmidt decomposition of graph states with A B-correlation states as follows.
Theorem 22
(Schmidt decomposition in A B-correlation states). The Schmidt decomposition of a graph state |G is the superposition of its A B-correlation states, (69) The Schmidt rank r S and geometric measure of the A|Bbipartite entanglement [21,22] can be expressed by Proof. Employing Theorem 13 and 18 together with Lemma 20 one can prove that the graph state |G is equal to the superposition of all biseparable A B-correlation states |ψ A B (ξ) = π G (ξ)|φ A B (ξ) . As a result of the orthonormality of |φ [21]. For the bipartite case the singular value decomposition is equivalent to the Schmidt decomposition. Since the Schmidt coefficients are all 2 −|K A B |/2 , if follows that the geometric measure of bipartite entanglement of a graph state, E A|B g := −2 log 2 (s max ), is equal to log of the Schmidt rank, i.e. log 2 (r S ) = K A B . As a result, the A B-correlation states π G (ξ)|φ A B (ξ) are the A|B-separable states, which are closest to |G .
According to [7], the Schmidt rank is given by | in the language of the X-chain factorization. The Schmidt rank is also equal to the cardinality of the matching [32] between A and B [23]. The matching is the set of edges between A and B, which do not mutually share any common vertex [11].
The group
2. Via the X-chain group Γ G , one obtains the set of X-chain states Ψ ∅ .
The Schmidt basis states |φ (A)
A B (ξ ) are constructed from the superposition of states in Ψ ∅ inside the correlation group K The A B-factorization diagram of |G House is shown in Fig. 8c. As a result of this theorem, the Schmidt decomposition of this state is with |ψ A B (∅) and |ψ A B ({2}) being given in Eq. (66) and (67). The house state has Schmidt rank r S = 2 and the geometric measure of bipartite entanglement E (A|B) g = −2 log 2 (min ψ | ψ A ψ B |G House |) = 1.
Entanglement localization of graph states protected against errors
In this section, we consider the localization of entanglement [10] on graph states shared between Alice and Bob (A|B-bipartition), see Fig. 10a. Alice measures the graph state with Pauli-measurements on her system, then tells Bob her measurement results via a classical channel. At the end, Bob should possess a bipartite maximally entangled state which he knows. A connected graph state is maximally "connected" with respect to entanglement localization, if every pair of vertices can be projected onto a Bell pair with local measurements [7]. The most simple approach to localize the entanglement of |G in the subsystem {B 1 , B 2 } is finding a path between B 1 and B 2 , then removing vertices outside the path with Z-measurements and at the end measuring each vertex on the path between {B 1 , B 2 } in the X-direction. However, the resulting state depends on the measurement outcomes. If errors occur in Alice's measurements, it will leads to a wrong state of Bob. Therefore error correction would be a nice feature in the entanglement localization of graph states.
Graph states are stabilizer states. These states can be exploited as quantum stabilizer codes [7,14,15,24], which are linear codes and protect against errors. In the Schmidt decomposition, the measurement outcomes on the system A imply which states are projected in the system B. The existence of X-chains on Alice's side can provide simple repetition codes as the Schmidt basis in the Schmidt decomposition in X-basis. Therefore, instead of removing the vertices outside a selected path between B 1 and B 2 , we will make X-measurements on them to take the benefit of X-chains for the error correction.
The graph state |G in Fig. 10a is taken as an example. This state has the X-chain generating set and As a result, the Schmidt decomposition of the graph state is 16 In this example, one observes that there are 2 X-chain generators {1, 2} and {1, 3} on Alice's 3-qubit system. This encodes the following [3,1,3] repetition code [14,15,24] in the Schmidt vectors on Alice's system: These codes have the Hamming distance 3. Thus, a single Z-error can be corrected. After a measurement in the X-basis, Alice can therefore correct her result before sending it to Bob. In this approach, Bob will gain the correct acknowledgement of his maximally entangled state after Alice's measurement with confidence. Although the repetition code cannot correct phase errors (the X-errors in X-measurements), it is already sufficient for our task, since a phase error on Alice's side does not change the measurement outcomes. This application may be useful for quantum repeaters [25]. The parties B 1 and B 2 can be at a large distance, such that they are not able to create directly an entangled state between them. In this case, they need the help from Alice as a repeater station to project the entanglement onto B 1 and B 2 .
VI. CONCLUSIONS
In this paper, we discussed properties of the representation of graph states in the computational X-basis. We introduced the framework of X-resources and correlation indices and linked them to the binary representation of graph states. A special type of X-resources was defined as X-chains: an X-chain is a subset of vertices for a given graph, such that the product of the stabilizer generators associated with these vertices contains only σ X -Pauli operators. The set of X-chains of a graph state is a group, which can be calculated efficiently [8]. The X-chain groups revealed structures of graph states and showed how to distinguish them by local σ X measurements. We introduced X-chain factorization (Lemma 8, 13) for deriving the representation of graph states in the X-basis, and it was shown that a graph state can be represented as superposition of all X-chain states (Theorem 13). This approach was illustrated in the so-called factorization diagram (Algorithm 14). The larger the Xchain group is, the fewer X-chain states are needed for representing the graph state.
We demonstrated various applications of the X-chain factorization. An important application is its usefulness for efficiently determining the overlap of two graph states (Corollary 16), for which no efficient algorithm was known before.
Further, we generalized the X-chain factorization approach such that it allows to find the Schmidt decomposition of graph states, which is the superposition of appropriately selected correlation states (Theorem 22, Algorithm 23 and Mathematica package in [9]).
Further benefits of the X-chain factorization are error correction procedures in entanglement localization of graph states in bipartite systems. This could be useful for quantum repeaters [25].
The results of this paper can be extended to general multipartite graph states, e.g. weighted graph states [26,27] and hypergraph states [28][29][30]. Another possible extension of these results is to consider the representation of graph states in a hybrid basis, i.e. for a subset of the qubits one adopts the X-basis, while for the other parties one uses the Z-basis. The graph state in such a hybrid basis can even have a simpler representation (i.e. a smaller number of terms in the superposition) than the one obtained by X-chain factorization. Besides, in [6,7,20,23], various multipartite entanglement measures for graph states were studied. We expect that the approach of X-chain factorization may also be useful in these cases. | 2015-07-23T14:33:22.000Z | 2015-04-13T00:00:00.000 | {
"year": 2015,
"sha1": "130eeaf52ec5bf4357eb022f597c98c5a69ff187",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/1504.03302",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "130eeaf52ec5bf4357eb022f597c98c5a69ff187",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Mathematics",
"Physics"
]
} |
264499115 | pes2o/s2orc | v3-fos-license | On-Device Execution of Deep Learning Models on HoloLens2 for Real-Time Augmented Reality Medical Applications
The integration of Deep Learning (DL) models with the HoloLens2 Augmented Reality (AR) headset has enormous potential for real-time AR medical applications. Currently, most applications execute the models on an external server that communicates with the headset via Wi-Fi. This client-server architecture introduces undesirable delays and lacks reliability for real-time applications. However, due to HoloLens2’s limited computation capabilities, running the DL model directly on the device and achieving real-time performances is not trivial. Therefore, this study has two primary objectives: (i) to systematically evaluate two popular frameworks to execute DL models on HoloLens2—Unity Barracuda and Windows Machine Learning (WinML)—using the inference time as the primary evaluation metric; (ii) to provide benchmark values for state-of-the-art DL models that can be integrated in different medical applications (e.g., Yolo and Unet models). In this study, we executed DL models with various complexities and analyzed inference times ranging from a few milliseconds to seconds. Our results show that Unity Barracuda is significantly faster than WinML (p-value < 0.005). With our findings, we sought to provide practical guidance and reference values for future studies aiming to develop single, portable AR systems for real-time medical assistance.
Introduction
The integration of Artificial Intelligence (AI) in Augmented Reality (AR) systems is beneficial for a wide range of industrial and clinical applications [1].AR systems often provide clinically relevant information to users about their surroundings, with information being derived from onboard image-based sensors.Most computer vision tasks for AR applications (e.g., image classification, object detection and pose estimation) benefit greatly from state-of-the-art Deep Learning (DL) models, specifically Convolutional Neural Networks (CNNs), enhancing performance and user experience [2].Deep learning techniques commonly address tasks such as: object detection, frequently performed with real-time CNN-based DL models such as Yolo [3]; 3D semantic segmentation, which enables enhanced spatial understanding of indoor environments using point cloud data [4,5]; and hand gesture recognition [6,7], with promising implications in the field of touchless medical equipment [8].However, integrating complex DL models into AR systems with limited computational capabilities can be challenging, particularly when real-time performance is needed [9].
AR systems in the medical field, e.g., for surgical assistance, ideally encompass several features [10]: (i) portability to accommodate various surgical and clinical settings, (ii) user-friendliness, (iii) real-time performance to provide timely and relevant information, and (iv) when feasible, voice-controlled functionality to leave the hands of the clinician free.AR Head-Mounted Displays (HMDs), or AR glasses, are currently being explored in the medical field as they can meet all the above-mentioned requirements [11][12][13].Current studies focus on 3D visualization of pre/intra-operative imaging, such as blood vessels and brain MRI, to enhance decision-making processes and provide real-time feedback during complex surgical procedures (e.g., laparoscopy [14][15][16] and endoscopy [17,18]).Beyond the operating room, AR HMDs also show promise in medical training and education, superimposing virtual anatomical models and interactive simulations for an immersive learning experience [19][20][21][22].Additionally, these devices are becoming pivotal in telemedicine, enhancing remote patient consultations with enriched visual aids and data overlays [23].
Among the numerous AR-or Mixed Reality (MR)-HMDs available on the market, HoloLens2 is well suited to the development of DL in AR applications due to its onboard processing capabilities.Its integrated "Holographic Processing Unit" (HPU) and suite of sensors enable complex functionalities, e.g., spatial mapping, hand and eye tracking, and voice control [24].In the forthcoming years, cheaper, power-optimized, and more efficient processors are expected to become available on the market, further supporting the potential for combining AI and AR in standalone wearable AR devices [9].A striking example of the rapid advancement in AR technology is the upcoming mixed-reality headset of Apple, Apple's Vision Pro [25], which is believed to revolutionize the AR HMD user experience.
Despite HoloLens2 having an integrated GPU, its computing capabilities remain limited.Thus far, DL integration has mostly been performed by executing the DL models on an external server, which receives the input data (RGB or depth images) and sends the result back through a wireless connection.However, this architecture can be suboptimal for real-time medical applications due to the latency, instability, and accessibility issues associated with Wi-Fi connectivity.In particular, the reliance on external servers can be problematic in off-grid areas or during emergency situations.Another drawback of wireless connection is that it introduces additional radio frequency transmissions and requires supplementary computing infrastructure.This can be problematic in environments that are either sensitive or already limited in space, such as surgical rooms [26].Therefore, in medical contexts, a straightforward, single-device AR system that ensures real-time performance can simplify the user experience and reduce potential technological pitfalls.In light of these limitations, this study aims to: • Provide an overview of applications, identified in the literature, in which deep learning models are directly executed on the HoloLens2.
•
Assess the feasibility of executing models with various complexity on HoloLens2.Our research builds upon the work of Lazar [27], which concluded that Barracuda was faster than WinML for a specific application using the Lenet5 model.In this study, we conduct a systematic evaluation of a broader range of DL models, providing reference values and relevant tests to quantify the performance of real-time medical applications on HoloLens2.With our findings, we aim to provide valuable technical guidance on how to integrate Machine Learning (ML) and DL models on HoloLens2.
Related Work
A review of academic works where ML/DL models are directly executed on HoloLens2 was performed.The review process involved a structured search on Web of Science, crossreferencing, and a hand search.For the structured search on Web of Science, a search string was used: "HoloLens" AND ("neural network" OR "deep learning" OR "machine learning" OR "artificial intelligence" OR "AI" OR "A.I.").This yielded a total of 79 articles.After removing one duplicate, the following exclusion criteria were applied:
•
Papers in which HoloLens2 is not used (three papers excluded).
•
Papers in which the developed AR application does not integrate ML/DL models (25 papers excluded).
•
Papers in which the integration of ML/DL models is performed by adopting a clientserver architecture (39 papers excluded).
The filtering process resulted in only two papers in which ML/DL models are directly executed on HoloLens2 [28,29].It also revealed that the majority of the works that integrate AI in HoloLens applications adopt a client-server architecture (e.g., Refs.[30,31]).Through cross-referencing and a hand search, we found six additional relevant ML/DL papers for HoloLens2 [27,[32][33][34][35][36].
Table 1 summarizes the studies found in the literature, reporting their application context, the ML/DL model used and its task, the adopted framework (or Application Programming Interfaces, API), and the speed of the model (i.e., the inference time).Except for the study conducted by Lazar [27], all papers listed in Table 1 have opted for WinML as framework to integrate ML/DL models into HoloLens2.To the best of our knowledge, Lazar [27] is the only work in the literature where different approaches to run ML/DL models in HoloLens2 are implemented and compared.Table 1 also shows that the majority of the models are used to perform object detection [28,29,[32][33][34][35][36].These findings are in line with the literature review performed by Bohné [37], which focuses on studies that propose systems integrating machine learning and augmented reality.It is also worth mentioning that Bohné [37] suggests Unity Barracuda as framework to integrate ML/DL models in AR applications, being easy to implement and test.Von Atzigen [34], Doughty [29], and Doughty [28] applied object detection in specialized tasks within the medical field.Advanced techniques have been employed for the detection and pose estimation of surgical instruments [29], as well as for the prediction of surgery phases [28].These specific use cases highlight the potential of deep learning models in real-time AR medical applications.
Microsoft HoloLens2
HoloLens2 is the second generation of Microsoft mixed reality headsets.It offers substantial improvements compared to its predecessor, HoloLens1: a custom-built Holographic Processing Unit (HPU), new sensors (an RGB camera, a depth camera, 4 visible light cameras, and an Inertial Measurement Unit (IMU)), and additional functionalities such as hand gestures and eye tracking.The user experience is further enhanced with a larger field of view (52°), improved battery life (3 h), and reduced weight (566 g) [38,39].
The HPU is a custom-designed ARM-based co-processor, developed to handle tasks related to data streams coming from all of HoloLens' sensors.It is designed to offload work from the main processor, providing more efficient processing for the complex workloads involved in rendering holograms.The main processor, a Qualcomm Snapdragon 850 Compute Platform, has an ARM64 architecture and includes both a CPU and a GPU.The CPU is responsible for executing most of the computing instructions, while the GPU takes care of tasks related to rendering graphics [24].An additional feature provided for users and developers is that GPU or CPU usage can be monitored using the Device Portal on the Windows platform, thus making it easier to manage and optimize application performance [40].
Along with the hardware improvements, HoloLens2 introduces a new version of the research mode, a C++ API that allows access to the raw streams of the sensors [41].The research mode, coupled with the availability of benchmark data [42], further supports the use of HoloLens2 for real-time medical applications.Hololens2 spatial mapping (i.e., the ability to scan the surrounding environment in real-time and localize its position within it) and depth sensing have been extensively validated [43].Moreover, the headtracking capability has proven to provide accurate movement parameters for clinical gait analysis [44].The processing capabilities and available tools make it possible to integrate deep learning models into real-time AR medical applications.
Deep Learning Integration on HoloLens2
Lazar [27] compares the performances of several frameworks (Windows Machine Learning (WinML), TensorFlow.NET, TensorFlow.js, and Barracuda) to integrate ML/DL models in HoloLens2.The study's findings suggest that Barracuda is the optimal choice due to its faster inference and ease of implementation.In this study, we systematically assess the inference times of both WinML and Barracuda for a broader range of models.Our aim is not only to compare the two frameworks but also to examine the relationship between their performances and the complexities of different models.
To conduct our analysis, we developed two applications, one using WinML and one using Unity Barracuda.Both execute DL models in Open Neural Network Exchange Model (ONNX) format.Subsequently, the applications are deployed on HoloLens2, where the inference times are acquired for later processing.The pipeline for integrating DL models on HoloLens2 is depicted in Figure 1.
Open Neural Network Exchange Model
The Open Neural Network Exchange (ONNX) is an open-source standard for representing machine learning models.It was introduced by Microsoft and Facebook with the goal of promoting interoperability within the machine-learning ecosystem [45].This flexibility is achieved by defining a set of standardized operators-individual computations that make up the layers of a neural network-and opsets, which represent specific versions of these operators [46].This standardization enables the same ONNX model to be run across different hardware and software environments.
In fact, a ML/DL model trained with one machine learning framework may not be compatible with another one (or may produce different results).By exporting the pre-trained model to ONNX, the model can be used in different projects by using the right execution tools.In this study, WinML and Barracuda represent the tools to execute pre-trained ONNX models on HoloLens2.
Machine learning frameworks such as TensorFlow, PyTorch, and Keras have native support for exporting to ONNX, and allow flexibility in the ONNX versioning.However, the choice of the ONNX opset version strictly depends on the specific Windows build targeted [47].Moreover, not all versions of ONNX opset are compatible with all execution tools (i.e., WinML and Barracuda) and some operators are not supported at all.
Unity Barracuda
Barracuda is a powerful machine learning inference library developed by Unity Technologies, designed for running DL models directly within Unity applications.Barracuda functionalities can be used in Unity applications by simply downloading the package from the Unity Package Manager [48].To deploy the Unity application on HoloLens2, it must first be built as ARM64 Universal Windows Platform (UWP) app.Then, as every UWP application, it can be deployed using Visual Studio [49].
Although Barracuda is highly flexible and versatile, it currently does not support all model architectures and ONNX operations.However, the library effectively supports MobileNet v1/v2, Tiny YOLO v2, and U-Net models [50], which provide robust capabilities for a broad range of applications.
Barracuda can operate on a variety of device types, including CPU and GPU.In this study, we employed the worker type ComputedPrecompiled to execute the DL model on the HoloLens2 GPU.This worker precomputes certain tasks, which optimizes the model's performance and allows for efficient utilization of the GPU resources [51].
Windows Machine Learning
Windows Machine Learning (WinML) is a Microsoft API that enables developers to run ML/DL models natively on Windows devices, including the HoloLens2 [52].It comes with the standard Windows 10 SDK, which can be installed in Visual Studio through the Visual Studio Installer.In this study, the API was used in C# UWP applications, which, once built for ARM64 platform, were deployed on HoloLens2 [53].WinML is well-known within a broad user community, supports many DL architectures and offers comprehensive documentation.
Similarly to Barracuda, WinML can utilize different hardware resources for model execution (CPU and GPU), as the API allows the selection of the device to evaluate the model on.For this study, the DirectXHighPerformance device was selected for execution.DirectXHighPerformance refers to the most powerful GPU available on the system.This choice allowed us to maximize the high-performance HoloLens2 capabilities for inference computations [54].
In this study, WinML was used in UWP applications, due to the straightforward implementation.However, given Unity's capability to support a multitude of platforms, it is worth mentioning that the use of WinML in Unity applications is possible.Such integration requires the use of Dynamic Link Libraries (DLLs), potentially decreasing the ease of implementation and debugging.
Evaluation Metric: Inference Time
We chose the inference time as the metric to evaluate the performances of WinML and Barracuda.Inference time refers to the duration between executing a model on a single image, measured from the start to the end of the execution.In our experiments, we simulate the typical use case scenario in which the model integrated into HoloLens2 processes real-time images captured by the HoloLens2 colored camera, one image at a time.
To measure the inference time, we utilize the C# Stopwatch class [55].The stopwatch is started immediately before the model execution and stopped right after its completion.For WinML, the execution code is as follows: output = await model .EvaluateAsync ( image ) ; This code invokes the EvaluateAsync method, which internally calls the CreateFrom-StreamAsync method, where the actual inference is performed [54].
For Barracuda, the inference time is calculated as the duration of executing the following lines of code: output = engine .worker .Execute ( image ) .PeekOutput ( ) ; engine .worker .FlushSchedule ( t r u e ) ; The FlushSchedule method with flag set to True is needed in order to block the main thread until the execution is complete [51].
By measuring the inference time using these methods, we can accurately assess the performance of WinML and Barracuda in terms of execution speed.
Experimental Design
The experimental study consists of two phases.In the first phase, we systematically assess the performance of WinML and Barracuda in terms of inference time for CNN models with increasing complexities.In the second phase, we evaluate the inference times of both frameworks for State Of The Art (SOTA) DL models.
To measure the mean inference time for all models, we employ two template applications, one for each framework.These applications read 200 images stored in a local folder and perform model inference on each image using a for loop.After completing the loop, the inference times are recorded in a .txtfile.Each experiment is repeated 5 times, resulting in 1000 samples of inference times for each framework and model.
Impact of Model Complexity on Inference Time
To investigate the performances of WinML and Barracuda, we created multiple DL models with varying complexities.The architecture of the models is composed of stacked convolutional layers; the input layer size is 256 × 256 × 3, and the consequent convolutional layers have a kernel size of 3 × 3.By adjusting the number of layers, i.e., the depth of the model, and the number of filters, we were able to create models with different architectures.Each model was then exported to ONNX format.To evaluate the computational complexity of the models, we determined the number of Multiply-Accumulates (MACs) and the number of parameters.We created two distinct groups of CNN models:
•
Group A. The models belonging to group A have similar complexity in terms of MACs and parameters, but different depths and number of filters (see Table 2).• Group B. The models belonging to group B have the same depth, but increasing MACs and number of parameters (see Table 3).
By following this methodology, we aimed to provide a comprehensive analysis of DL model variations in terms of size, computational complexity (MACs and parameters), and architecture (depth and number of filters).
Table 2. Specifications of Group A ONNX models, listed by their unique identifier (ID), model size in megabytes (Mb), total number of parameters (# params), Multiply-Accumulates (# MACs), number of filters (# filters), and depth of the network (depth).The ID of each model indicates its depth, with A D1 representing a depth of 1, A D10 a depth of 10, and so on.
Feasibility of SOTA Models Integration
The second phase of the experimental study aims to assess the performance of WinML and Barracuda in executing state-of-the-art (SOTA) models on the HoloLens2 device.To conduct this evaluation, we selected a subset of the models listed in Table 1: • Lenet-5, from the work of Lazar [27].The model was implemented using a public GitHub repository [56] and trained on the MNIST dataset, a large database of handwritten digits [57].The model is trained to recognize digits between 0 and 9 from grayscale images of 32 × 32 pixels.• SurgeonAssist-Net [28].The model infers the poses of a drill and of the surgeon's hand from RGB images of 224 × 224 pixels.The ONNX model, pre-trained on the Colibrì dataset [58], is available in the official GitHub repository of the paper [28].
The model version "PyTorch_1.4" was used in this study.
• HMD-EgoPose [29].The model predicts the surgical phase from RGB images of 256 × 256 pixels.The ONNX model, pre-trained on the Cholec-80 dataset [59], is available in the official GitHub repository of the paper [29].• Yolov4-Tiny [34,36].The model performs object detection in RGB images of 416 × 416 pixels.For our study, we utilized a pre-trained version of the model on the Pascal Visual Object Classes (VOC) dataset [60], which is available in a public GitHub repository [61].The model was trained to detect 20 different classes.
In addition, we assessed the inference time of two models that were not found in our literature review:
•
Resnet50 model [62] for 2D Human Pose Estimation (HPE).The model estimates the 2D poses of multiple people from RGB images with variable sizes (in this study, the input of the model are images of 256 × 256 pixels).Mills [63] provides a pre-trained ONNX model in a public GitHub repository.• Unet model.The pre-trained ONNX model was obtained from a public GitHub repository [64].The model performs semantic segmentation of RGB images (256 × 256 pixels).The model version "u2netp.onnx" was used in this study.
We maintained consistency by utilizing the same template applications and methodology (5 repetitions of 200 images for each model) as in the first experimental phase.In order to minimize additional factors that could contribute to the inference time and application FPS (Frames Per Second), we minimized the image processing and post-processing steps, and rendering was intentionally avoided.
Results
This section presents the outcomes of the experiments outlined in Section 3.
Impact of Model Complexity on Inference Time: Results
Tables 4 and 5 present the average inference times for models in Group A and Group B, respectively.The tables provide a comparison of each model's performance, with inference times and standard deviations indicated for both WinML and Barracuda.Figures 2 and 3 present the corresponding bar diagrams.
The Pearson correlation test was applied to evaluate the linear relationship between model depth and inference time across the Group A models.For WinML, the Pearson correlation coefficient was 0.72, with a statistically significant p-value (<0.005).For Barracuda, the Pearson correlation coefficient was 0.99, with a statistically significant p-value (<0.005).
The Pearson correlation test was applied to evaluate the linear relationship between model MACs and number of parameters, separately, and inference time across the Group B models.For both WinML and Barracuda, the Pearson correlation coefficient between MACs and inference time, as well as the correlation coefficient between the number of parameters and inference time, were 0.99, with statistically significant p-values (<0.005).In this analysis, we compare the inference times of WinML and Barracuda for all the models (Group A and Group B).For each model, we considered the five repetitions of the two frameworks as two independent samples.Due to the non-normality of the data (the normality check was performed with the function "normaltest" of scipy Python library [65], which is based on D'Agostino and Pearson's [66]) the non-parametric Mann-Whitney U test was applied.In all cases, the test returned a p-value less than 0.005, revealing a statistically significant difference between the inference times of WinML and Barracuda across all models.
Following the statistical test, we computed the mean ratio of Barracuda's inference time to that of WinML for each model.Barracuda outperformed WinML for the majority of the models.The mean ratio was less than 1 for 7 out of 8 models, implying a faster inference time for Barracuda.Conversely, only one model (B P4_M9 ) exhibited a mean ratio exceeding 1, implying a faster inference time for WinML in that instance.
Feasibility of SOTA Models Integration: Results
Table 6 reports the mean inference times recorded when executing the SOTA models with both WinML and Barracuda.However, it was not possible to test the SurgeonAssist-Net model with Barracuda, as the model includes Gated Recurrent Units (GRUs) which are not supported.Our results can be compared with the inference times reported by the original authors: • SurgeonAssist-Net.Our results show higher inference times with WinML on GPU than the original authors reported for WinML on CPU, which they reported as 219 ms in the paper and estimated between 200 and 350 ms in their GitHub repository.• HMD-EgoPose.We recorded inference times of around 1 s when using WinML on GPU, similar to the inference times reported by the author in CPU.However, when executed with Barracuda, the recorded inference times were notably shorter at 384 ms.• Yolo-v2Tiny.We recorded inference times of 1.3 s using WinML on GPU, comparable with the literature (the inference times reported by von Atzigen [34] and Hamilton [36] are, respectively, 900 ms and 1.7 s).Using Barracuda, the inference times decrease to 630 ms.For Lenet-5, the authors reported inference times for a batch of 10,000 images, which is not directly comparable with our individual inference times.Regarding Resnet50, we found no reference values as, to the best of our knowledge, no previous works have executed a Resnet50 model directly on HoloLens2.Similarly, for the Unet model, we could not find any reference values.The works we reviewed-the research paper by Zakaria [35] and a GitHub repository [64]-did not provide such information.
Discussion
Our results indicate that both the complexity of deep learning models and the choice of framework significantly influence inference time.We investigated the impact of model complexity, and found a strong positive correlation between model depth, number of parameters, MACs, and inference time.These findings align with theoretical expectations and prior research: (more) complex models generally require more computational power and thus, more time to infer.We also found that Barracuda consistently outperformed WinML, except for one of the tested models (B P4_M9 ).This may be due to differences in the framework implementations that are beyond the scope of this study.
Table 6 shows striking examples of performance improvement when opting for Barracuda over WinML.In particular, the HMD-EgoPose model (originally deemed unsuitable for real-time applications due to an inference time of 1 s with WinML) showed an improved speed of 384 ms.During surgery, a lag or delay in recognizing the drill's pose can interfere with the precision of the incision.With Barracuda, the inference speed for HMD-EgoPose nearly tripled compared to WinML, greatly enhancing its potential utility in surgical procedures.Another compelling example is the inference time achieved with Barracuda for the YoloV2-Tiny model.In our tests, Barracuda registered an inference time of 630 ms, which is less than half of WinML's 1330 ms.Notably, the inference of Barracuda is considerably faster than that previously reported in the literature, with one study [34] reporting 1 s, and another [36] reporting 1.7 s.Von Atzigen [34] successfully employed YoloV2-Tiny for the detection of implanted pedicle screw heads during spinal fusion surgery.However, the authors acknowledge that the low application frame rate is a limitation of their study.As for HMD-EgoPose, adopting Barracuda could potentially help translate von Atzigen's [34] proof-of-concept application in clinical practice.
Our results, in line with the works of Lazar [27] and Bohné [37], strongly suggest exploring Barracuda as an inference framework.While WinML may support a broader range of DL architectures, Barracuda allows for faster model inference and is easier to integrate in Unity applications-a valuable feature given Unity's support for the development of Apple Vision Pro applications [67].Our results suggest that, by adopting Barracuda to execute DL models directly on HoloLens2: (i) high application frame rates (>30 fps) can be achieved by models with MACs less than 10 7 , such as Lenet5; (ii) more complex models, such as EfficientNetB0, are likely to yield an application frame rate of only a few fps; (iii) models with a number of MACs of the order of 10 10 , such as Resnet50 and Unet, will likely exhibit inference times of the order of seconds.
Limitations and Future Research
Despite our results demonstrating the feasibility of integrating SOTA models into HoloLens2, there are several study limitations.Firstly, it is limited to a specific set of DL models and conditions.Besides model complexity and framework choice, software versions can also greatly influence inference time.Table 6 reveals a discrepancy in the inference times of SurgeonAssist-Net using WinML between our study and that of the original authors.It is possible that the authors explored a range of builds and versions to fine-tune performance, an approach we did not adopt in our analysis.Secondly, while the inference time represents the execution speed of the models, the overall application frame rate can be influenced by other factors (e.g., rendering and image processing).
It is also important to acknowledge that, although the execution of SOTA models is faster with Barracuda, it is not yet adequate for all applications.A relevant example is HPE; performing real-time (>30 fps) HPE on HoloLens2 could enable physiotherapists to intuitively visualize the motion parameters of their patients, such as their range of motion, rather than relying solely on 2D screen displays.However, Resnet50 yielded an inference time of 700 ms, corresponding to 1.4 fps (Table 6).Moreover, executing SOTA models on-device may not be feasible for image-guided surgery AR applications requiring high frame rate and real-time feedback.However, the performances of SOTA models can still be adequate for surgical planning, needle insertion, and medical training.
Future studies should explore optimization techniques (e.g., post-training quantization [68]) for faster inference, and quantify their impact on model accuracy.In addition, newer DL architectures (e.g., Vision Transformers [69]) should be investigated.Executing DL models in Unity applications using Barracuda can ease the transition from HoloLens2 to future AR HMDs-as the upcoming Apple Vision Pro-broadening the horizon for real-time medical applications.
Conclusions
In conclusion, this study presents a systematic evaluation of the influence of model complexity for deep learning models running directly on HoloLens2.Additionally, we compared the performances of two popular inference frameworks-Windows Machine Learning and Unity Barracuda.Our results showed that model complexity in terms of depth, parameters, and MACs positively correlates with inference time.Furthermore, we found significant differences in the performance of WinML and Barracuda frameworks, with Barracuda generally yielding faster inference times.With our work, we sought to provide technical guidance and reference values for future HoloLens2 applications that aim to execute DL models directly on the device.
Figure 1 .
Figure 1.Overview of the pipeline for integrating DL models on HoloLens2.
3. 3 . 3 .
Software and Library Versions The HoloLens2 device used in our experiments was running on OS build 20348.1543.The C# UWP apps that utilize WinML were developed using Visual Studio Community 2019, with Windows 10 build 19041 (Version 2004).WinML is part of the Windows 10 SDK (MSVC v142).The Unity projects that employ Barracuda have an editor version of 2021.3, and use Barracuda version 3.0.The CNNs models were created using a custom-made Python script (Python version 3.9, TensorFlow library version 2.12, ONNX version 1.14).The library "onnx_tools" (version 0.3) was used for ONNX model profiling.All ONNX models have opset version 10, except for Lenet-5, SurgeonAssist-Net and Unet, which have opset version 9, and Yolov4-Tiny model, which has opset version 8.
Figure 2 .
Figure 2. Comparison of mean inference times for models of Group A. The bars represent the average inference time across five repetitions of 200 images with each model.The values on top of each bar indicate the mean inference time and the average standard deviation (in milliseconds) across the five repetitions.The y-axis is in logarithmic scale.
Figure 3 .
Figure 3.Comparison of mean inference times for models of Group B. The bars represent the average inference time across 5 repetitions of 200 images with each model.The values on top of each bar indicate the mean inference time and the average standard deviation (in milliseconds) across the five repetitions.The y-axis is in logarithmic scale.
Table 1 .
Overview of AR applications executing ML/DL models directly on Microsoft HoloLens2.
1Inference time for a batch of 10,000 images.
Table 3 .
Specifications of Group B ONNX models, listed by their unique identifier (ID), model size in megabytes (Mb), total number of parameters (# params), Multiply-Accumulates (# MACs), number of filters (# filters), and depth of the network (depth).The ID of each model indicates the logarithm (base 10) of the order of magnitude of its number of parameters and MACs (e.g., B P2_M7 represents a model with a number of parameters of the order of 10 2 , and a number of MACs of the order of 10 7 ).
Table 4 .
WinML and Barracuda inference times for Group A models.All models have similar values of MACs and number of parameters (see Table2).
Table 5 .
WinML and Barracuda inference times for Group B models.All models have a depth of 10.
Table 6 .
WinML and Barracuda inference time with SOTA models.The "Lit.WinML" column presents the inference times as reported by the original authors. | 2023-10-27T15:22:00.813Z | 2023-10-25T00:00:00.000 | {
"year": 2023,
"sha1": "a27f1452c6c4fb1e409e0bdcc526135f71906cc7",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/23/21/8698/pdf?version=1698219843",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "2f939da6e24611eeed554044a7ed0022e2a569d6",
"s2fieldsofstudy": [
"Medicine",
"Computer Science",
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
55406262 | pes2o/s2orc | v3-fos-license | Rules of Engagement : The Why , What , and How of Professional Engagement for Pharmacy Students
The development of student pharmacists must include inspiring within them a sense of engagement in their profession. This paper provides the rationale for and the potential implications of this concept, as well as an overview of experiences with professional engagement in student pharmacists at one college of pharmacy. Curricular-based experiences and research will be discussed, including the insights gained and suggestions for developing and encouraging professional engagement. Lastly, this article provides future direction for furthering professional engagement on both the conceptual and curricular level. Introduction When our members fail to be engaged in the profession, our profession has failed. Engagement refers to one’s physical involvement, cognitive vigilance, and emotional connectedness with something, such as one’s profession. Expanding beyond prior conceptions such as work or school engagement, professional engagement encompasses the unique typology of engagement that a professional has within one’s profession. Engagement in a profession is broader and more abstract; whereas a workplace or school are concrete structures and have roles that are easily defined, a profession is tangible only as defined by itself, society, and the professional. In student pharmacists, professional engagement captures their distinct engagement within the profession as they undergo professionalization, becoming acculturated and adopting the professional ethos of pharmacy. This article describes the importance of, and our experiences with professional engagement in student pharmacists at one college of pharmacy. This emerging line of inquiry involves a novel extension of previous conceptualizations of engagement. We describe the unique ramifications of Corresponding Author: Benjamin D. Aronson, PharmD Department of Pharmacy Practice and Pharmaceutical Sciences, University of Minnesota College of Pharmacy 232 Life Science, 1110 Kirby Drive, Duluth, MN 55812 Phone: 218-726-6067; Fax: 218-726-6500; E-mail: arons071@d.umn.edu professional engagement for the development of student pharmacists and propose that professional engagement should be considered as instructors and schools monitor and evolve the student experience. Specifically, the purpose of this article is to describe why professional engagement is important, what our experiences with professional engagement in student pharmacists have been, and how we move forward. The Importance of Professional Engagement Other conceptions of engagement (e.g., student engagement, work engagement) have been correlated to positive outcomes. The Gallup Model for Student Success posits that engagement has direct and indirect effects on academic success, and indeed student engagement has been linked to higher self-reported and actual academic performance. Similarly, the Job Demands-Resources model posits a positive link between work engagement and job-related outcomes. Engagement has been linked to several work-related outcomes including organizational commitment, job satisfaction, customer loyalty and satisfaction, lower turnover, lower absenteeism, and increased productivity and profitability. Businesses with the most engaged employees have higher return on assets, profitability, and shareholder value compared to firms with the least engaged employees. Work engagement is related to fewer accidents at work, and for health care workers, fewer patient safety errors. The benefits of work engagement extend beyond just business outcomes. Engaged employees have higher health and wellbeing. 23 Idea Paper EDUCATION http://z.umn.edu/INNOVATIONS 2015, Vol. 6, No. 3, Article 205 INNOVATIONS in pharmacy 2 However, engagement as a professional is quite different than engagement as an employee or student. Workplaces and schools are physical environments. Engagement can be theorized as the employment of physical, emotional, and cognitive resources during role performances. In fact, some measures of engagement examine the availability and utilization of resources in these roles, or the cognitive state experienced while involved in certain tasks. For instance, as a marker of work engagement, we could ask if time flies while a pharmacist is at their job. A profession, on the other hand, is a more abstract notion. One cannot go to the profession, and it is not necessarily something you do. The work of a profession may or may not take place in a workplace. Certainly, one can engage in the profession outside of a workplace, and almost assuredly a professional can be engaged in their work, but not the profession. For student pharmacists, a similar yet more pronounced variant of this quagmire can be seen. They are students, becoming professionals, interacting and learning about the profession in a variety of manners. Student pharmacists are undergoing the process of professionalization, a unique crossroad between student and professional where they exist in a dual-role state. Other conceptions of engagement fail to capture this dynamic state. Furthermore, research in both pharmacy and nursing suggests that scales intended to assess student engagement do not adequately measure this construct in health professions students. In a multicollege multi-year cohort of pharmacy students, the model fit of the National Survey of Student Engagement (NSSE), a measure of student engagement in undergraduate students, was unacceptable. The exact reason for this incomplete fit is uncertain, but presumably was due to the differences between individuals in a professional program and an undergraduate program. In nursing students, NSSE scores differed significantly from those of undergraduate students. It is notable that the NSSE was not designed or intended for use in professional students. Similar to student and work engagement, professional engagement should be correlated with positive outcomes for student pharmacists. For instance, high professional engagement could be associated with better grades, and less attrition. In addition, high professional engagement in student pharmacists may have much broader implications, such as becoming leaders within our professional associations, advocating for the profession, and working tirelessly to advance the profession. If these outcomes hold true, one can certainly see how nurturing professional engagement in students could dramatically impact the profession and ultimately even the health care system. Despite its potential to influence educational and professional outcomes, there is a lack of published material describing, documenting, and declaring the importance of professional engagement. The literature that is available provides a diverse set of definitions and measures, often times drawing upon closely related concepts. In both medicine and teaching, professional engagement has been defined and conceptualized in a variety of ways. It has been used to refer to commitment to organizational climate, satisfaction with professional working conditions, and commitment to organizational objectives and goals. It has been used synonymously with fulfillment, and similarly to professional identity formation. It has been measured using various indicators, such as number of: professional development activities and professional collaborations, professional memberships and contacts, and within-school interactions, outside of own school interactions, and involvement in leadership activities. Other measures have included planned effort, planned persistence, professional development aspirations, and professional leadership aspirations. Likewise, in student pharmacists, when the term ‘professional engagement’ has been used it is generally as a synonym for something else, often times to mean participating as a professional, connecting with others, or using something; for instance, professional engagement with a new patient counseling technique. The term has been used in close connection to community engagement, professionalism, and leadership. These uses of the term professional engagement have not robustly defined, measured, or elaborated on the concept. One tool has sought to measure aspects of professional engagement in pharmacy students specifically. Within the Professionalism Assessment Tool, the “Citizenship and Professional Engagement” domain contains four items assessing both community and professional engagement. However, this small set of items does not capture the complete breadth of professional engagement and may not be adequate to direct curricular improvements. With a more thorough comprehension of this unique variable in development of professional students, educators would have the opportunity to better support student pharmacists in pursuing a sense of significance and enthusiasm for the profession. Experience with Professional Engagement Our awareness and investigation of professional engagement started in 2010, when it was initially identified as a concept with important ramifications for pharmacy students, yet relatively absent from the literature. Through both curricular experiences and research, we have furthered our understanding of professional engagement in student Idea Paper EDUCATION http://z.umn.edu/INNOVATIONS 2015, Vol. 6, No. 3, Article 205 INNOVATIONS in pharmacy 3 pharmacists. These experiences include: a Delphi-process aimed at gaining consensus around a modified definition of professional engagement, ongoing development of an instrument to measure professional engagement, a reflective assignment about professional engagement completed during experiential education, and several classroom discussions of professional engagement. Overview of Curricular and Research Experiences Student Delphi Process. Our initial research in professional engagement began with a modified-Delphi process in 2010. During this process, consensus was sought from highly engaged student pharmacists around a definition of professional engagement, professionally engaging and disengaging activiti
Introduction
When our members fail to be engaged in the profession, our profession has failed.Engagement refers to one's physical involvement, cognitive vigilance, and emotional connectedness with something, 1 such as one's profession.
Expanding beyond prior conceptions such as work or school engagement, professional engagement encompasses the unique typology of engagement that a professional has within one's profession.Engagement in a profession is broader and more abstract; whereas a workplace or school are concrete structures and have roles that are easily defined, a profession is tangible only as defined by itself, society, and the professional.In student pharmacists, professional engagement captures their distinct engagement within the profession as they undergo professionalization, becoming acculturated and adopting the professional ethos of pharmacy.
This article describes the importance of, and our experiences with professional engagement in student pharmacists at one college of pharmacy.This emerging line of inquiry involves a novel extension of previous conceptualizations of engagement.We describe the unique ramifications of professional engagement for the development of student pharmacists and propose that professional engagement should be considered as instructors and schools monitor and evolve the student experience.Specifically, the purpose of this article is to describe why professional engagement is important, what our experiences with professional engagement in student pharmacists have been, and how we move forward.
The Importance of Professional Engagement
Other conceptions of engagement (e.g., student engagement, work engagement) have been correlated to positive outcomes.The Gallup Model for Student Success posits that engagement has direct and indirect effects on academic success, 2 and indeed student engagement has been linked to higher self-reported and actual academic performance.
3-5
Similarly, the Job Demands-Resources model posits a positive link between work engagement and job-related outcomes.
6-12
7][18] Businesses with the most engaged employees have higher return on assets, profitability, and shareholder value compared to firms with the least engaged employees. 19Work engagement is related to fewer accidents at work, 16 and for health care workers, fewer patient safety errors. 17,20The benefits of work engagement extend beyond just business outcomes.Engaged employees have higher health and wellbeing.
INNOVATIONS in pharmacy 2
However, engagement as a professional is quite different than engagement as an employee or student.Workplaces and schools are physical environments.Engagement can be theorized as the employment of physical, emotional, and cognitive resources during role performances. 1In fact, some measures of engagement examine the availability and utilization of resources in these roles, or the cognitive state experienced while involved in certain tasks.For instance, as a marker of work engagement, we could ask if time flies while a pharmacist is at their job.A profession, on the other hand, is a more abstract notion.One cannot go to the profession, and it is not necessarily something you do.The work of a profession may or may not take place in a workplace.Certainly, one can engage in the profession outside of a workplace, and almost assuredly a professional can be engaged in their work, but not the profession.
For student pharmacists, a similar yet more pronounced variant of this quagmire can be seen.They are students, becoming professionals, interacting and learning about the profession in a variety of manners.Student pharmacists are undergoing the process of professionalization, a unique crossroad between student and professional where they exist in a dual-role state.Other conceptions of engagement fail to capture this dynamic state.Furthermore, research in both pharmacy and nursing suggests that scales intended to assess student engagement do not adequately measure this construct in health professions students. 24,25In a multicollege multi-year cohort of pharmacy students, the model fit of the National Survey of Student Engagement (NSSE), a measure of student engagement in undergraduate students, was unacceptable. 24The exact reason for this incomplete fit is uncertain, but presumably was due to the differences between individuals in a professional program and an undergraduate program.In nursing students, NSSE scores differed significantly from those of undergraduate students.
25
It is notable that the NSSE was not designed or intended for use in professional students.
Similar to student and work engagement, professional engagement should be correlated with positive outcomes for student pharmacists.For instance, high professional engagement could be associated with better grades, and less attrition.In addition, high professional engagement in student pharmacists may have much broader implications, such as becoming leaders within our professional associations, advocating for the profession, and working tirelessly to advance the profession.If these outcomes hold true, one can certainly see how nurturing professional engagement in students could dramatically impact the profession and ultimately even the health care system.
Despite its potential to influence educational and professional outcomes, there is a lack of published material describing, documenting, and declaring the importance of professional engagement.The literature that is available provides a diverse set of definitions and measures, often times drawing upon closely related concepts.In both medicine and teaching, professional engagement has been defined and conceptualized in a variety of ways.It has been used to refer to commitment to organizational climate, 26 satisfaction with professional working conditions, 27 and commitment to organizational objectives and goals. 28It has been used synonymously with fulfillment, 29 and similarly to professional identity formation. 30It has been measured using various indicators, such as number of: professional development activities and professional collaborations, 31 professional memberships and contacts, 32 and within-school interactions, outside of own school interactions, and involvement in leadership activities. 33Other measures have included planned effort, planned persistence, professional development aspirations, and professional leadership aspirations.
34,35
Likewise, in student pharmacists, when the term 'professional engagement' has been used it is generally as a synonym for something else, often times to mean participating as a professional, connecting with others, or using something; for instance, professional engagement with a new patient counseling technique.The term has been used in close connection to community engagement, 36 professionalism, 37,38 and leadership. 39,40These uses of the term professional engagement have not robustly defined, measured, or elaborated on the concept.One tool has sought to measure aspects of professional engagement in pharmacy students specifically.Within the Professionalism Assessment Tool, the "Citizenship and Professional Engagement" domain contains four items assessing both community and professional engagement. 37However, this small set of items does not capture the complete breadth of professional engagement and may not be adequate to direct curricular improvements.With a more thorough comprehension of this unique variable in development of professional students, educators would have the opportunity to better support student pharmacists in pursuing a sense of significance and enthusiasm for the profession.
Experience with Professional Engagement
Our awareness and investigation of professional engagement started in 2010, when it was initially identified as a concept with important ramifications for pharmacy students, yet relatively absent from the literature.Through both curricular experiences and research, we have furthered our understanding of professional engagement in student pharmacists.These experiences include: a Delphi-process aimed at gaining consensus around a modified definition of professional engagement, 41 ongoing development of an instrument to measure professional engagement, a reflective assignment about professional engagement completed during experiential education, and several classroom discussions of professional engagement.During a professional development course for first year pharmacy students, one session was dedicated to professional involvement.In order to stimulate discussion, the above instrument was given to students prior to the class session, along with an open-ended question asking about insights gleaned while taking the instrument.In 2015, a session was dedicated to the importance of professional engagement in development.Prior to the session, students provided examples of their most engaged moment from their first year.Pre-session assignments were reviewed by the authors to identify any trends in student responses.
Overview of Curricular and Research Experiences
In addition, a two-hour session of an elective class on wellbeing focused on professional engagement.Two preassignments required students to reflect upon their experiences with professional engagement, the process of professional engagement, and connections to wellbeing.During discussion, these points were discussed at length.
Insights and Lessons Learned
Our cumulative work with professional engagement, although preliminary, has engendered a number of realizations and considerations surrounding this concept in student pharmacists.For the most part, these insights and lessons learned have come from a combination of our experiences, and indeed observations from our earliest encounters have been cemented and reinforced by more recent ones.
Characteristics vs. Activities.It appears that this cycle can be initiated, or professional engagement may be encouraged, through participation in activities with certain professionally engaging characteristics.
Discussion on what stimulates or grows professional engagement with a group of students in an elective course focused on this cycle, as did responses from first year students reflecting on their involvement over their first semester.Both groups indicated that when these initiating experiences do not occur, professional engagement may be stunted or not occur at all.Colleges of pharmacy play an important role in stimulating professional engagement by ensuring these experiences occur for students within the curriculum and the extra-curricular opportunities available for the student body.
Ties to Strengths.The process by which students become engaged is further revealed through instances where strengths are applied.Students feel engaged in the profession when they feel they fit, and when they apply their strengths to do something they are good at in the profession.Review of fourth year student reflections shows that instances of high engagement include times when they realized a hidden talent, unearthed a new passion, or exercised areas of expertise in a manner consistent with their personal strengths.5][46] Examples detail times when they felt they utilized their unique interests and skills to excel.These observations are consistent with the Gallup Student Success Model, 2 which posits that strengths influence engagement.Benefits of strengths training for student pharmacists and approaches have been detailed elsewhere. 39,47As such, faculty have the ability to aid students in finding their strengths and unique place in the profession, and in turn, improve their professional engagement.
Professional Identity Formation.The mechanism whereby students become professionally engaged appears to be shared with that of identity formation.Professional identity formation can be conceptualized as a process involving students observing, experimenting, and evaluating possible selves within the profession through the experiences of a professional education program. 48Similarly, there is a common thread across all of our work with professional engagement.The activities that students identify as professionally engaging are often those in which they are experiencing, witnessing, or roleplaying a professional role.
As students begin pharmacy school, instances that are professionally engaging allow them to connect with, experience, and 'try on' roles of the profession.At this point in their careers, these experiences are a gateway to identity formation.In other words, professional engagement may play an important role in developing professional identities.
That being said, it is important to recognize that not all identity formation will be professionally engaging; students may experience roles that disengage them, influencing the formation of their professional identity but not instilling the state of mind consistent with professional engagement.It is vital to maximize the opportunities for students to develop their professional identity through experiences that are professionally engaging.As such, designing curricular experiences, and ensuring the presence of engaging characteristics of those activities may aid both professional engagement and identity formation.
Transitioning to practitioner.As students transition from classroom education, to experiential education, to practitioner, we observe important differences in their professional engagement.An interesting pattern noted in fourth year reflections is the decreased emphasis on certain types of engaging activities (e.g., health fairs, professional meeting attendance), and a focus on the moments within their experiential education (i.e., practice related experiences).At the transition of student to practitioner, professional identities are frequently observed to be in flux. 48,49Prior literature in pharmacy has described a dissonance between the professional identities and roles learned in the curricular and experiential context, with the former being idealistic and the latter grounding them. 50It is understandable that students at this stage of their career may relate more to their future role, and less to their student role.During this transition, colleges and schools can help students discover what will professionally engage them as their roles and environments change from that of a student.This transition also illuminates a gap in our knowledge and experience with professional engagement.Our experiences have centered on student pharmacists, and we have noted a change in what is professionally engaging for transitioning students; what is yet to be known is how exactly this construct operates in pharmacists.
A State of Mind.Using the conceptual definition above, a distinction can be made between engagement as a cognitive affective state of mind, rather than simply participation, or 'engagement' in an activity or experience.In our experience, students commonly mistake participation as engagement, initially.However, upon further probing it becomes clear that professional engagement is a cognitive-affective process, consistent with other conceptions of engagement, 42 driven by the cycle suggested above.It is vital that instructors understand the significance of this distinction as they attempt to guide students in planning for future engagement.
Planning cannot be purely activity oriented (e.g., joining an organization does not mean you will automatically be professionally engaged).Instead, goal setting and planning must involve an understanding of the characteristics that engage the emerging practitioner and opportunities for experiences containing those characteristics.
Advancing Professional Engagement
To advance professional engagement, both conceptual development and curricular development are needed.
Conceptual Development.An instrument to measure professional engagement is currently being tested and refined.This work will validate the measure and determine its psychometric properties.Once validated, this instrument can be used to improve our conceptual understanding.The relationships between professional engagement and other concepts (e.g., professionalism, burnout) and outcomes (e.g., grades, professional activity participation, professional membership, professional commitment, wellbeing) can be investigated.Furthermore, the connection between professional identity development and professional engagement should be further elucidated through this research.This tool will also allow the investigation of the impacts of interventions to improve professional engagement.While difficult to measure certain downstream effects during students' careers, several long-term outcomes would be worthy of investigation.Future research could answer the question: Does professional engagement as a student pharmacist impact broader outcomes as a practitioner (e.g., efforts to advocate for the profession, efforts to advance the profession, working in advanced practice models)?With a stronger understanding of the inter-relationships, opportunities for synergistic interventions may emerge.
In addition, the process by which students become engaged in the profession should be explored.Development of a model is needed to elucidate the mechanism, correlates, and outcomes of professional engagement.Future qualitative work will help to reveal these underlying processes and connections with other concepts, while quantitative research will help to understand how the measurements of these various concepts are structurally related.By developing a model, pharmacy educators can tailor interventions to improve professional engagement.
Lastly, it is of utmost importance to acknowledge that this work has focused on student pharmacists.We believe the construct of professional engagement may operate differently in practicing pharmacists.Students are entering the profession and through experiences become connected to the profession, whereas practicing pharmacists are immersed in (or at least have access to) elements of the profession on a daily basis.Conceptually, it appears that changes occur as students transition from student to pharmacist, evidenced through fourth year reflections.More work is needed to understand this transition point, and how professional engagement operates in pharmacists.
Curricular Development.Support for extracurricular opportunities from administration should increase the chance of promulgating professional engagement, yet this strategy alone assuredly does not guarantee that all students will become engaged.Ideally, students would be guided to professional engagement through intentional design by professionally engaged faculty, preceptors, and upper-class students.Building a culture that recognizes and values the importance of optimistic peers, progressive role models and personalized attention to development is needed.To begin, instructors can introduce the concept as a cognitive affective state of mind and describe its characteristics.As an example, we have done this at the end the first professional year by asking students to come to class having identified their most professionally engaged moment as a student pharmacist, asking them to share it with a colleague, and fostering a class discussion on the concept.Through this conversation students reflect past the specific activity or event to determine the underlying elements that they found professionally engaging.This process allows a personal understanding of why it was significant and what it means.As students are planning for their professional development, emphasis could be placed on setting goals and selecting options after having given consideration to previous experiences that have resulted in feelings of professional engagement.As curricular and co-curricular experiences are designed, attention should be given to providing a breadth of initiating experiences.These experiences should be consistent with the characteristics of professional engagement, giving attention to opportunities for: positive feelings about the profession, feelings of growth, relationships within the profession, relationships with patients and other professionals, representing the profession and performing as a professional, role models displaying model behavior, helping others, and doing something to advance the profession. 27Given that student organizations are vital in providing these opportunities, professional engagement is an ideal area partnering between student organization officers and faculty.Students may need support in identifying why and how some experiences lead to this cognitive affective state.Working together, faculty and officers could design and incorporate debriefings and/or reflective writing to aid students in this analysis.Once professional engagement has been initiated, asking students "what stimulates and grows professional engagement?" may be helpful in creating consciousness of the concept.As students progress, attention to the use of one's talents, emerging interests and evolving identity becomes important in supporting continued advancement of a student's professional engagement.Given that one student may not be engaged by the same activities as another, examination of their engagement may be prompted by questions like "why was this activity/role professional engaging for me?" Lastly, we need to help prepare students to deal with the dissonance that can be felt as they transition from student to professional, and give them tools to remain engaged in the profession after graduation, when the opportunities and challenges are different.Additional investigation of methods to support student professional development is needed.Ultimately, a graduate emerges from this process and we would like that the graduate is professionally engaged.
Summary
Our experiences with professional engagement have affirmed its importance for student pharmacist development and provided us with preliminary tools and approaches for encouraging it.Despite the breadth of knowledge and experience that has been gained, further work is needed.Conceptually, it is necessary to measure professional engagement, to develop a model explaining the underlying process in student pharmacists, and to understand how engagement operates as students become practitioners.Professional engagement should be considered as instructors and schools monitor the student experience and evolve curricula.Classroom and curricular advances are needed to identify methods that foster professional engagement, and to improve student ability in assessing and planning for engagement particularly as roles change in the transition to practitioner.Colleges and schools will directly benefit from stronger professional engagement, as will current and future employers and the profession.
Student Delphi Process.Our initial research in professional engagement began with a modified-Delphi process in 2010.During this process, consensus was sought from highly engaged student pharmacists around a definition of professional engagement, professionally engaging and disengaging activities, and characteristics of those activities.
41Beginning with a definition of work engagement,42professional engagement was defined as: an energizing state of mind towards one's profession characterized by high energy, involvement with a sense of significance, enthusiasm, inspiration, pride, and being happily engrossed in one's profession.This conceptual definition can guide research and inform curricular experiences designed to build professional engagement in student pharmacists.Insights from this initial research shaped subsequent instrument development, reflective assignments, and classroom discussions.Instrument Development.Drawing upon qualitative data from the first round of the modified-Delphi process, an instrument to measure professional engagement in student pharmacists is currently being developed.Item development began in 2012, with iterative versions of an instrument administered in 2013 to a cohort of third year students, and an expanded version in 2014 to a cohort of first year students.A factor analysis of this preliminary instrument yielded six factors, illuminating distinct areas of professional engagement.These preliminary factors are: 1) Meaning: having purpose in work as a student pharmacist; 2) Excitement: passion, excitement, and pride in profession; 3) Student Reflections on Professional Engagement.Based upon findings from the Delphi study, it was deemed important for development to have students reflect upon their professional engagement.Since 2012, students in their final year of the | 2019-03-16T13:03:56.352Z | 2015-01-01T00:00:00.000 | {
"year": 2015,
"sha1": "410477e10aadd833db43a531938c2d13c4f92c45",
"oa_license": "CCBYNC",
"oa_url": "https://pubs.lib.umn.edu/index.php/innovations/article/download/387/381",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "a2a916988ad3a792d1b6a8a0bf67f08e81713775",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Psychology",
"Medicine"
]
} |
250174266 | pes2o/s2orc | v3-fos-license | Hand hygiene practices during the COVID-19 pandemic and associated factors among barbers and beauty salon workers in Ethiopia
Coronavirus disease-2019 (COVID-19) is still causing morbidity and mortality all over the world. Preventive measures such as wearing a facemask, social distancing and hand hygiene continue to be the only options available in countries such as Ethiopia where vaccines are not yet widely available. Hand hygiene is one of the easiest and cheapest preventive measures, and one that is especially important for barbers and beauty salon workers who are widely exposed to the virus due to their contact with many customers. Therefore, measuring the proportion of good hand hygiene practices and associated factors among barbers and beauty salon workers may provide essential guidance in the development of effective interventions to improve COVID-19 prevention measures. A facility-based cross-sectional study was conducted among 410 barbers and beauty salon workers in Dessie City and Kombolcha Town from January 5 to February 10, 2021. The study participants were selected using a simple random sampling technique. A structured questionnaire and an observational checklist were used to collect the data. The collected data were entered into EpiData version 4.6 and analysed using Statistical Package for Social Sciences (SPSS) version 25.0. Logistic regression analysis using bivariate and multivariable logistic regression models was employed. From the bivariate analysis, variables with p <0.25 were retained into multivariable logistic regression analysis. Finally, from the multivariable analysis, variables that had a p-value < 0.05 were declared as factors significantly associated with good hand hygiene practices. Of the total 410 barbers and beauty salon workers, 52.9% [95% CI: 48.3–57.6] had good hand hygiene practices whereas 47.1% [95% CI: 42.4–51.7] had poor hand hygiene practices. From the total respondents, more than half 250 (61%) were male and 160 (39%) were female, with a mean age of 27.42 ±7.37 years. Out of 410 barbers and beauty salon workers, 73.7% had good knowledge about COVID-19 and 59.5% had a positive attitude towards taking precautions against COVID-19. Female sex (AOR = 2.17, 95% CI:1.29–3.65), educational level of college or above (AOR = 5.53, 95% CI:2.85–10.71), positive attitude towards taking precautions against COVID-19 (AOR = 2.4, 95% CI:1.46–4.17), belief in the effectiveness of hand hygiene practices (AOR = 3.78, 95% CI:2.18–6.55) and presence of a hand-washing facility with soap and water (AOR = 5.55, 95% CI:3.28–9.40) were factors significantly associated with good hand hygiene practices among barbers and beauty salon workers. The proportion of good hand hygiene practice was not sufficient to combat the virus. Good hand hygiene practice was higher among those with higher educational level, positive attitude towards taking precautions against COVID-19, belief in the effectiveness of hand hygiene practices, presence of a hand-washing facility with soap and water and those of female sex. Thus, improving hand hygiene practices through continued training, especially for those with a lower educational level and for male workers, is recommended. Moreover, government and non-government organizations should work together to provide alcohol-based hand sanitizer at a low cost to those barbershops and beauty salons if there is no access to water and soap.
Introduction
In December 2019, coronavirus infectious disease-19 , caused by the severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) was identified in Wuhan, China [1]. Since then, the virus has spread rapidly around the world, with a huge impact on human health and the world economy. Many countries have struggled to apply various strategies against the pandemic, even while more than 108.2 million infections and over 2.3 million deaths were reported globally as of 14 th February 2021 [2]. It was estimated that due to COVID-19, gross domestic product (GDP) would fall by 2% overall around the world in 2020, 2.5% in developing countries and 1.8% in industrial countries [3].
COVID-19 is highly transmissible, including from asymptomatic individuals [4] through respiratory droplets and by touching a surface or object infected with the virus and then touching eyes, nose or mouth [5,6]. Individuals with COVID- 19 have shown symptoms such as fever, fatigue, dry cough, malaise and breathing difficulty [7][8][9]. The virus can also cause damage to tissues and organs of the infected host and lead to severe disease, including hospitalization, admission to an intensive care unit and death [10]. Severe cases and death have occurred mainly in older adults and people with chronic illnesses such as hypertension, cardiovascular disease, chronic kidney disease and diabetes [11][12][13]. Preventing these adverse impacts requires the application of measures such as hand hygiene, facemask wearing, cleaning and disinfecting frequently touched surfaces, staying home as much as possible and avoiding close contact with others [5].
Studies have also shown that non-pharmaceutical measures such as hand hygiene and facemasks are the easiest and most effective methods to reduce the transmission of respiratory infections [14,15]. Hand hygiene has the potential to reduce the spread of respiratory infection by 16% [16]. It is also been shown that hand hygiene can reduce the transmission of gastrointestinal illness (GI) by 31.0% [17]. Washing hands frequently with water and soap for at least 20 seconds, and when water is not available, using alcohol-based hand sanitizers is crucial in the prevention of COVID-19 [5,18].
Although hand hygiene is the cheapest and most effective method to prevent respiratory disease and GI illness, it was practised with higher compliance rates during the early phase of COVID-19 than more recently [19]. The rate of hand hygiene compliance was also observed to increase after the onset of the COVID-19 pandemic compared with that of pre-COVID-19 times.
A study in Germany showed increased adherence to hand hygiene practices from 47% before to 95% during the COVID-19 pandemic [20]. Similar observations were also made among Polish adolescents whose hand hygiene practices increased from 35.6% to 54.8% during the outbreak [21]. In Pakistan, furthermore, increased hand hygiene practices during the COVID-19 pandemic among healthcare workers has led to a reduction in hospital-associated infections (HAIs) in a hospital setting [22]. Even so, hand hygiene practices were higher during the early phase of the outbreak and then fell again among community members including barbers and beauty salon workers to that of pre-COVID-19 time.
Overall compliance with good hand hygiene practices is low, especially in developing countries where the reported compliance rates is 20.49% [23], while the risk of morbidity and mortality continued in the African region. Africa recorded 2,723, 431 COVID-19 positive cases and 68,294 deaths as of 14 February 2021 with the highest-burden in South Africa followed by Zambia and Nigeria [2].
Ethiopia is one of the developing countries with challenges in facing COVID-19 spread due to mass use of public transportation, a shortage of sanitation material, suspected cases being hidden, a lack of personal protective equipment for health care providers and the presence of immune-compromised people [24]. Although vaccines for the virus are available in some countries, in a resource-limited country such as Ethiopia it may take a long time to reach everyone, so applying preventive measures is the best and only option in tackling the virus.
The first confirmed case of COVID-19 in Ethiopia was registered in early March 2020 and then cases spread throughout the country. To tackle the problem, the government of Ethiopia advocated preventive measures such as avoiding handshakes, reducing the number of passengers riding public transportation by half, keeping adequate physical distancing, providing cleaning and hand washing facilities in every public institution [25]. While Ethiopia has responded to COVID-19 by taking various prevention measures, the virus has continued to cause morbidity and mortality, resulting in 145,704 cases and 2,181 deaths recorded as of 14 February 2021 [2].
Barbers and beauty salon workers are among those at the highest risk of getting COVID-19, but their level of hand hygiene practices is not well known. Thus, this study aimed to assess hand hygiene practices during the COVID-19 pandemic and associated factors among barbers and beauty salon workers in Dessie City and Kombolcha Town, Ethiopia. Understanding these factors may guide strategies related to increasing hand hygiene compliance and preventing the transmission of COVID-19 among barbers and beauty salon workers and the community.
Study area
The study was conducted in Dessie City and Kombolcha Town in South Wollo Zone, one of the twelve zones found in Amhara Regional State. Dessie City is 401 km from Ethiopia's capital City of Addis Ababa, and is at an elevation between 2,470 and 2,550 meters above sea level, while Kombolcha Town is 377 km away from Addis Ababa at an elevation of 1,857 meters above sea level. Based on the 2007 population and housing census projection, the total population of Dessie City was 212,436 and of Kombolcha Town of 126,144 [26].
Study design, period and population
A facility-based cross-sectional study was conducted from January 5 to February 10, 2021 among barbers and beauty salon workers in Dessie City and Kombolcha Town. The source population was all barbers and beauty salon workers in Dessie City and Kombolcha Town. The study population was all systematically selected barbers and beauty salon workers who worked at the barbershops and beauty salons during the study period.
Sample size determination and sampling technique
The single population proportion formula was used to determine the sample size [27]. The study used the assumptions of good hand hygiene practices among barbers and beauty salon workers at 50% since there is no previous study in a similar setting, Zα/2 value 1.96 at 95% confidence interval (CI) and 5% margin of error.
The calculated sample size was 384; after considering a 10% non-response rate the final sample size became 422.
The total sample size was proportionally distributed between Dessie City and Kombolcha Town barbershops and beauty salons. Barbershops and beauty salons were then selected using a systematic sampling technique. One worker was selected randomly from each selected barbershop or beauty salon when there was more than one worker in a given location.
Operational definitions
Hand hygiene practices. Compliance with the practices of cleansing hands with soap and water or with an antiseptic hand rub to remove transient microorganisms from hands [28].
Good hand hygiene practices. Study participants who correctly answered a number of questions greater than or equal to the mean from 11 total questions about hand hygiene practices using clean water and soap or alcohol-based hand sanitizer [29].
Poor hand hygiene practices. Study participants who correctly answered a number of questions fewer than the mean from 11 total questions about hand hygiene practices using clean water and soap or alcohol-based hand sanitizer [29].
Good knowledge. Study participants who correctly answered more than or equal to the mean number out of 14 total knowledge questions [30,31].
Poor knowledge. Study participants who correctly answered fewer than the mean number out of 14 total knowledge questions [30,31].
Positive attitude towards taking precautions against COVID-19. Study participants who scored higher than or equal to the mean out of 10 attitude questions [30,31].
Negative attitude towards taking precautions against COVID-19. Study participants who scored lower than the mean on 10 attitude questions [30,31].
Data collection procedures and quality assurance
An interviewer-administered structured questionnaire and observational checklist were used to collect data. The questionnaire was adapted from previously published articles [21,32,33]. Socio-demographic and economic factors, knowledge about COVID-19, attitude towards taking precautions against COVID-19, behavioral and environmental factors were incorporated into the questionnaire. The questionnaire was originally prepared in English and then translated to the local language Amharic and back to English to ensure consistency.
Prior to the actual data collection, the questionnaire was pre-tested in a group of 5% of the total sample size in Kemisse Town. The result of the pre-test was used to correct some unclear ideas and statements. Four data collectors and two supervisors were involved in the study. One day of training was given to data collectors and supervisors. The data were collected by faceto-face interviews at the worksite and by observing the presence or absence of a hand-washing facility with soap and water and/or alcohol-based hand sanitizer. The questionnaires were checked daily for completeness and consistency by supervisors and the principal investigator.
In addition, data entry errors were controlled through double data entry of a randomly selected 5% of the questionnaires.
Data management and analysis
The collected data were coded and entered into EpiData version 4.6 and exported to SPSS version 25.0 for data cleaning and analysis. Descriptive statistics were calculated to describe the study populations using measures of frequency, percentages and proportions and were displayed using tables. The proportion of good hand hygiene practices among barbers and beauty salon workers was determined by dividing the number of workers with good hand hygiene practices by the total number of study participants.
Due to the binary nature of the outcome variable, binary logistic regression analysis was used. Variables that had a p-value < 0.25 by the bivariate analysis were then analysed by multivariable binary logistic regression to control the potential confounders. From the multivariable analysis, variables that had a p-value < 0.05 were declared as factors significantly associated with good hand hygiene practices. Model fitness was checked using Hosmer and Lemeshow goodness-of-fit-test, finding a p-value of 0.938.
Ethics approval and consent to participate
Ethical clearance was obtained from the ethical review committee of Wollo University College of Medicine and Health Sciences with a protocol number CMHS/544/01/2021. Official permission letters were obtained from Dessie City and Kombolcha Town Health Bureaus. Prior to beginning the study, its purpose was explained to each participant and written consent was obtained from all participants. Participants were made aware that they had full right to participate or not in the study as well as to withdraw anytime during the interview. Confidentiality was also maintained through anonymity. During data collection, the data collectors wore facemasks, used alcohol-based hand sanitizer and kept a minimum of one meter distance from the interviewees to prevent transmission of the COVID-19 virus.
Socio-demographic and economic characteristics
A total of 410 barbers and beauty salon workers were included in this study, from whom we yielded a response rate of 97.0%. More than half 250 (61%) of the barbers and beauty salon workers were male and 160 (39%) were female. Nearly two-thirds 263 (64.1%) of the barbers and beauty salon workers were aged 18-29 years and less than a tenth 36 (8.8%) were �40 years; the mean age was 27.42 years (±7.37SD). A primary-level education was reported by half 206 (50.2%) of the barbers and beauty salon workers while nearly one-fourth 94 (22.9%) had an education at college level or above (Table 1).
Knowledge and attitude status about COVID-19
Our findings showed that nearly three-fourths 73
Behavioral and environmental factors
Of 410 barbers and beauty salon workers, nearly two-thirds 267 (65.1%) of the workers believed in the effectiveness of hand hygiene in preventing COVID-19 while just over onethird 143 (34.9%) of them did not believe in the effectiveness of hand hygiene. In this study, most 283 (69.5%) of the workers believed that there are no curative treatments for COVID-19. Just over half 211 (51.5%) of the workers perceived that they were vulnerable to COVID-19 and nearly half 199 (48.5%) did not feel vulnerable to the virus. With regard to environmental factors, in most 225 (54.9%) of the barbershops and beauty salons, there was a water source close by. In this study, nearly half 203 (49.5%) of the barbershops and beauty salons had a hand-washing facility with water and soap and the remaining 207 (50.5%) had no hand-washing facility ( Table 2).
Hand hygiene practices
The proportion of good hand hygiene practice among barbers and beauty salon workers was 52.9% [95% CI: 48.5-57.9]. One-fourth 104 (25.4%) of the barbers and beauty salon workers always practised hand hygiene before putting on a facemask and almost one-third 123 (30%) of them always did so after removing a facemask. Only 88 (21.5%) of the workers always practised good hand hygiene after coughing, sneezing, or blowing their noses ( Table 3). Out of the total 410 barbers and beauty salon workers, only 80 (19.5%) practised hand hygiene by washing with water only. Nearly one-fourth 100 (24.4%) used water and soap and more than half 230 (56.1%) used, at various times, either water and soap or alcohol-based hand sanitizer. In the majority 254 (62%) of the barbershops and beauty salons, alcohol-based hand sanitizer was available; and more than two-thirds 290 (70.7%) of barbers/salon workers washed their hands for �20 seconds (Table 4).
Multivariable analysis of factors associated with good hand hygiene practices
From the bivariate analysis, sex, age, educational level, household size, work experience, presence of IPC guidelines, knowledge about COVID-19, attitude towards taking precautions against COVID-19 (Table 5), belief in the effectiveness of hand hygiene practices and presence of a hand-washing facility with soap and water were retained into multivariable analysis since these variables had a p-value < 0.25 from the bivariate analysis (Table 6). From multivariable logistic regression analysis, female sex, educational level of college or above, positive attitude towards taking precautions against COVID-19, belief in the effectiveness of hand hygiene practices and presence of a hand-washing facility with soap and water showed significant association with practice of good hand hygiene among barbers and beauty salon worker (Table 7).
We found that the odds of practicing good hand hygiene among female barbers and beauty salon workers were 2.17 times (AOR = 2.17, 95% CI: 1.29-3.65) higher than male barbers and beauty salon workers. The odds of developing good hand hygiene practice among barbers and beauty salon workers with an educational level of college or above were 5.53 times (AOR = 5.53, 95% CI: 2.85-10.71) higher than those with a lower level of education. On the other hand, the odds of practicing good hand hygiene among individuals with a positive attitude towards taking precautions against COVID-19 were 2.4 times (AOR = 2.4, 95% CI:1.46-4.17) higher than those with a negative attitude towards taking such precautions.
Similarly, the odds of practicing good hand hygiene among barbers and beauty salon workers who believed in the effectiveness of hand hygiene practices were 3.78 times (AOR = 3.78, 95% CI: 2.18-6.55) higher than those who did not believe in the effectiveness of hand hygiene. Furthermore, the odds of practicing good hand hygiene among barbers and beauty salon workers were 5.5 times (AOR = 5.55, 95% CI: 3.28-9.40) higher than those whose shops/salons had no hand-washing facility (Table 7).
Discussion
Hand hygiene is usually used as a second factor for controlling the spread of disease if contact occurs [34]. But in the case of barbers and beauty salon workers for whom contact is mandatory, hand hygiene is the best option. Therefore, this study was conducted to assess hand hygiene practices and associated factors among barbers and beauty salon workers in Dessie City and Kombolcha Town. We found that 52.9% of the barbers and beauty salon workers had good hand hygiene practices. Our findings showed that good hand hygiene practice was significantly associated with sex, educational level, attitude towards taking precautions against COVID-19, belief in the effectiveness of hand hygiene practices and the presence of a handwashing facility with soap and water. Hand hygiene has been known to prevent respiratory infections. During SARS and H1N1 influenza outbreaks, hand hygiene with soap and water or alcohol-based hand sanitizer played a significant role in the reduction of the outbreaks [35][36][37]. Similarly, hand hygiene has been proven to prevent the transmission of COVID-19 [38][39][40]. Although there is evidence that hand hygiene can reduce respiratory diseases, in our study, only 52.9% of the barbers and beauty salon workers practised good hand hygiene. The result was lower than that found by a study in a similar area among taxi drivers, which was 66.4%. This difference may have been due to the difference in study period and type of question used [29]. The result was also lower than found in other studies from Ethiopia (82%), (76%) and (95.5%) [41][42][43], Nigeria (95.3%) and (69.9%) [44,45], Malaysia (87.8%) [46], Poland (58.4%) [21], Japan (58.5%) [47], China (79.44%) [48], and United States (85.2%) [49]. The possible reason for the lower proportion of good hand hygiene in our study area might have been due to the fact that our study was conducted at the later stage of the outbreak when compliance with hand hygiene recommendations had gone down. This finding was supported by a recent study where compliance with hand hygiene recommendations was lower in a later stage of the outbreak [19]. This reduction in compliance during the later stage of the outbreak might be due to adaptation to the disease and the presence of vaccines. However, this study reports a higher proportion of good hand hygiene practice than found by previous studies from Ethiopia (14.9%) and (43.0%) [50,51], China (42.05%) [52], Indonesia (27.1%) [53], Vietnam (31%) [54], Turkey (42.4%) [55] and from a systematic review finding (40%) [56]. The reason for this discrepancy might be the difference in the socioeconomic status, different scoring systems and the type of questions used. The difference could also be due to the fact that this study was conducted at a time when the cost of alcoholbased hand sanitizer had gone down compared with the cost during the earlier stage of the outbreak.
In the present study, being female showed a significant association with good hand hygiene practices. Similar results were found by previous studies in Ghana [57], China [52], Switzerland [58], Turkey [55,59], United States [49], Poland [60], Korea [61,62] and the result from a review of studies [32,33,63,64]. This could be due to a greater perceived susceptibility to disease amongst women compared to men. It could also be due to the presence of water and soap for hair washing purposes in female beauty salons being more common compared with male barbershop where hair washing services are less common.
Educational level showed a direct association with hand hygiene practices among barbers and beauty salon workers. This result was supported by recent studies in Kenya [65], China [52], Turkey [59], Vietnam [66], Switzerland [58] and a review of studies [32,33]. The possible reason for the association of educational level with hand hygiene practices might have been that having a higher educational level influences the ability to seek and understand health information and actions to prevent COVID-19. However, an inverse relationship between educational level and hand hygiene practices was observed in studies conducted in Hong Kong [67,68]. The reason for this might be due to the variation in socio-demographic characteristics of the population.
In this study, a positive attitude towards taking precautions against COVID-19 among barbers and beauty salon workers was positively associated with good hand hygiene practices. Barbers and beauty salon workers with a positive attitude towards taking precautions against COVID-19 were 2.4 times more likely to practice good hand hygiene compared with those with a negative attitude towards taking precautions against COVID-19. Consistent results were shown in recent studies in the United States [49,69] and China [33]. The reason might be that those barbers and beauty salon workers with positive attitudes felt compelled to practice good hand hygiene since attitude is the driving force for practices. Our study also found an association of hand hygiene practices with belief in the effectiveness of hand hygiene. In this study, barbers and beauty salon workers who believed in the effectiveness of hand hygiene in the prevention of COVID-19 were 3.7 times more likely to practice good hand hygiene than those who did not have this belief. This result is supported by studies in Korea [61,62], England [70] and Hong Kong [71,72]. The possible explanation might be the fact that belief is an influential determining factor of good practices.
Over 2 billion people in the world lacked a hand-washing facility with soap and water. In sub-Saharan Africa, more than 50% of the population are without a hand-washing facility [73]. Similarly, in our study only 49% of barbershops and beauty salons had a hand-washing facility with soap and water. This hinders practice of hand hygiene, and as a result, it promotes the spread of COVID-19. In this study, workers in barbershops and beauty salons with a handwashing facility were 5.5 times more likely to practice good hand hygiene than those in shops and salons with no hand-washing facility. Similar findings have been reported by other studies [23,74]. This could be due to the fact that the barbers and beauty salon workers are alarmed by the virus and motivated by the presence of a hand-washing facility.
Limitations of the study
This study had several limitations. Although the presence or absence of a hand-washing facility with soap and water and alcohol-based hand sanitizer was determined by the data collectors' observation, the proportion of workers following good hand hygiene practices was determined based on self-report that was not verified using direct observation, and therefore was subject to recall and social desirability biases [75]. In addition, since the study was conducted only in Dessie City and Kombolcha Town, the finding is not generalizable to all barbers and beauty salon workers at the national level. Moreover, due to limited access to studies on hand hygiene practices among barbers and beauty salon workers; the discussion was made on the basis of the findings with other target groups.
Conclusion
This study showed that the proportion of barbers and beauty salon workers who practised good hand hygiene in Dessie City and Kombolcha Town was 52.9%. The predictors of good hand hygiene practices were sex, educational level, attitude towards taking precautions against COVID-19, belief in the effectiveness of hand hygiene practices and the presence of a handwashing facility with soap and water. Therefore, it is recommended that training be provided for barbers and beauty salon workers to enhance their hand hygiene practices. In addition, government and non-government organizations should work together to provide alcoholbased hand sanitizer at a low cost to those shop/salon locations that are without access to a hand-washing facility with water and soap. | 2022-07-02T06:17:24.022Z | 2022-07-01T00:00:00.000 | {
"year": 2022,
"sha1": "155bc8220af64cd5ed382906495efb3ef860343b",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "9454e0db016f0cb8127894231abdb6aea16a7e96",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
19346576 | pes2o/s2orc | v3-fos-license | Evaluation of Urbanization Dynamics and its Impacts on Surface Heat Islands: A Case Study of Beijing, China
As the capital of China, Beijing has experienced a continued and rapid urbanization process in the past few decades. One of the key environmental impacts of rapid urbanization is the effect of urban heat island (UHI). The objective of this study was to estimate the urbanization indexes of Beijing from 1992 to 2013 based on the stable nighttime light (NTL) data derived from the Defense Meteorological Satellite Program’s Operational Line Scanner System (DMSP/OLS), which has became a widely used remote sensing database after decades of development. The annual average value nighttime light Digital Number (NTL-DN), and the total lit number and urban area proportion within Beijing’s boundary were calculated and compared with social-economic statistics parameters to estimate the correlation between them. Four Landsat thematic mapper (TM) images acquired in 1995 and 2009 were applied to estimate the normalized difference vegetation index (NDVI) and normalized land surface temperature (LSTnor), and spatial correlation analysis was then carried out to investigate the relationship between the urbanization level and NDVI and LSTnor. Our results showed a strong negative linear relationship between the NTL-DN value and NDVI; however, in contrast, a strong positive linear relationship between existed between the NTL-DN value and LSTnor. By conducting a spatial comparison analysis of 1995 and 2009, the vegetation coverage change and surface temperature difference were calculated and compared with the NTL-DN difference. Our result revealed that the regions of fast urbanization resulted in a decrease of NDVI and increase of LSTnor. In addition, choropleth maps showing the spatial pattern of urban heat island zones were produced based on different temperatures, and the analysis result indicated that the spatial distribution of surface temperature was closely related with the NTL-DN and NDVI. These findings are helpful for understanding the urbanization process as well as urban ecology, which both have significant implications for urban planning and minimize the potential environmental impacts of urbanization in Beijing.
Introduction
Currently, more than 50% of the world's population lives in urban areas, with this proportion likely to keep increasing in developing countries. Urbanization is taking place at a spectacular rate worldwide, particularly in China during the past few decades. With continuous rural-urban migration and urban expansion, more megacities are continuously emerging in China [1,2]. Along with rapid urbanization and modernization, some environmental issues, such as water pollution [3], air pollution [4], greenhouse gas emissions [5], and enhanced urban heat islands [6], are arising in remote-sensing data, including the integrated application of DMSP/OLS NTL imagery and Landsat TM products, and estimated the interrelationship between the spatial distribution of urbanization level with the LST nor and NDVI. Furthermore, the comparison analysis was carried out from the DMSP/OLS NTL in 1995 and 2009 to probe into the variation tendency of LST nor and NDVI in different years. In addition, the target region was divided into four categories based on temperature distribution character in each year, and statistical and comparative analysis among the different urban heat island zones were used to explore the correlation between urban heat island effects, and the NTL and NDVI.
Data and Methods
As the capital of China, Beijing is well known worldwide as a famous ancient city and an international metropolis, which is located at 39 • 56' N and 116 • 20' E. Figure 1 shows the geographic location of the study area ( Figure 1a) and administrative districts of the Beijing city ( Figure 1b). As the political, cultural and economic center of the People's Republic of China, Beijing city covers an area of 16,808 km 2 . In terms of topography, two-thirds of Beijing are mountainous areas where the city is surrounded by mountains from all sides, except for the southeast of the city, where a plain slopes slightly to the Bohai Rim. Beijing city is characterized by a warm temperature zone and has a typical continental monsoon climate with four distinct seasons, including a hot and rainy summer and a cold and dry winter.
Remote Sens. 2017, 9,453 3 of 16 urbanization level with the LSTnor and NDVI. Furthermore, the comparison analysis was carried out from the DMSP/OLS NTL in 1995 and 2009 to probe into the variation tendency of LSTnor and NDVI in different years. In addition, the target region was divided into four categories based on temperature distribution character in each year, and statistical and comparative analysis among the different urban heat island zones were used to explore the correlation between urban heat island effects, and the NTL and NDVI.
Data and Methods
As the capital of China, Beijing is well known worldwide as a famous ancient city and an international metropolis, which is located at 39°56' N and 116°20' E. Figure 1 shows the geographic location of the study area ( Figure 1a) and administrative districts of the Beijing city ( Figure 1b). As the political, cultural and economic center of the People's Republic of China, Beijing city covers an area of 16,808 km 2 . In terms of topography, two-thirds of Beijing are mountainous areas where the city is surrounded by mountains from all sides, except for the southeast of the city, where a plain slopes slightly to the Bohai Rim. Beijing city is characterized by a warm temperature zone and has a typical continental monsoon climate with four distinct seasons, including a hot and rainy summer and a cold and dry winter. After 1949 planners were concerned with defining the city as a political and cultural center, the city has been experiencing unprecedented urbanization over the last several decades [27]. The rapid economic growth of Beijing is not only associated with the large-scale migration of population [28], but is also connected with many environmental consequences including UHI effects. Urbanization is a quite complicated phenomenon, with many driving forces responsible for the urbanization of a city. As the capital city of China, the situation in Beijing is more complicated, and it is important to evaluate the transformation processes to support policy making for environmental management and urban planning. Thus, investigating the spatial and temporal urbanization situation and understanding the UHI development situation in Beijing is crucial for the sustainable development of the city in the context of policy making. For the present study, the following methodology was adopted, which involved the collection of social-economic statistics, satellite data collection, preprocessing of the remote sensing imagery, preparation of NDVI maps, retrieval of LSTnor maps, construction of choropleth maps of urban heat island zones and correlation analysis studies. The data sets used in this research are shown in Table 1 below. After 1949 planners were concerned with defining the city as a political and cultural center, the city has been experiencing unprecedented urbanization over the last several decades [27]. The rapid economic growth of Beijing is not only associated with the large-scale migration of population [28], but is also connected with many environmental consequences including UHI effects. Urbanization is a quite complicated phenomenon, with many driving forces responsible for the urbanization of a city. As the capital city of China, the situation in Beijing is more complicated, and it is important to evaluate the transformation processes to support policy making for environmental management and urban planning. Thus, investigating the spatial and temporal urbanization situation and understanding the UHI development situation in Beijing is crucial for the sustainable development of the city in the context of policy making. For the present study, the following methodology was adopted, which involved the collection of social-economic statistics, satellite data collection, preprocessing of the remote sensing imagery, preparation of NDVI maps, retrieval of LST nor maps, construction of choropleth maps of Remote Sens. 2017, 9, 453 4 of 16 urban heat island zones and correlation analysis studies. The data sets used in this research are shown in Table 1 below. The indicators were primarily selected according to the following general selection criterion: (1) choose the most cited indicators; (2) cover the components of urbanization processing and (3) choose the simplest indicators to facilitate indicators. Permanent population and gross domestic product (GDP) were selected as representative indicators of demographic and economic, in addition, these two indicators were the most widely used socio-economic indexes to establish the relationship with the urbanization processes based on DMSP/OLS NTL, which have been estimated by previous researchers [9,10]. Moreover, studies have confirmed that energy consumption and electricity consumption have close relationships with DMSP/OLS NTL based urbanization estimating [13,30]. Besides, popularization and promotion of natural gas in some ways could reflect the development of energy industry structure in the city, there is also little research on establishing relationship with DMSP/OLS NTL, as a result, total consumption of natural gas was selected. Freight traffic plays an important role during the development of a city, length of highways and the total number of civil motor vehicles were chose to reflect the freight traffic development in Beijing. Therefore, the data collection includes seven annual indexes: GDP; permanent population; total energy consumption; total consumption of natural gas; the total electricity consumption; length of highways; and the total number of civil motor vehicles. [31]. Annual composites were produced for each satellite using the highest-quality of collected data. These images are grid-based compositions with a 0-63 digital number (DN) and a 30 arc-second (approximately 1 km at the equator) spatial resolution for pixels.
Inter-Calibration
Due to the lack of on-board calibration, the NTL-DN among annual DMSP/OLS NTL images were not compatible. To improve the comparability of NLT data in Beijing from the period 1992-2013, a second order regression model proposed by Elvidge et al. [32] was utilized to generate an inter-calibrated DMSP/OLS NTL time series and was matched with the composite of F12 in 1999 to minimize the effects of variation among sensors. On this basis, the NTL series can be adjusted to the same radiometric baseline [32]. As the model was empirically developed using the region with little NTL change as the reference data, by analyzing the socio-economic characteristics based on GDP and built-up area data for Chinese cities from 1992-2013, Yichun City in Heilongjiang Province, China was selected as the reference region. A set of annual composites were compared with F121999 for the city of Yichun using the regression model below. In Equation (1), DN is the original NTL-DN value, DN calibrated is the inter-calibrated, a, b, and c are coefficients.
Intra-Annual Composition and Inter-Annual Correction
To make full use of the data collected from two satellites for the same year and to remove the intra-annual unstable lit pixels, a systematic correction method proposed by Liu et al. [33] was applied as intra-annual composition. The composition produced one intra-annual composite for the year with two NTL images from the same year.
In addition, to make sure the NTL based urban dynamics matched the actual process and to remove the discrepancies in the multi-year dataset, an assumption was proposed that the area would develop continuously, leading to an only grow trend of NTL-DN value over time [31]. The inter-annual correction carried out was applied to each NTL image [33].
Calculation of DMSP/OLS NTL Indicators
Our method for analyzing the dynamics of urbanization processes in Beijing includes three main steps: calculating the annual average NTL-DN; counting the total number of lit pixels; calculation the urban area proportion.
Calculation of the annual average NTL-DN
In this study, we used the annual average NTL-DN as one of three indicators to estimate the annual degree of urbanization in Beijing. The annual average NTL-DN was defined as the annual average NTL-DN value of all pixels located in the administrative districts. By taking the pixels located at the boundaries into consideration, the DN value of each pixel was multiplied by the area that it located. In addition, the calculation was made after the preprocessing of the DMSP/OLS NTL imagery, and can be illustrated by Equation (2). In the equation, DN pixel and Area pixel are the NTL-DN value and the trapezoid area of each pixel, Area sum is the sum of the area within the administrative boundary.
2. Counting the total number of lit pixels The total number of lit pixels explained the overall level of how many pixels were lit within the scope of the study region. The total number of pixels derived from the satellite within Beijing city was 24,991, and after inter-annual correlation, the total number of pixels greater than 0 in each year from each satellite was counted. For the year with two satellite datasets, the pixel was counted sonly if it was lit in either dataset.
Urban area proportion
For the extraction of lit urban area, we adopted the threshold technique developed by Henderson et al. [34]. Furthermore, the optimal thresholds for extraction were determined based on the method provided by Gao et al. [35]. As Beijing is in the Northern Coastal China (NCC) region, the thresholds of NCC were exploited for the extraction.
Preprocessing of Landsat TM Images
Four Landsat 5 TM images were collected on 16 September 1995 (Landsat Scene ID: LT51230321995259HAJ00 and LT51230331995259HAJ00) and 22 September 2009 (Landsat Scene ID: LT51230322009265IKR00 and LT51230332009265IKR00) from the United States Geological Survey (USGS) Earth Explorer website [36]. The Landsat TM images were all acquired under clear sky conditions. The images were further rectified to the Universal Transverse Mercator projection system (datum WGS84, UTM zone N50). The Landsat TM products were composed of seven bands, six of them in the visible and near infrared, with only one band located in the thermal infrared region.
To cover the whole administrative region of Beijing, two images were merged together as a preliminary step. The next step consisted of the calculation of at-sensor spectral radiance in converting image data into a physically meaningful common radiometric scale. Equation (3) was applied to convert the digital numbers for both reflective and thermal bands to at-sensor radiance [37,38]: where L λ is the spectral radiance at the sensor's aperture in W/(m 2 sr µm); Q cal is the quantized calibrated pixel value in DN; Q calmin is the minimum quantized calibrated pixel value corresponding to LMI N λ ; Q calmax is the maximum quantized calibrated pixel value corresponding to LMAX λ ; LMI N λ is the spectral radiance that is scaled to Q calmin in W/(m 2 ·sr·µm); and LMAX λ is the spectral radiance that is scaled to Q calmax in W/(m 2 ·sr·µm).
To reduce the between-scene variability of relatively clear Landsat scenes for all bands, we adopted top-of-atmosphere (TOA) reflectance to correct the cosine effect of solar zenith angles and solar irradiance [37]. The combined surface and atmospheric reflectance was computed as per Equation (4) [38]: where ρ λ is the unitless planetary reflectance; L λ is the spectral radiance at the sensor's aperture in W/(m 2 ·sr·µm); d is the earth-sun distance (in astronomical units); ESUN λ is the mean solar exo-atmospheric irradiance in W/(m 2 ·µm); and θ s is the solar zenith angle in degrees.
Normalized Difference Vegetation Index Calculation
The NDVI index is a measure of the amount and vigor of vegetation at the surface. As vegetation reflects well in the near infrared part of the spectrum, NDVI has become a simple graphical indicator to assess the vegetation coverage of the target. Numerous studies have focused on understanding the relationship between LST and NDVI [26] and concluded that a close relationship between them existed. For this study, to explore the relationship between vegetation coverage and LST and DMSP/OLS based urbanization level, the Landsat image based NDVI was calculated as the reflection of the vegetation coverage. Data supplied by Band 3 and Band 4 can be used to construct this vegetation index according to the following equation:
Estimating Land Surface Temperature
The procedure described by Weng et al. [39] was adopted for the retrieval of LST. The thermal infrared band (Band 6) was utilized to map LST, and the processing was composed of four parts:
1.
First, converting spectral radiance to at-sensor spectral, which is a more physically useful variable, and was calculated by Equation (3); 2. Second, to converting the result of Step 1 to at-sensor brightness temperature, this proceeded under the assumption that the earth's surface is a black body, and using the formula below [38]; where T B is the effective at sensor brightness temperature in Kelvin; L λ is the spectral radiance at the sensor's aperture in W/(m 2 ·sr·µm); K 1 and K 2 are the calibration constant for Landsat TM; K 1 is 607.76 W/(m 2 ·sr·µm); and K 2 is 1260.56 K.
3.
Third, the obtained brightness temperatures were referenced to a black body, which differ from the properties of real objects. Correction for spectral emissivity (ε) is a must and the emissivity corrected surface temperature was computed as follows [40]: where T s is the surface radiant temperature in Kelvin (k); T B is the effective at sensor brightness temperature in Kelvin; λ is the wavelength of emitted radiance, herein, λ = 11.5 µm [41]; Js ; c is the velocity of light 2.998 × 10 8 m/s ; and ε is the surface emissivity which can be calculated from NDVI [42,43], the estimation was shown in Table 2 below.
As the surface radiant temperature is in Kelvin, which differs from the commonly used centigrade, the LST in Celsius was calculated by adding the absolute zero (approximately −273.15 • C) [44].
4.
Finally, to ensure that the images from different years were comparable, normalized land surface temperature was put forward, which quantified the LST range from 0-1. The calculation required the equation below: where LST nor is the normalized land surface temperature; LST min and LST max are the minimum and maximum value obtained within the study region, respectively.
Urbanization Dynamics Process Assessment in Beijing
In this study, the annual average NTL-DN value under Beijing's boundary was calculated using Equation (2). The dynamics of urbanization, which was estimated by DMSP/OLS NTL, are shown in Figure 2a. By preprocessing the images, the calibrated annual average NTL-DN value was calculated and is displayed in red. The average NTL-DN increased 1.56 times, from 11.73 in 1992 to 18.28 in 2013, with an average annual growth rate of 2.16%. Via calibration, we obtained a smooth result of an average NTL-DN trend, which seems more reasonable than the initial NTL data. The total lit number and urban area proportion trends are shown in Figure 2b 3.46 times from 4.86% in 1992 to 16.83% in 2013. Thus, the lit number increased more rapidly in the first decade than the next; however, the urban area proportion showed an opposite trend where it increased faster in the later decade than before.
Correlation Analysis of the NTL Indicators with Social-Economic Variables
The relationship between the NTL indicators and social-economic variables was initially examined through a log-linear regression analysis [45]. Representatively, the Average DN-Ln(GDP), Total Lit Number-Ln(GDP) and Urban Area proportion-Ln(GDP) are shown in Figure 3. The regression analysis determined that the DMSP/OLS based indicators were closely related to the GDP of Beijing during the study period. The correlation coefficient of determination of Ln(GDP) with average NTL-DN value, total lit number and urban area proportion were 0.924, 0.807 and 0.815, respectively, with both showing a close positive correlation. With in-depth analysis through a comparison of the correlation of GDP with NTL indicators, we found that the average NTL-DN had a better correlation than the other two indexes from the approximate increase trends of GDP and Average NTL-DN. Log-linear regression was then applied between the NTL indicators and all collected social-economic parameters from 1992-1993 to estimate their correlation. Table 3 shows the correlation results, and both average NTL-DN and total lit number and urban area proportion were closely correlated with the social-economic variable. The correlation coefficients of determination of annual average NTL-DN value were all above 0.752 with an average value of 0.873. In particular, the correlation of average NTL-DN with the total consumption of natural gas is among the best, with a
Correlation Analysis of the NTL Indicators with Social-Economic Variables
The relationship between the NTL indicators and social-economic variables was initially examined through a log-linear regression analysis [45].
Correlation Analysis of the NTL Indicators with Social-Economic Variables
The relationship between the NTL indicators and social-economic variables was initially examined through a log-linear regression analysis [45]. Representatively, the Average DN-Ln(GDP), Total Lit Number-Ln(GDP) and Urban Area proportion-Ln(GDP) are shown in Figure 3. The regression analysis determined that the DMSP/OLS based indicators were closely related to the GDP of Beijing during the study period. The correlation coefficient of determination of Ln(GDP) with average NTL-DN value, total lit number and urban area proportion were 0.924, 0.807 and 0.815, respectively, with both showing a close positive correlation. With in-depth analysis through a comparison of the correlation of GDP with NTL indicators, we found that the average NTL-DN had a better correlation than the other two indexes from the approximate increase trends of GDP and Average NTL-DN. Log-linear regression was then applied between the NTL indicators and all collected social-economic parameters from 1992-1993 to estimate their correlation. Table 3 shows the correlation results, and both average NTL-DN and total lit number and urban area proportion were closely correlated with the social-economic variable. The correlation coefficients of determination of annual average NTL-DN value were all above 0.752 with an average value of 0.873. In particular, the correlation of average NTL-DN with the total consumption of natural gas is among the best, with a Log-linear regression was then applied between the NTL indicators and all collected social-economic parameters from 1992-1993 to estimate their correlation. Table 3 shows the correlation results, and both average NTL-DN and total lit number and urban area proportion were closely correlated with the social-economic variable. The correlation coefficients of determination of annual Remote Sens. 2017, 9, 453 9 of 16 average NTL-DN value were all above 0.752 with an average value of 0.873. In particular, the correlation of average NTL-DN with the total consumption of natural gas is among the best, with a determination coefficient of 0.966. Furthermore, the best correlation of total lit number with social-economics parameters was with the total consumption of natural gas with a determination coefficient of 0.954, while the average correlation determination coefficient was 0.766. For the correlation analysis with urban area proportion, the coefficients of determination were all above 0.674, and held an average value of 0.804, while the maximum of 0.916 was derived from the population. The result illustrates that DMSP/OLS NTL based urbanization indicators are effective techniques to evaluate and monitor development at city level. Figure 4c,f, respectively). From the spatial distribution in both years, we concluded that the downtown area had a smaller NDVI value and a larger value of LST nor , and that through a visual inspection, a surface urban heat island existed within the city.
Spatial Relationship Analysis of DMSP/OLS NTL and Landsat Images
An analysis based on the DN value was also conducted to explore the spatial correlation between the DMSP/OLS NTL-DN and Landsat based parameters. The calibrated NTL images are shown in (Figure 5f) were established, and in contrast to the mean NDVI, the LST nor showed a close positive correlation with determination coefficients reaching 0.651 in 1995 and 0.926 in 2009. In both years, the LST nor showed an increasing trend along with the rise of the NTL-DN value, which can be used to draw conclusions on how the urbanization level impacts the temperature distribution. The results indicate that for an area with a higher NTL-DN value (that is more urbanized), the mean LST nor within it would be higher than the surroundings, which provides strong evidence that the surface urban heat island effects are influenced by urban development. and in 2009 (Figure 4e), an obvious horizontal gradient appeared between the urban area and its surroundings in both 1995 and 2009. The value of the NDVI ranges from −0.63-0.79 in 1995, and changes to −0.93-0.75 in 2009, and it can be seen that the NDVI changed significantly due to urban expansion during this period. Furthermore, the LST in 1995 and 2009 were normalized (shown in Figure 4c,f, respectively). From the spatial distribution in both years, we concluded that the downtown area had a smaller NDVI value and a larger value of LSTnor, and that through a visual inspection, a surface urban heat island existed within the city. The results indicate that for an area with a higher NTL-DN value (that is more urbanized), the mean LSTnor within it would be higher than the surroundings, which provides strong evidence that the surface urban heat island effects are influenced by urban development.
Spatial-Temporal Comparison Analysis in 1995 and 2009
To investigate the influence of SUHI driven by urban development, the images of NTL-DN, NDVI and LST nor from two years was composited to obtain the difference between 1995 and 2009. Next, a spatial-temporal comparison analysis was carried out to explore the variation in vegetation coverage situation and the SUHI effects from the urbanization process revealed by NTL images. Figure 6a shows the spatial pattern of the NTL-DN difference between the two years through visual assessment, with the NTL-DN of the previous downtown area remaining unchanged, whereas the surrounding areas had an obvious increase in NTL-DN value, which matched the urban sprawl and expansion in Beijing during this period. With the extraction of the NTL-DN difference, correlation analysis was then carried with using the NDVI difference during the study period (Figure 6b). The NDVI difference showed a negative correlation with the NTL-DN difference, where the determination coefficient reached 0.681, and the correlation was more obvious in the range where theNTL-DN difference had a positive value. The result suggested that the urbanization process would reduce vegetation coverage, given that urban sprawl changed the vegetation area into impervious surfaces. Another correlation analysis was conducted with the NTL-DN difference and LST nor difference (Figure 6c), where the result clearly demonstrated a positive correlation between the two parameters, where the coefficient of determination was 0.664. Areas with a higher value of NTL-DN difference, which represent faster urbanization zones, and the LST nor difference showed a tendency to increase. The faster the development in the area, the higher the increase of LST nor difference. Additionally, the result further illustrated how urbanization affected the SUHI quantitatively over time in the city.
Spatial-Temporal Comparison Analysis in 1995 and 2009
To investigate the influence of SUHI driven by urban development, the images of NTL-DN, NDVI and LSTnor from two years was composited to obtain the difference between 1995 and 2009. Next, a spatial-temporal comparison analysis was carried out to explore the variation in vegetation coverage situation and the SUHI effects from the urbanization process revealed by NTL images. Figure 6a shows the spatial pattern of the NTL-DN difference between the two years through visual assessment, with the NTL-DN of the previous downtown area remaining unchanged, whereas the surrounding areas had an obvious increase in NTL-DN value, which matched the urban sprawl and expansion in Beijing during this period. With the extraction of the NTL-DN difference, correlation analysis was then carried with using the NDVI difference during the study period (Figure 6b). The NDVI difference showed a negative correlation with the NTL-DN difference, where the determination coefficient reached 0.681, and the correlation was more obvious in the range where theNTL-DN difference had a positive value. The result suggested that the urbanization process would reduce vegetation coverage, given that urban sprawl changed the vegetation area into impervious surfaces. Another correlation analysis was conducted with the NTL-DN difference and LSTnor difference (Figure 6c), where the result clearly demonstrated a positive correlation between the two parameters, where the coefficient of determination was 0.664. Areas with a higher value of NTL-DN difference, which represent faster urbanization zones, and the LSTnor difference showed a tendency to increase. The faster the development in the area, the higher the increase of LSTnor difference. Additionally, the result further illustrated how urbanization affected the SUHI quantitatively over time in the city. Figure 7a,d show the spatial pattern of urban heat island zones in 1995 and 2009, respectively. The average normalized LST was calculated within each NTL pixel to indicate the surface temperature distribution. Next, choropleth maps were produced based on the classification scheme of standard deviation, in which the first-class data values were greater than one standard deviation above the mean, and second-class values were between the mean and one standard deviation above the mean, and so on [39,46]. Thus, the area was divided into four categories: heat island zone, The average normalized LST was calculated within each NTL pixel to indicate the surface temperature distribution. Next, choropleth maps were produced based on the classification scheme of standard deviation, in which the first-class data values were greater than one standard deviation above the mean, and second-class values were between the mean and one standard deviation above the mean, and so on [39,46]. Thus, the area was divided into four categories: heat island zone, sub-heat island zone, medium temperature zone, and low temperature zone. In both 1995 and 2009, the urban heat island zone was obviously assembling in the city center, and the heat island zone had a large expansion from all directions. Figure 7b,e shows the area proportions and the average LST nor for the four categories, where the percentage of urban heat island zone increased from 9.3% in 1995 to 21.7% in 2009; the sub-heat island zone had no significant change from 29.3% to 28.6%; and both the medium temperature and low temperature zones had a decrease of area proportion, especially the low temperature area which decreased from 44.0%-35.7%. These changes indicated an adverse effect on the thermal environment through the urbanization process from 1995-2009, and that the urban heat island zone occupied a higher proportion than before.
Urban Heat Island Analysis
To better understand the relationship between the NTL-DN, NDVI and UHI zones, the statistics of average NTL-DN and NDVI in different UHI zones were obtained by super-imposing the heat island maps with images of NTL and NDVI. From Figure 7c 34, respectively, from the low temperature zone to the heat island zone. The results illustrated the spatial correlation between the surface temperature and the NTL images. The urban heat island zones located in more developed areas, the sub-heat island and medium temperature zones took second and third place, and the low temperature zones were mainly located in the western and northern parts of the city where most of the area was covered with mountains and forests that have lower NTL-DN values than downtown. Furthermore, in both 1995 and 2009, the low temperature zones had the highest value of NDVI, whereas the urban heat island zones had the lowest. These results showed a positive correlation between the proportion of green space and their cooling effects.
Remote Sens. 2017, 9,453 12 of 16 sub-heat island zone, medium temperature zone, and low temperature zone. In both 1995 and 2009, the urban heat island zone was obviously assembling in the city center, and the heat island zone had a large expansion from all directions. Figure 7b,e shows the area proportions and the average LSTnor for the four categories, where the percentage of urban heat island zone increased from 9.3% in 1995 to 21.7% in 2009; the sub-heat island zone had no significant change from 29.3% to 28.6%; and both the medium temperature and low temperature zones had a decrease of area proportion, especially the low temperature area which decreased from 44.0%-35.7%. These changes indicated an adverse effect on the thermal environment through the urbanization process from 1995-2009, and that the urban heat island zone occupied a higher proportion than before.
To better understand the relationship between the NTL-DN, NDVI and UHI zones, the statistics of average NTL-DN and NDVI in different UHI zones were obtained by super-imposing the heat island maps with images of NTL and NDVI. From Figure 7c The results illustrated the spatial correlation between the surface temperature and the NTL images. The urban heat island zones located in more developed areas, the sub-heat island and medium temperature zones took second and third place, and the low temperature zones were mainly located in the western and northern parts of the city where most of the area was covered with mountains and forests that have lower NTL-DN values than downtown. Furthermore, in both 1995 and 2009, the low temperature zones had the highest value of NDVI, whereas the urban heat island zones had the lowest. These results showed a positive correlation between the proportion of green space and their cooling effects.
Result Analysis
Urban systems do not naturally develop, and are always affected by human activities [2]. As a remote sensing database, which is closely related to anthropogenic activities, the time series DMSP/OLS images provide a consistent and timely measure to estimate social-economic dynamics and changes [7]. In this study, the calibrated annual NTL-DN value from 1992-2013 was used to reflect the average urbanization level of the city in that year, as a global index to evaluate the development level of Beijing. The results show that the human activities observed via satellite were consistent with the official statistical data. The average value of the correlation coefficient of determination between the three remote sensing based indexes with the seven statistical indicators reached 0.873, 0.766 and 0.804, respectively. Compared with the other two indicators, the coefficient of total lit number was a little less, most likely due to the characteristics of urbanization pattern in Beijing. The lit number increased quickly in the few years from 1992 as a fast period of urban sprawl, then slowed down and the urbanization level in the expanded area developed rapidly instead of a continuous outward expansion. In contrast, the social-economic statistics showing a sustained and rapid growth during the period led to less correlation with the total lit number than the other two parameters. The urbanization situation in the study period was confirmed by three fields: the average NTL-DN value (within the administrative boundaries of Beijing) presented an average urbanization situation of the city in each year; the total lit number was an indicator from which the number of pixels lit by human activities could be obtained; and the urban area proportion revealed the cases of urban area sprawl during the study period.
Land surface temperature observations acquired from remote sensing technologies such as MODIS and Landsat were applied to assess the UHI, and to analyze the relationship between surface temperature and land use and land cover (LULC) in urban areas [39]. Nighttime light imagery derived from DMSP/OLS has been widely used for detecting human settlement, dynamic mapping of urban areas and exploring the spatial urban sprawl [8,9]. For the use of these two kinds of databases in combination, Liao et al. [19] found that the DMSP/OLS NTL based energy consumption has a significantly positive correlation with the nighttime SUHI, the results indicated that the anthropogenic heat released from energy consumption is an important contributor to the urban thermal environment [19]. In addition, researchers also found that urban patterns extracted using DMSP/OLS were consistent with those extracted using Landsat products [33]. However, the number of studies estimating the correlation between the Landsat based LST with nighttime imagery from the DMSP/OLS are rare and limited. This paper proposed a new way of integrating different remote sensing data to explore the performance of NDVI and LST by Landsat TM images under different values of NTL-DN from DMSP/OLS imagery. The results clearly showed that the spatial pattern of LST nor and NDVI correlated with the pattern of urbanization level in 1995 and 2009. Moreover, by spatial comparison analysis between the two years, the spatial pattern of LST nor increase and NDVI decrease also proved to be closely related to the change of urbanization level index developed from the nighttime imagery. Furthermore, the urban heat island zone maps presented the spatial pattern of surface temperature distribution, where the results revealed that the urban heat island zone held higher NTL-DN values and lower NDVI than other zones.
Limitations and Potential Improvement
Although the DMSP/OLS time series was successfully adopted to explore urbanization dynamics and analyze the UHI situation in Beijing, there are some sources of uncertainty in this study. First, for the inter-calibration, Yichun City of Heilongjiang Province was selected as the reference region even though the area had little change of NTL during the study period; however, the inappreciable development of the city reduced the accuracy of the calibration. Second, for the calculation of the urban area proportion, we adopted the threshold technique provided by Gao et al. [35], where the thresholds for urban area extraction were determined for several periods rather than every year, which is one of the influence factors for the accuracy of extraction. Third, the calculation of the normalized LST contained limitations. The normalization relied heavily on the extreme values within the region; however, as most of the pixel values ranged in the middle area between the maximum and minimum values, normalization reduced the sensitivity of the analysis. Fourth, in this study, four temporal Landsat time series image data were used to characterize and quantify the spatiotemporal patterns of NDVI and LST nor , and correlation analysis was carried out with the annual value of NTL-DN from DMSP/OLS. As this would have created limitations for the correlation analysis between the annual composites of DMSP/OLS and Landsat images, it was necessary to develop new methods that could be applied effectively to detect the annual value of NDVI and LST nor using time series image data.
Conclusions
This study developed the annual average NTL-DN value, total lit number and urban area proportion to investigate the spatial-temporal urbanization progress in Beijing from 1992-2013. Our results offer quantitative measures for evaluating the urbanization level from three aspects over the period, and provide a new way to explore the urbanization process that is relatively accurate and comprehensive.
From both the DMSP/OLS NTL indicators and the social-economic statistics data, the results showed that Beijing experienced continued and rapid urbanization during the study period. Log-linear regression analysis between the two kinds of data sets showed close correlation with each other, with the average value of determination coefficient of annual average NTL-DN value, total lit number and urban area proportion reached 0.873, 0.766 and 0.804, respectively.
From the distribution map of NDVI and LST nor , we found that the downtown area had less vegetation coverage and a higher LST nor . Furthermore, according to the correlation analysis with the NTL-DN value, our results indicated that both the mean NDVI and mean LST nor were closely correlated with the NTL data. The NDVI and LST nor showed a tendency of decreasing and increasing, respectively, along with the growth of the NTL-DN value. The study proposed a new way of integration using different remote sensing data to explore the urban heat island effects by Landsat TM images and DMSP/OLS imagery. The results provided quantitative measurements to estimate the vegetation coverage situation and surface temperature distribution, which are significant for the eco-environment assessment of the city.
The study also explored the spatial-temporal difference of NDVI and LST nor along with the urbanization process derived from NTL data in 1995 and 2009. Along with the rapid urbanization of the city, the original land use turned into urban land, which is largely composed of an impervious surface [23]. The correlation analysis result between the NTL-DN difference with a change in NDVI and LST nor showed a close relationship, which provided strong evidence of how the urbanization process affects the thermal environment and vegetation coverage over time. In addition, by zonal classification, we also found that the urban heat island zone occupied a higher proportion than before by a growth rate of 12.4%, and that the average NTL-DN and NDVI showed an obvious gradient from the low temperature zone to heat island zone. Our results illustrated the spatial correlation between the surface temperature and the urban development situation, as well as the cooling effects of the green space, which have important implications for urban planning to mitigate the SUHI effects. | 2017-05-11T22:24:56.904Z | 2017-05-07T00:00:00.000 | {
"year": 2017,
"sha1": "55798678d9505394dc7fd157222990cc053f66a2",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2072-4292/9/5/453/pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "55798678d9505394dc7fd157222990cc053f66a2",
"s2fieldsofstudy": [
"Environmental Science",
"Geography"
],
"extfieldsofstudy": [
"Computer Science",
"Geology"
]
} |
134124026 | pes2o/s2orc | v3-fos-license | Seasonal Changes of the Bottom Sediments’ Physicochemical Characteristics in the Region of the Near-Coastal Methane Seeps
The features of seasonal changes in physical (moisture, granulometric composition) and chemical (organic carbon content and carbonate content) characteristics of the bottom sediments in the region of the coastal methane seeps in the southern sector of the Tarkhankut Peninsula are investigated. It is shown that the bottom sediments of the region under study are represented by fineand mediumgrained sand containing inclusions of aleurite-pelitic silts and shell detritus. The obtained in situ data on the content and vertical distribution of Corg and CaCO3 in the bottom sediments make it possible to examine the intra-annual dynamics of the basic geochemical parameters in the areas of the methane gas hydrates and to assess influence of the methane seeps upon the geochemical structure of the bottom sediments. The values obtained for the period under investigation can be divided into three parts: before formation of the bacterial mass (May), the period of maximum formation of mat (June – September), the time of destruction of the mat structures (November). It is shown that in May before the mat structures are formed, the Corg content in the surface 0–2 cm layer is low (0.5 %), in June it increases up to 15.5 %, then decreases to 9.7 % in September and to 2.3 % in November. In the surface layer, the CaCO3 content is maximal in late May (93.5 %); during the mat formation in June it decreases to 16.6 % and then grows from 23.5 to 49.1 % in August – November, respectively. Being analyzed, the data on vertical distribution of the geochemical characteristics under study show that accumulations of the bacterial mats affect the intra-annual variation of the above-mentioned parameters.
Introduction.
Until recently, the studies of methane in the marine environment were mostly aimed at studying the questions relating to the methane itself and the physical effect of its intake on thermochaline characteristics and exchange processes [1][2][3] but not on biogeochemical ones [4]. The main research areas on this theme are the following: the assessment of the gas emission intensity and the possibility of gas hydrates formation, the obtaining of quantitative estimates of the methane propagation during its dissolution in the water and its release into the atmosphere, as well as microbiological research aimed at studying the features of microbial consortia functioning within the framework of studying the methane cycles in the seawater [3].
Moreover, the research of methane seeps in the Black Sea was carried out mostly at the areas of continental slope and in the deepwater zones [2], i.e. at the depths of stable existence of gas hydrates and anaerobic processes. The study of methane cycle processes in the coastal area was significantly restricted by the re-search of its production in the surface layer of bottom sediments with high organic carbon (Сorg) content and anaerobic oxidation at the boundary of bottom sediments and water. The study of jet gas emissions in the coastal area was started only in the last decade and the results of these studies are poorly presented in publications [5][6][7].
The first microbiological studies of methane seeps were carried out by the scientists of the Research Center of Biotechnology RAS (Moscow) and Institute of Marine Biological Research of RAS (IMBR) in December 1990 [8]. Later, in [9] it was shown that the carbonate buildups in the anaerobic zone at the gas seep sites are covered from the outside by a dense bacterial mat. This indicates the involvement of microorganisms into the geochemical processes of methane cycle [10,11]. In later works [12,13] it was shown that the carbon of the methane itself, bacterial mat and carbonate buildups, formed as a result of anaerobic oxidation of methane, is characterized by high content of light carbon isotope.
Jet emissions of methane are the source of Corg in the marine environment; its input is a source of energy for various biogeochemical processes. In this case organic and inorganic carbons are the products of methane bacterial oxidation in aerobic (due to the oxygen) and anaerobic (during the sulfate reduction) conditions. Granulometric composition and water content of bottom sediments determine the intensity of physical fluxes and exchange between the bottom sediments and the bottom layer of water.
The purpose of the work is to estimate the impact of methane seeps on geochemical structure of bottom sediments and their intra-annual changes related to the processes of aerobic and anaerobic assimilation in the marine environment.
Materials and methods.
The Tarkhankut Cape area under study is the open part of the Black Sea where direct sources of anthropogenic pollution are absent. In the coastal part of the studying region seasonal changes of physical and chemical characteristics of bottom sediments are greatly affected by hydrodynamic factors. In winter and spring they contribute to complete destruction of mats and saturation of the bottom layer with oxygen. In summer and autumn biochemical processes of sulfate reduction play a key role [14].
In the Tarkhankut Cape area there is a predominance of north-eastern storm winds directed from the shore and southern ones directed to the shore. The performed analysis of data over 1945-2008 given in [15] revealed the fact that the frequency of winds with >10 m/s speed for all the directions is ~5%. The winds of the southern, western and south-western rhumbs with >10 m/s speed are observed, on average, in 1.1% of cases during the year, with >15 m/s speedsin 0.25% of cases. The maximum values of frequencies of winds with >10 m/s speed are recorded in January and February, in summer months their frequency is rather low.
Abrasion rate of the Tarkhankut limestone shores and benches does not exceed several centimeters per year, therefore the beaches here are small in volume and they stay under conditions of sharp deficit of sediments. Exposure to intensive wave processing during the storms contributes to good sorting of sediments [16].
The samples for studying physical and chemical characteristics of the bottom sediments, the features of their spatial distribution and the vertical structure were taken during complex in situ studies in the area of coastal jet emissions of methane in the northwestern part of the Crimean Peninsula (Okunevka, Chernomorskiy district) (Fig. 1). The work [17] is devoted to the study coastal methane seeps in this region. Currently, ~7 coastal methane seeps along 10 km coastline are found. The areas of methane seepage make up 5-143 m 2 . The studied filtering area reaches 20 m in diameter and is located about 50 m from the shore at ~5 m depth. Expeditions were performed in different seasons: in spring (May), in summer (June, August) and in autumn (September, November). Methane emissions were observed during the entire period of research from April to November 2016. Natural water content was determined gravimetrically according to standard procedures (GOST R ISO 11465-2011; introduced since 01.01.2013). The calculation of the water content in the samples was taken as the ratio of the difference between the wet and dry deposits to the wet deposit, taken in percentage. Grain size analysis was carried out by the standard method (GOST 12536-2014; introduced since 01.07.2015) according to recommendations given in [18]. Separation of aleurite-pelitic fraction (≤0.05 mm) was performed by wet screening method. Coarse fractions (>0.05 mm) were separated by sieving method after the drying.
Carbonate (CaCO3) content in a sample was determined by volumetric weight method after the decomposition of carbonates with hydrochloric acid, taking into account the methodological recommendations of UNEP manual. As a result of repeated analysis of samples with an average value of CaCO 6.84%, an average square deviation of ± 0.18% (variation coefficient 2.6%) was obtained (UNEP/IOC/IAEA -1995).
Corg concentration in the sample was determined by the spectrophotometric method after organic substance oxidation with a sulfochromic mixture (GOST 26213-91; introduced since 30.06.1993). An improved modification of the chemical analysis methodology was applied.
The measurement of organic carbon and carbonate content was carried out twice in each sample; the mean value of the two measurements was used as measured parameters (Corg and carbonate content).
Repeatability of Corg content determination (carried out as two separated but consecutive measurements) results should not exceed 0.1% at Corg content up to 1% and 6.5%at more than 1% Corg content. Relative error of the method at up to 3% Corg content in the bottom sediments makes up 20%, at 3-5% content the permissible deviations are 15%, more than 5% of Corg -10%.
Results and discussion. Bottom sediments of the studied area are represented by fine and medium-grained sands with inclusions of aleurite-pelitic silts and shell detritus. The content of sand fraction (1-0.1 mm) varies vertically within 85-93% range (Fig. 2). Gravel material content (10-1 mm fraction) varies within 0.4-1.1% range. The presence of a finely dispersed fraction (<0.1 mm) is noted in 0-4 cm surface layer. In May its content reaches its maximum values (14.3%) in the 2-4 cm layer, then it decreases with depth down to 2.9% in 6-8 cm layer. It is shown that the proportion of aleurite-pelitic material in 0-5 cm surface layer decreased from May to September down to 0.6%, which is explained by the removal of finely dispersed material from the coastal zone. The analysis of data of natural water content vertical distribution (Fig. 3) revealed that fractional composition of sediment and its formation under effect of hydrodynamic factors play a key role in the vertical structure of water content profile. The period under study can be divided into three seasons. In spring the water content maximum value makes up 42% in the surface layer where the maximum content of slit fraction is observed (average over the profile -33%). In summer, during the period of the maximum formation of bacterial mat an increase of water content takes place: in June -93%, in August -92%. In autumn, a reduction of water content values begins. In September in 0-3 cm surface layer they make up 87%, in November they drop down to the level of the spring values (46% in 0-4 cm layer).
New in situ data on the vertical distribution of Сorg and CaCO3 values in the bottom sediments made it possible to study the intra-annual dynamics of the main geochemical characteristics in the areas of methane jet emissions and to assess the impact of mat structures on it. In June and August a comparison of profiles of Сorg and CaCO3 content in the bottom sediments was carried out for the samples taken in the central part of the mat and on its periphery.
Intra-annual variability in Сorg vertical distribution (Fig. 4) can be correlated with three stages of geochemical characteristics: before the formation of bacterial mass (May), the period of the maximum bacterial mat formation (June -September), the period of mat buildups destruction (November). It is shown that Сorg content in 0 -2 cm surface layer before the formation of mat structures in May is low (0.5%), in June it increases up to 15.5% and then it decreases down to 9.7% in September and 2.3% in November. The character of vertical distribution of the obtained characteristics persists throughout the year and the value of Сorg content decreases with depth. The results obtained in August and September showed that there are cases when the maximum values of Сorg content are noted in the subsurface layer. Such features are explained, on the one hand, by the processes of organic matter removal during storms, on the otherby its more intensive consumption and less intensive production.
The analysis of samples taken in the central part of the mat and on its periphery revealed the fact that the obtained quantitative characteristics of Сorg content have a number of differences. It is shown that in August in the central part of the mat the maximum concentrations of Сorg samples are 14.6% and they are found in 4-6 cm subsurface layer, 10.7%in 0-4 cm layer; in September 12.3% -in 3-5 cm layer, 9.7%in 0-3 cm layer. On the periphery Сorg content is maximal in 0-1 cm layer in August (4.9%) and in 0-2 cm layer in September (6.9%). The values of organic carbon concentration with the distance from the central part to the periphery significantly decrease with depth. In the surface layer they differ by two times and from 3-5 cm layerby an order of magnitude.
In the vertical structure of sample taken on the periphery in September two layers of mat are found. During the storm surface layer of mat is covered with sand on which new bacterial constructions occur. Two-step formation of mat occurs in this way. Vertical distribution of Сorg concentrations increases (from 0.3% in 4-9 cm layer to 3.6% in 9-13 cm layer), the one of CaCO3decreases at that (from 97.8 to 46.1% in the same layers).
CaCO3 content in the surface layer is maximal in the end of May (93.5%). It decreases in June (16.6%) during the mat formation period and then increases from August to November from 23.5 to 49.1%, respectively (Fig. 5). Inorganic carbon content in the sediments has an inverse relation to the organic carbon content (Fig. 6). Conclusions. Based on the latest data from field observations, the results of the study of the intra-annual dynamics of the main physical and chemical characteristics of bottom sediments in the area of coastal methane seeps were obtained. This is the uniqueness of the work.
Bottom sediments of the studied area are represented mainly by light yellow and grey sands with shell detritus inclusions, coarse material content varies within 0.4-1.1% range and in the surface layers the presence of slits is determined. Сorg content increases from 0.5% in May to 15.5% in June and then it decreases again down to 2.3% in November. It is shown that bacterial mass formation begins in May after the formation of stable vertical stratification of water column; the maximum is reached in August -September when the greatest bacterial biomass and the highest values of dissolved sulfide concentration are observed. Later, when the storm season comes, destruction and degradation of bacterial biomass layers takes place, and the processes of methane aerobic oxidation are replaced by anaerobic ones.
It is shown how the features of formation of clumps of anaerobic bacterial mats in different seasons affect the intensity of Сorg and hydrogen sulfide production at the methane assimilation. | 2019-04-27T13:09:50.774Z | 2018-05-01T00:00:00.000 | {
"year": 2018,
"sha1": "0c32141bb7b49a7726dfc0596ee6f112779f9834",
"oa_license": "CCBYNC",
"oa_url": "http://xn--c1agq7a.xn--p1ai/images/files/2018/02/201802_05.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "14e9c96a0ad1348617e8151bde3ae64e3cbb87de",
"s2fieldsofstudy": [
"Environmental Science",
"Geology"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
115593785 | pes2o/s2orc | v3-fos-license | Effect of Coil Width on Deformed Shape and Processing Efficiency during Ship Hull Forming by Induction Heating
The main hull of a ship is made up of a large number of plates with complex curvatures. Line heating is one of the main approaches used in the forming of a ship hull plate. Because line heating is based on manual heating using a handheld oxyacetylene gun, the typical heating width is extremely narrow. With the development of computer control technology, a newly developed automated plate forming equipment is available and its heat source is typically an electromagnetic induction coil. The temperature field and the induction coil size are correlated. However, investigations into the induction coil size are scarce. In this study, the effect that the induction coil width has on both the forming shape and processing efficiency is investigated via simulation and test. The results show that a moderate expansion of the induction coil width at different input powers has an insignificant impact on forming shapes that is attainable by common line heating. However, as the heating width expands with the expansion of the induction coil width, the number of the processing lines via line heating is reduced which improves the processing efficiency.
Introduction
During ship manufacturing, many plates with complex curvatures must be formed for the main hull of a ship, which is a time-consuming process. During this process, the line heating method creates local contraction deformation in a plate via a localized heating source, of which width is significantly narrower in size than the plate length or width as this processing method is based on moving a handheld oxyacetylene gun over a processed plate. In addition, it is believed that a narrower flame is beneficial to the formation of plate shapes with small curvatures. This method is widely deployed.
It is difficult to control the heat input when using a flame heat source. For line heating-based automated forming equipment, an electromagnetic induction heating has become a new heat source for the hot forming of ship hull plates because it is easy to control. Ueda [1,2] investigated the pattern of ship hull plates forming by line heating, discussed the heating line deployment principle and the heating criteria based on the theory of inherent strain, and verified and modified the path produced via induction heating. This work laid the foundation for the development of automated line heating forming equipment for ship hull plates. With the successful development of an automated system for ship hull plates forming by line heating [3], induction heating devices are gradually replacing manual oxyacetylene guns and are becoming a common heat source in line heating processing. Induction The induction heating frequency used for ship hull forming is usually 8-50 kHz. When current flows through an inductor that is near a steel workpiece, an eddy current is induced at the surface of the plate. Induction heat is generated in the plate.
The eddy current during induction heating can be given by Maxwell's equations in the frequency domain as [12]: where stands for permeability, is the magnetic vector potential, is the angular velocity, is the electric conductivity and is the external current. The magnetic vector potential can be obtained from Equation (1) when the external current is given. The resistance heat generated by the eddy current in the plate can be obtained as follows: where refers to eddy current density and can be described as = (− ).
For induction heating, the eddy current generated inside the steel plate is mainly concentrated on the plate surface. For a plate during induction heating, the heat flux can be considered as a surface heat source and obtained as follows: refers to the heat flux on the surface of the steel plate and denotes the plate thickness. The heat flux calculated before is used as the initial input for thermal analysis. The heat flow in a plate can be expressed as follows: where refers to density, is the specific heat capacity, is the temperature of the plate and is the thermal conductivity. Thermal deformation calculation is followed after the temperature field, which is treated as the thermal load obtained from Equation (4).
The basic equations related to mechanical analysis are considered as follows: , + = 0 (5) where refers to stress tensor and represents the body force. The induction coil used in this study is shown in Figure 2. The entire coil is divided equally into three internal frames, as shown in Figure 2a. Considering plate bending deformation, the coil is designed to be slightly curved, and the horizontal curvature radius is 1000 mm. The coil projection The induction heating frequency used for ship hull forming is usually 8-50 kHz. When current flows through an inductor that is near a steel workpiece, an eddy current is induced at the surface of the plate. Induction heat is generated in the plate.
The eddy current during induction heating can be given by Maxwell's equations in the frequency domain as [12]: 1 where µ stands for permeability, A is the magnetic vector potential, ω is the angular velocity, σ is the electric conductivity and J S is the external current. The magnetic vector potential A can be obtained from Equation (1) when the external current J S is given. The resistance heat generated by the eddy current in the plate can be obtained as follows: where J e refers to eddy current density and can be described as J e = σ(− ∂A ∂t ). For induction heating, the eddy current generated inside the steel plate is mainly concentrated on the plate surface. For a plate during induction heating, the heat flux can be considered as a surface heat source and obtained as follows: where q refers to the heat flux on the surface of the steel plate and t denotes the plate thickness. The heat flux calculated before is used as the initial input for thermal analysis. The heat flow in a plate can be expressed as follows: where ρ refers to density, c is the specific heat capacity, T is the temperature of the plate and λ is the thermal conductivity. Thermal deformation calculation is followed after the temperature field, which is treated as the thermal load obtained from Equation (4). The basic equations related to mechanical analysis are considered as follows: where σ ij refers to stress tensor and F bi represents the body force. The stress-strain relationship can be expressed as: where {dσ} refers to the stress increment, [D] is the elastoplastic matrix, {dε} is the strain increment, {C} is the thermal stiffness matrix and dT is the temperature increment. Deformation of the plate can be obtained by solving the equations under actual boundary conditions. The induction coil used in this study is shown in Figure 2. The entire coil is divided equally into three internal frames, as shown in Figure 2a. Considering plate bending deformation, the coil is designed to be slightly curved, and the horizontal curvature radius is 1000 mm. The coil projection outer edge width is L C = 220 mm, the outer edge length is L B = 200 mm, and L B is the direction of coil movement. The dimensions of the plate are 1000 mm × 800 mm × 16 mm (L × B × t). The element type of the plate is DC3D8 for heat transfer simulation and C3D8R for deformation calculation. Fine grids are generated near the heating lines and the element size is 5 mm × 5 mm × 4 mm. Sparse grids are generated away from the heating lines and the element size is 50 mm × 20 mm × 8 mm. Transitional grids are generated between them and the whole model consists of 68,600 elements and 86,399 nodes. The rigid body displacement of the plate is constrained during the calculation.
Induction Heating Experiment
To verify the accuracy of the finite element method, an induction heating test is designed. In this test, an experimental platform made of triaxial motion of the coil is developed that can easily control the coil's positioning, motion direction and travel speed over the test plate. The positioning precision of the experimental platform is 1 mm; and the controllable speed precision is 1 mm/s. To ensure a stable distance between the coil and the plate after plate deformation, the coil is embedded in the frame of a small vehicle, and the trolley wheel of the vehicle maintains rolling contact with the plate, which ensures no contact between the coil and plate, as shown in Figure 2b. For the initial condition, the lowest position of the coil center is 5 mm from the bottom of the trolley wheel. The experimental platform and test scenario are shown in Figure 3. The dimensions of the plate are 1000 mm × 800 mm × 16 mm (L × B × t). The element type of the plate is DC3D8 for heat transfer simulation and C3D8R for deformation calculation. Fine grids are generated near the heating lines and the element size is 5 mm × 5 mm × 4 mm. Sparse grids are generated away from the heating lines and the element size is 50 mm × 20 mm × 8 mm. Transitional grids are generated between them and the whole model consists of 68,600 elements and 86,399 nodes. The rigid body displacement of the plate is constrained during the calculation.
Induction Heating Experiment
To verify the accuracy of the finite element method, an induction heating test is designed. In this test, an experimental platform made of triaxial motion of the coil is developed that can easily control the coil's positioning, motion direction and travel speed over the test plate. The positioning precision of the experimental platform is 1 mm; and the controllable speed precision is 1 mm/s. To ensure a stable distance between the coil and the plate after plate deformation, the coil is embedded in the frame of a small vehicle, and the trolley wheel of the vehicle maintains rolling contact with the plate, which ensures no contact between the coil and plate, as shown in Figure 2b. For the initial condition, the lowest position of the coil center is 5 mm from the bottom of the trolley wheel. The experimental platform and test scenario are shown in Figure 3. The test plate is carbon steel, of which major thermal, physical and mechanical properties are listed in Table 1 and Table 2 which is similar to Reference [13]. The plate dimensions are as follows: the length L = 1000 mm, the width B = 800 mm, and the thickness t = 16 mm. L is the coil movement direction over the plate. Here, LC/B = 0.275.
In the test, the heating line is the upper surface central line along the plate width direction (i.e., B/2). The coil length LB is aligned with the plate length L and is moved along the heating line. In the test, the electromagnetic induction frequency is 15 kHz, the heat source output power is 60 kW, and the coil travel speed is 5 mm/s. As the coil moves over the plate surface in a small rolling vehicle, the distance between the coil and the plate during heating is essentially constant. The test plate is carbon steel, of which major thermal, physical and mechanical properties are listed in Tables 1 and 2 which is similar to Reference [13]. The plate dimensions are as follows: the length L = 1000 mm, the width B = 800 mm, and the thickness t = 16 mm. L is the coil movement direction over the plate. Here, L C /B = 0.275. In the test, the heating line is the upper surface central line along the plate width direction (i.e., B/2). The coil length L B is aligned with the plate length L and is moved along the heating line. In the test, the electromagnetic induction frequency is 15 kHz, the heat source output power is 60 kW, and the coil travel speed is 5 mm/s. As the coil moves over the plate surface in a small rolling vehicle, the distance between the coil and the plate during heating is essentially constant.
In the test, the infrared temperature sensor (with a measurement range of 250-1450 • C) detects the plate temperature change during heating, as shown in Figure 3. Temperature sensors are deployed on one side and at the rear of the coil, and they move together with the coil, i.e., temperature sensors C 1 and C 2 in Figure 3. The point measured by C 2 is 20 mm from the coil rear edge. A fixed temperature measurement point is deployed on one side of the plate to measure the temperature variation at the plate center T 1 . The final deformation of the plate is measured via a three-dimensional laser scanner measurement system after the entire plate cools to room temperature.
At the initial stage of the test, the entire coil is placed over the plate, and the trailing edge of the movement direction coincides with the plate's rear end. As the coil moves continuously, the first notable phenomenon is the upward bending deformation of the plate along the B direction. In contrast, the plate deformation along the L direction is smaller and demonstrates a downward-bending trend. This phenomenon continues until the coil has completed its movement over the entire heating line. After the entire coil moves away from the plate and the plate temperature drops, the upward-bending deformation of the plate along the B direction changes slightly. However, the downward bending of the plate along the L direction gradually becomes prominent, and the plate eventually becomes saddle-shaped. In the study, similar tests are performed under other conditions with different plate thicknesses, heat source input powers and the coil movement speeds, with essentially the same results.
Although both the side ratio and shape of the coil and plate are close, the constraints from plate regions outside the heat expansion area on the coil-induced heat expansion of the plate are different along the L and B directions because the coil motion is directional; i.e., the coil moves along the plate's L direction, as shown in Figure 4. Under the local contraction force P B along the B direction, the plate bends first along the B direction. Due to the deformation constraints of the plate itself and the heating movement direction, the contraction force P L along the plate's L direction lags behind P B . As shown in Figure 4b, after bending, deformation occurs along the B direction, the P L resultant force (T L ) is applied to the centroid of the B direction section, and a bending moment M L is created. Under M L , a deformation occurs along the plate's L direction that is opposite to the plate's B bending direction. This is why all of the shapes tested by the heating along the upper surface central line of the plate in the experiments are saddles.
Comparison between Experiment and Finite Element Method
Several sections and points in Figure 5 are selected to compare the temperature and deformation from the finite element calculation with the test results. The sections include the SL1-SL1 section, which coincides with the heating line, the SL2-SL2 section at the plate end, which is parallel to the heating line, the SB2-SB2 section at the plate end, which is vertically oriented to the heating line and the SB1-SB1 section at L/2. The point selected is point T1 at the plate's center. Figure 6 compares the temperature variation from the finite element calculations with the test results at the plate's central point, T1, during heating. The diagram shows that based on the coil's dimensions, time record and speed, the relative position of the coil versus T1 at any moment can be derived. The finite element calculation results show that before 50 s, the temperature at T1 does not rise. Next, the temperature fluctuates in a small range and rises continuously until approximately 97 s. Thereafter, it declines steadily. The coil center passes T1 approximately 80 s after the beginning of the test. When the coil center and T1 coincide, the temperature at T1 is not at the highest level.
Comparison between Experiment and Finite Element Method
Several sections and points in Figure 5 are selected to compare the temperature and deformation from the finite element calculation with the test results. The sections include the S L1 -S L1 section, which coincides with the heating line, the S L2 -S L2 section at the plate end, which is parallel to the heating line, the S B2 -S B2 section at the plate end, which is vertically oriented to the heating line and the S B1 -S B1 section at L/2. The point selected is point T 1 at the plate's center.
Comparison between Experiment and Finite Element Method
Several sections and points in Figure 5 are selected to compare the temperature and deformation from the finite element calculation with the test results. The sections include the SL1-SL1 section, which coincides with the heating line, the SL2-SL2 section at the plate end, which is parallel to the heating line, the SB2-SB2 section at the plate end, which is vertically oriented to the heating line and the SB1-SB1 section at L/2. The point selected is point T1 at the plate's center. The diagram shows that based on the coil's dimensions, time record and speed, the relative position of the coil versus T1 at any moment can be derived. The finite element calculation results show that before 50 s, the temperature at T1 does not rise. Next, the temperature fluctuates in a small range and rises continuously until approximately 97 s. Thereafter, it declines steadily. The coil center passes T1 approximately 80 s after the beginning of the test. When the coil center and T1 coincide, the temperature at T1 is not at the highest level. Figure 6 compares the temperature variation from the finite element calculations with the test results at the plate's central point, T 1 , during heating. The diagram shows that based on the coil's dimensions, time record and speed, the relative position of the coil versus T 1 at any moment can be derived. The finite element calculation results show that before 50 s, the temperature at T 1 does not rise. Next, the temperature fluctuates in a small range and rises continuously until approximately 97 s. Thereafter, it declines steadily. The coil center passes T 1 approximately 80 s after the beginning of the test. When the coil center and T 1 coincide, the temperature at T 1 is not at the highest level. The finite element calculation shows that the temperature at T1 fluctuates slightly before reaching its highest level. There are two reasons for this: the coil panel is not continuous. Before passing T1, there is a large amount of heat dissipation from the coil toward the unheated "cold plate" in the front. After the coil passes T1, the "cold plate" before T1 becomes a "hot plate", and the temperature at T1 declines steadily after a short period of fluctuation and increase. Additionally, because the temperature monitoring point at T1 is blocked by the moving coil near 80 s, the measurement data during this period are invalid. Because the temperature is measured at regular intervals (a discontinuous measurement method), small temperature fluctuations are not reflected. However, the finite element calculation results and the test data are closely related throughout the process. To further validate the finite element calculation method, data are collected from the temperature sensor C2 for comparison. The temperature variation versus time at C2 is the variation of the highest temperature at the plate heating line after 4 s (C2 is 20 mm from the coil edge, and the coil travel speed is 5 mm/s). The test results are compared with the finite element calculation and shown in Figure 7, which confirms that the two match closely. Figure 8 shows the heat distribution from the finite element model of heat transmission after heating for 115 s. The plate deformation shapes from the test and the finite element calculation are shown in Figure 9. Both results are saddle-shaped. Figure 10 The finite element calculation shows that the temperature at T 1 fluctuates slightly before reaching its highest level. There are two reasons for this: the coil panel is not continuous. Before passing T 1 , there is a large amount of heat dissipation from the coil toward the unheated "cold plate" in the front. After the coil passes T 1 , the "cold plate" before T 1 becomes a "hot plate", and the temperature at T 1 declines steadily after a short period of fluctuation and increase. Additionally, because the temperature monitoring point at T 1 is blocked by the moving coil near 80 s, the measurement data during this period are invalid. Because the temperature is measured at regular intervals (a discontinuous measurement method), small temperature fluctuations are not reflected. However, the finite element calculation results and the test data are closely related throughout the process. To further validate the finite element calculation method, data are collected from the temperature sensor C 2 for comparison. The temperature variation versus time at C 2 is the variation of the highest temperature at the plate heating line after 4 s (C 2 is 20 mm from the coil edge, and the coil travel speed is 5 mm/s). The test results are compared with the finite element calculation and shown in Figure 7, which confirms that the two match closely. The finite element calculation shows that the temperature at T1 fluctuates slightly before reaching its highest level. There are two reasons for this: the coil panel is not continuous. Before passing T1, there is a large amount of heat dissipation from the coil toward the unheated "cold plate" in the front. After the coil passes T1, the "cold plate" before T1 becomes a "hot plate", and the temperature at T1 declines steadily after a short period of fluctuation and increase. Additionally, because the temperature monitoring point at T1 is blocked by the moving coil near 80 s, the measurement data during this period are invalid. Because the temperature is measured at regular intervals (a discontinuous measurement method), small temperature fluctuations are not reflected. However, the finite element calculation results and the test data are closely related throughout the process. To further validate the finite element calculation method, data are collected from the These results show that the current finite element computing model and method can accurately simulate plate deformation under induction heating.
Effect of Coil Width during Forming
Once the applicability of the finite element computing method has been validated by the test results, the finite element method is employed to study the effect of coil dimensions on forming shape and processing efficiency by modifying the coil dimension and calculating the number of heating lines required to produce a specific deformation shape in the plate.
To facilitate discussion, a square plate with equal length and width is considered, i.e., L = B. The plate length is L = 1000 mm, and its width is B = 1000 mm. The plate thickness has 2 options: t = 16 mm or t = 20 mm. Additionally, the coil outer edge length and width are equal, i.e., LC = LB. The form of the coil's panel is chosen the same as in Figure 1a. The coil moves over the plate along the plate's central line, which is the same as the test. Table 3 lists calculation schemes for different coil widths. Since both the plate and coil are square in shape, to facilitate representation, the parameter that reflects the dimension ratio of the coil versus the plate is defined as the ratio of their widths, i.e., LC/B. Normally, the coil width is always less than the plate width. Therefore, LC/B is less than 1. Five conditions are calculated, and the maximum coil width is approximately 1/3 of the plate width. The highest temperature produced by the coil in the scheme is the highest acceptable temperature for a heated plate in the heating forming of a typical ship hull plate. Figure 12 and Figure 13 show the plate deformations caused by line heating with coils of various sizes. In the diagrams, δB represents the maximum vertical deflection of the SB1-SB1 section in Figure 5, and δL represents the maximum vertical deflection of the SL1-SL1 section in Figure 5. Figure 12 and Figure 13 show that when the coil travel speed and highest temperature are fixed, within a large coil width variation range, the plate's maximum transversal and longitudinal deflections increase with the coil dimension. Thus, the plate deformation increases with the coil dimension. These results show that the current finite element computing model and method can accurately simulate plate deformation under induction heating.
Effect of Coil Width during Forming
Once the applicability of the finite element computing method has been validated by the test results, the finite element method is employed to study the effect of coil dimensions on forming shape and processing efficiency by modifying the coil dimension and calculating the number of heating lines required to produce a specific deformation shape in the plate.
To facilitate discussion, a square plate with equal length and width is considered, i.e., L = B. The plate length is L = 1000 mm, and its width is B = 1000 mm. The plate thickness has 2 options: t = 16 mm or t = 20 mm. Additionally, the coil outer edge length and width are equal, i.e., L C = L B . The form of the coil's panel is chosen the same as in Figure 1a. The coil moves over the plate along the plate's central line, which is the same as the test. Table 3 lists calculation schemes for different coil widths. Since both the plate and coil are square in shape, to facilitate representation, the parameter that reflects the dimension ratio of the coil versus the plate is defined as the ratio of their widths, i.e., L C /B. Normally, the coil width is always less than the plate width. Therefore, L C /B is less than 1. Five conditions are calculated, and the maximum coil width is approximately 1/3 of the plate width. The highest temperature produced by the coil in the scheme is the highest acceptable temperature for a heated plate in the heating forming of a typical ship hull plate. Figures 12 and 13 show the plate deformations caused by line heating with coils of various sizes. In the diagrams, δ B represents the maximum vertical deflection of the S B1 -S B1 section in Figure 5, and δ L represents the maximum vertical deflection of the S L1 -S L1 section in Figure 5. Figures 12 and 13 show that when the coil travel speed and highest temperature are fixed, within a large coil width variation range, the plate's maximum transversal and longitudinal deflections increase with the coil dimension. Thus, the plate deformation increases with the coil dimension. For plates with different thicknesses, the calculation results show similar patterns. The diagrams also show that when the coil dimension increases to a certain level, the increasing trend of transversal deflection gradually slows. For longitudinal deflection, when LC/B > 0.3 and as the coil dimension continuously increases, the bending deflection demonstrates a declining trend. This occurs because for the transversal deflection, the distance between the heating surface of the coil and the upper surface of the deformed plate increases as the coil width increases. Hence, although the temperature at the heating line of the plate is fixed in the calculation, the high-temperature effective range decreases as this distance increases. Therefore, the deflection increase rate slows. For longitudinal deflection, as shown in Figure 4, the distance from section TL to the B-direction section centroid decreases, which reduces the bending moment ML and thus reduces the deflection in the L direction. The result shows that when coil size increases to a certain level, it is possible that heating gradually causes changes in the local characteristics, which gradually weakens the local deformation characteristics of the plate.
Width Effect on Deformation
The calculated effects of the coil width on the plate curvature are shown in Figure 14. The diagrams show that as coil size increases, the transversal maximum curvatures and curvature ranges increase accordingly. For plates with different thicknesses, the calculation results show similar patterns. The diagrams also show that when the coil dimension increases to a certain level, the increasing trend of transversal deflection gradually slows. For longitudinal deflection, when LC/B > 0.3 and as the coil dimension continuously increases, the bending deflection demonstrates a declining trend. This occurs because for the transversal deflection, the distance between the heating surface of the coil and the upper surface of the deformed plate increases as the coil width increases. Hence, although the temperature at the heating line of the plate is fixed in the calculation, the high-temperature effective range decreases as this distance increases. Therefore, the deflection increase rate slows. For longitudinal deflection, as shown in Figure 4, the distance from section TL to the B-direction section centroid decreases, which reduces the bending moment ML and thus reduces the deflection in the L direction. The result shows that when coil size increases to a certain level, it is possible that heating gradually causes changes in the local characteristics, which gradually weakens the local deformation characteristics of the plate.
The calculated effects of the coil width on the plate curvature are shown in Figure 14. The diagrams show that as coil size increases, the transversal maximum curvatures and curvature ranges increase accordingly. For plates with different thicknesses, the calculation results show similar patterns. The diagrams also show that when the coil dimension increases to a certain level, the increasing trend of transversal deflection gradually slows. For longitudinal deflection, when L C /B > 0.3 and as the coil dimension continuously increases, the bending deflection demonstrates a declining trend. This occurs because for the transversal deflection, the distance between the heating surface of the coil and the upper surface of the deformed plate increases as the coil width increases. Hence, although the temperature at the heating line of the plate is fixed in the calculation, the high-temperature effective range decreases as this distance increases. Therefore, the deflection increase rate slows. For longitudinal deflection, as shown in Figure 4, the distance from section T L to the B-direction section centroid decreases, which reduces the bending moment M L and thus reduces the deflection in the L direction. The result shows that when coil size increases to a certain level, it is possible that heating gradually causes changes in the local characteristics, which gradually weakens the local deformation characteristics of the plate.
The calculated effects of the coil width on the plate curvature are shown in Figure 14. The diagrams show that as coil size increases, the transversal maximum curvatures and curvature ranges increase accordingly. Appl. Sci. 2018, 8
Forming Capacity of a Wider Coil
The previous results show that increasing the coil width can produce a larger curvature. However, a wider coil could be unfavorable to produce shapes with smaller curvatures. In this study, the coil input power is changed to calculate and investigate the curvature coverage of a wider coil. The calculation is based on the coil used in Case 3. The plate deformation and curvature distribution under heating conditions with different input powers P are calculated, and the results are shown in Figure 15 and Figure 16.
Forming Capacity of a Wider Coil
The previous results show that increasing the coil width can produce a larger curvature. However, a wider coil could be unfavorable to produce shapes with smaller curvatures. In this study, the coil input power is changed to calculate and investigate the curvature coverage of a wider coil. The calculation is based on the coil used in Case 3. The plate deformation and curvature distribution under heating conditions with different input powers P are calculated, and the results are shown in Figures 15 and 16.
Forming Capacity of a Wider Coil
The previous results show that increasing the coil width can produce a larger curvature. However, a wider coil could be unfavorable to produce shapes with smaller curvatures. In this study, the coil input power is changed to calculate and investigate the curvature coverage of a wider coil. The calculation is based on the coil used in Case 3. The plate deformation and curvature distribution under heating conditions with different input powers P are calculated, and the results are shown in Figure 15 and Figure 16.
Forming Capacity of a Wider Coil
The previous results show that increasing the coil width can produce a larger curvature. However, a wider coil could be unfavorable to produce shapes with smaller curvatures. In this study, the coil input power is changed to calculate and investigate the curvature coverage of a wider coil. The calculation is based on the coil used in Case 3. The plate deformation and curvature distribution under heating conditions with different input powers P are calculated, and the results are shown in Figure 15 and Figure 16. Figures 15 and 16 show that for a wider coil, changing the input power can produce deflection and curvature on a corresponding scale. For ship hull plates, the formation range of a wider coil satisfies the actual processing requirements.
In the test and previous calculation, the formed plates are mainly saddle-type. In order to verify the ability of a wider coil in forming other types, the heating patterns shown in Figure 17 are calculated and the highest temperature during heating is 700 • C. The deflection of the formed plate is shown in Figure 18. It can be seen from Figures 16 and 18 that a wider coil can form different curvatures and different types of a ship hull plate. This means that the capacity of the wider coil can meet the processing needs of the ship hull plates. Figure 15 and Figure 16 show that for a wider coil, changing the input power can produce deflection and curvature on a corresponding scale. For ship hull plates, the formation range of a wider coil satisfies the actual processing requirements.
In the test and previous calculation, the formed plates are mainly saddle-type. In order to verify the ability of a wider coil in forming other types, the heating patterns shown in Figure 17 are calculated and the highest temperature during heating is 700 °C . The deflection of the formed plate is shown in Figure 18. It can be seen from Figure 16 and Figure 18 that a wider coil can form different curvatures and different types of a ship hull plate. This means that the capacity of the wider coil can meet the processing needs of the ship hull plates.
Efficiency Comparison of Different Coil Widths
In the following section, a forming deformation shape of the plate is selected to investigate the influence of the heating time with different coil widths. The calculations are based on the parallel heating of the coil in Case 5 (a narrower coil) along the plate's L direction as shown in Figure 19 and the single-pass heating of the coil in Case 3 (a wider coil) as shown in Figure 4. The two forming deformation shapes are then compared in order to study the forming shape and processing efficiency. In Case 5, the distance between the heating lines is W = 0.05 m. In both calculations, the highest temperature at the plate's heating line is fixed at 700 °C and the travel speeds at all heating lines are equal. Figure 15 and Figure 16 show that for a wider coil, changing the input power can produce deflection and curvature on a corresponding scale. For ship hull plates, the formation range of a wider coil satisfies the actual processing requirements.
In the test and previous calculation, the formed plates are mainly saddle-type. In order to verify the ability of a wider coil in forming other types, the heating patterns shown in Figure 17 are calculated and the highest temperature during heating is 700 °C . The deflection of the formed plate is shown in Figure 18. It can be seen from Figure 16 and Figure 18 that a wider coil can form different curvatures and different types of a ship hull plate. This means that the capacity of the wider coil can meet the processing needs of the ship hull plates.
Efficiency Comparison of Different Coil Widths
In the following section, a forming deformation shape of the plate is selected to investigate the influence of the heating time with different coil widths. The calculations are based on the parallel heating of the coil in Case 5 (a narrower coil) along the plate's L direction as shown in Figure 19 and the single-pass heating of the coil in Case 3 (a wider coil) as shown in Figure 4. The two forming deformation shapes are then compared in order to study the forming shape and processing efficiency. In Case 5, the distance between the heating lines is W = 0.05 m. In both calculations, the highest temperature at the plate's heating line is fixed at 700 °C and the travel speeds at all heating lines are equal.
Efficiency Comparison of Different Coil Widths
In the following section, a forming deformation shape of the plate is selected to investigate the influence of the heating time with different coil widths. The calculations are based on the parallel heating of the coil in Case 5 (a narrower coil) along the plate's L direction as shown in Figure 19 and the single-pass heating of the coil in Case 3 (a wider coil) as shown in Figure 4. The two forming deformation shapes are then compared in order to study the forming shape and processing efficiency. In Case 5, the distance between the heating lines is W = 0.05 m. In both calculations, the highest temperature at the plate's heating line is fixed at 700 • C and the travel speeds at all heating lines are equal. Considering the interval between the heating processes in Case 5, i.e., when the plate is heated for a second time, the residual temperature could affect the final plate deformation shape. This also makes it difficult to compare the heating deformation time for Case 5 with that of Case 3. For parallel heating, different intervals are applied to the two heating lines for calculation; i.e., after the first heating line is completed, different intervals are applied before the second heating. Figure 20 shows the final deflection at the plate B direction center for two heating lines subjected to parallel heating with different intervals. It shows that in parallel heating, the interval between the heating lines has an insignificant impact on the plate heating deformation. Hence, compared with heating completion time for Case 3, the parallel heating interval for Case 5 is zero in theory. This means that the processing time for Case 5 is twice that for Case 3. When the single-pass heating for Case 3 and the two-pass heating for Case 5 have the same travel speed and the same maximum temperature, the calculation results are as shown in Figure 21 and Figure 22. Considering the interval between the heating processes in Case 5, i.e., when the plate is heated for a second time, the residual temperature could affect the final plate deformation shape. This also makes it difficult to compare the heating deformation time for Case 5 with that of Case 3. For parallel heating, different intervals are applied to the two heating lines for calculation; i.e., after the first heating line is completed, different intervals are applied before the second heating. Figure 20 shows the final deflection at the plate B direction center for two heating lines subjected to parallel heating with different intervals. It shows that in parallel heating, the interval between the heating lines has an insignificant impact on the plate heating deformation. Hence, compared with heating completion time for Case 3, the parallel heating interval for Case 5 is zero in theory. This means that the processing time for Case 5 is twice that for Case 3. When the single-pass heating for Case 3 and the two-pass heating for Case 5 have the same travel speed and the same maximum temperature, the calculation results are as shown in Figures 21 and 22. Considering the interval between the heating processes in Case 5, i.e., when the plate is heated for a second time, the residual temperature could affect the final plate deformation shape. This also makes it difficult to compare the heating deformation time for Case 5 with that of Case 3. For parallel heating, different intervals are applied to the two heating lines for calculation; i.e., after the first heating line is completed, different intervals are applied before the second heating. Figure 20 shows the final deflection at the plate B direction center for two heating lines subjected to parallel heating with different intervals. It shows that in parallel heating, the interval between the heating lines has an insignificant impact on the plate heating deformation. Hence, compared with heating completion time for Case 3, the parallel heating interval for Case 5 is zero in theory. This means that the processing time for Case 5 is twice that for Case 3. When the single-pass heating for Case 3 and the two-pass heating for Case 5 have the same travel speed and the same maximum temperature, the calculation results are as shown in Figure 21 and Figure 22. shapes of the plate, Case 5 requires two-pass heating to achieve the same plate deformation shape as single-pass heating for Case 3, which means a higher forming processing efficiency for Case 3.
Based on the above calculation results, a moderate increase in coil width essentially increases the plate heating range and the local contraction force, which is favorable to increasing the deformation level of the processed plate. During heating, when the coil travel speed is fixed and for a specific plate shape, a wider coil requires less heating time than a narrower coil does. This means that a wider coil has a higher forming efficiency than a narrower one. Additionally, as the maximum temperature cannot increase infinitely, for a wider coil, the input power can be reduced to cover a narrower coil to produce a small deformation or small curvature. However, because an increase in power input results in a further increase in temperature, when the maximum temperature is constrained, it is difficult for a narrower coil to achieve a large deformation or a large curvature produced by a wider coil by increasing the input power of a narrower coil.
It is worth noting that an excessively wide coil could change the localized heating condition and reduce the local deformation of the heated plate. Therefore, the coil dimension selection should be optimized. Figure 22 show comparisons of the deformation shapes at the plate transversal and longitudinal flexure lines of center sections. For the transversal and longitudinal deformation shapes of the plate, Case 5 requires two-pass heating to achieve the same plate deformation shape as single-pass heating for Case 3, which means a higher forming processing efficiency for Case 3.
Based on the above calculation results, a moderate increase in coil width essentially increases the plate heating range and the local contraction force, which is favorable to increasing the deformation level of the processed plate. During heating, when the coil travel speed is fixed and for a specific plate shape, a wider coil requires less heating time than a narrower coil does. This means that a wider coil has a higher forming efficiency than a narrower one. Additionally, as the maximum temperature cannot increase infinitely, for a wider coil, the input power can be reduced to cover a narrower coil to produce a small deformation or small curvature. However, because an increase in power input results in a further increase in temperature, when the maximum temperature is constrained, it is difficult for a narrower coil to achieve a large deformation or a large curvature produced by a wider coil by increasing the input power of a narrower coil.
It is worth noting that an excessively wide coil could change the localized heating condition and reduce the local deformation of the heated plate. Therefore, the coil dimension selection should be optimized. For the transversal and longitudinal deformation shapes of the plate, Case 5 requires two-pass heating to achieve the same plate deformation shape as single-pass heating for Case 3, which means a higher forming processing efficiency for Case 3.
Based on the above calculation results, a moderate increase in coil width essentially increases the plate heating range and the local contraction force, which is favorable to increasing the deformation level of the processed plate. During heating, when the coil travel speed is fixed and for a specific plate shape, a wider coil requires less heating time than a narrower coil does. This means that a wider coil has a higher forming efficiency than a narrower one. Additionally, as the maximum temperature cannot increase infinitely, for a wider coil, the input power can be reduced to cover a narrower coil to produce a small deformation or small curvature. However, because an increase in power input results in a further increase in temperature, when the maximum temperature is constrained, it is difficult for a narrower coil to achieve a large deformation or a large curvature produced by a wider coil by increasing the input power of a narrower coil.
It is worth noting that an excessively wide coil could change the localized heating condition and reduce the local deformation of the heated plate. Therefore, the coil dimension selection should be optimized. | 2019-04-16T13:28:46.302Z | 2018-09-07T00:00:00.000 | {
"year": 2018,
"sha1": "f13c802eb07a964bdef0ffd7556ca5d104829c02",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3417/8/9/1585/pdf",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "8da8d56a5b3a2e2bbc71fe79f9584750f36c42d8",
"s2fieldsofstudy": [
"Engineering",
"Materials Science"
],
"extfieldsofstudy": [
"Engineering"
]
} |
16711882 | pes2o/s2orc | v3-fos-license | Potential Maximal Clique Algorithms for Perfect Phylogeny Problems
Kloks, Kratsch, and Spinrad showed how treewidth and minimum-fill, NP-hard combinatorial optimization problems related to minimal triangulations, are broken into subproblems by block subgraphs defined by minimal separators. These ideas were expanded on by Bouchitt\'e and Todinca, who used potential maximal cliques to solve these problems using a dynamic programming approach in time polynomial in the number of minimal separators of a graph. It is known that solutions to the perfect phylogeny problem, maximum compatibility problem, and unique perfect phylogeny problem are characterized by minimal triangulations of the partition intersection graph. In this paper, we show that techniques similar to those proposed by Bouchitt\'e and Todinca can be used to solve the perfect phylogeny problem with missing data, the two- state maximum compatibility problem with missing data, and the unique perfect phylogeny problem with missing data in time polynomial in the number of minimal separators of the partition intersection graph.
Introduction
The perfect phylogeny problem, also called the character compatibility problem, is a classic NP-hard [5,26] problem in phylogenetics [11,25]. Characters that have a perfect phylogeny are called homoplasy-free, i.e. they map to a tree with no horizontal evolutionary events such as recombination or gene transfer. For a collection of partially labeled (a.k.a. missing data) unrooted trees, one can construct characters that have a perfect phylogeny precisely when the collection has a compatible supertree [25]. The more general problem of supertree estimation is of wide interest.
Solutions to the perfect phylogeny problem are characterized by the existence of restricted (minimal) triangulations of the partition intersection graph [10,21,26], and minimal triangulations of the partition intersection graph also play an important role in two variants of this problem. The first, the maximum compatibility problem, asks to find the largest subset of a set of given characters that has a perfect phylogeny [7,15], and the second, asks if a set of characters has a unique perfect phylogeny 1 [24,13]. Interestingly, the unique perfect phylogeny problem is NP-hard even when a perfect phylogeny for the characters is given [6,16]. Despite considerable advances in the field of minimal triangulations, to our knowledge these results have not been extended to the aforementioned problems, although the use of such methods to solve at least the perfect phylogeny problem may have been alluded to (see p.2 of [12]).
Bouchitté and Todinca [9] used potential maximal cliques to create the first algorithm that solves minimum-fill and treewidth in time polynomial in |∆ G |, and this algorithm was improved upon in [12]. In this paper, we show how to extend the potential maximal clique approach to solve the perfect phylogeny problem, the maximum compatibility problem, and the unique perfect phylogeny problem. This approach is motivated by the following: first, the algorithms in [8,12] run in time polynomial in the number of minimal separators of the graph, and second, that data generated by the coalescent-based program ms [18] often results in a partition intersection graph with a reasonable number of minimal separators [14], despite there being an exponential number of minimal separators in general. In order to unify our approach, we use a weighted variant of the wellstudied minimum-fill problem, which is NP-hard [27] and is an active area of research [4,12].
Given full characters (a.k.a. complete data), the perfect phylogeny problem is solvable in polynomial time when the number of characters is fixed [20] or when the number of parts is bounded [1]. Our results apply to the most general setting, where the characters may be partial (a.k.a. missing data), and each character has unbounded parts (a.k.a. unbounded maxstates). See [17] for a survey on minimal triangulations, [11,25] for further reading on the perfect phylogeny / character compatibility problem, and [13] for further reading on unique perfect phylogeny.
Definitions and results
An X−tree is a pair T = (T , φ), where T is an undirected tree, and φ is a mapping from X to the nodes of T such that every node of T with degree two or one is mapped to by φ. A character on X is a partition χ = A 1 |A 2 | . . . |A r of a subset of X. For i = 1, 2, . . . , r the set A i is a cell of χ. Given a cell A of a character, the minimal subtree of T that connects φ(A) is denoted T (A). An X−tree T displays a character χ if, for each pair of distinct cells A and A of χ, the trees T (A) and T (A ) have no nodes in common. Given a set C of characters, the perfect phylogeny problem is to determine if there is an X−tree T that displays every character in C. In this case, we call T a perfect phylogeny for C, and say that C is compatible.
The perfect phylogeny problem reduces to a graph theoretic problem that we detail now. A graph is chordal if any cycle it has on four or more vertices has a chord, that is, an edge between two non-consecutive vertices in the cycle. When G is not chordal, we may add edges to G to obtain a chordal supergraph H that is called a triangulation of G. The edges added to G to obtain H are called fill edges of H. When no proper subset of H's fill edges can be added to G to obtain a triangulation, we call H a minimal triangulation of G. . An X−tree T displaying C = {abcdef |gh|ij|kl, ag|dj|f l, bh|ci|ek} and the corresponding partition intersection graph int(C). We use A to denote abcdef , and have labeled the vertices by their cells. The character abcdef |gh|ij|kl distinguishes the edges of T marked with dashes. Removing these edges results in the four subtrees defined by T (abcdef ), T (gh), T (ij), and T (kl). The dashed edges of int(C) define a proper triangulation, and the solid edges are obtained by cell intersection. If we replace ag|dj|f l and bh|ci|ek with the characters ag|bh, ci|dj, and ek|f l, we would obtain a partition intersection graph isomorphic to int(C) but with a different coloring. In that case, there is no proper triangulation because each four cycle has only two colors, and the fill edge ag, bh is monochromatic. Note that T does not display ag|bh, ci|dj, or ek|f l.
Given a set of characters C, the partition intersection graph int(C) is the graph with vertex set {(A, χ) | χ ∈ C and A is a cell of χ}, and two vertices (A, χ) and (A , χ ) are adjacent in int(C) if and only if A and A have non-empty intersection. If A 1 and A 2 are cells of a character χ, then A 1 and A 2 are disjoint because χ is a partition of a subset of X, so (A 1 , χ) and (A 2 , χ) are not adjacent in int(C). The vertex (A, χ) has cell A and character χ. A triangulation of int(C) is proper if, for each fill edge, the vertices involved in the fill edge have different characters. This may be viewed as coloring each vertex (A, χ) of int(C) by its character χ, resulting in a properly colored graph, and then proper triangulations are those whose fill edges preserve the proper coloring. If u and v are vertices of int(C) that have the same character/color, we say that u and v are monochromatic. If a triangulation of int(C) has uv as an edge, we say that uv is a monochromatic fill edge of the triangulation. See Figure 1 for an example of these concepts.
For the remainder of this section, we characterize solutions to perfect phylogeny problems as constrained minimal triangulations of the partition intersection graph, and state our algorithmic results. These problems will then be discussed in terms of minimum-weight minimal triangulations in Section 2, and we prove our computational results in Section 3, all of which rely on Algorithm 1. The connection between triangulations and perfect phylogeny stems from the following result.
Theorem 1. [10,21,26] Let C be a set of characters on X. Then C is compatible if and only if int(C) has a proper minimal triangulation.
While Theorem 1 was not originally stated in terms of minimal triangulations, it follows from the definitions that there is a proper triangulation if and only if there is a proper minimal triangulation. The set of minimal separators of int(C) are denoted ∆ int(C) (their definition appears in Section 3). Our first algorithmic result is the following theorem.
Theorem 2. Let C be a set of characters on X with at most r parts per character. There is an O(|X||C| 2 + (r|C|) 4 |∆ int(C) | 2 ) time algorithm that solves the perfect phylogeny problem.
If C is not compatible, then the maximum compatibility problem is to determine the largest subset C * of C that is compatible, and C * is an optimal solution. In order to characterize solutions to the maximum compatibility problem in terms of minimal triangulations, we must consider non-proper triangulations of the partition intersection graph. We say χ is broken by a fill edge (A, χ)(A , χ) because χ is the shared character of both vertices. Given a triangulation H of int(C), the displayed characters of H are the characters of C that are not broken by any fill edge.
Theorem 3. [7,15] Let C be a set of characters on X. Then C * is an optimal solution to the maximum compatibility problem if and only if there is a minimal triangulation H * of int(C) that has C * as its displayed characters, and for every other minimal triangulation H of int(C) with displayed characters C , |C | ≤ |C * |.
Given a set of characters C, a character weight is a function w from C to the positive real numbers (i.e. excluding zero). For a subset C of C, define w(C ) = χ∈C w(χ). The w−maximum compatibility problem is to find a the subset C * of C such that w(C * ) = max w(C ), where the maximum is taken over all compatible subsets C of C. We generalize Theorem 3 below, and reserve its proof for Section 2.
Theorem 4. Let C be a set of characters on X with character weight w. Then C * is an optimal solution to the w−maximum compatibility problem if and only if there is a minimal triangulation H * of int(C) that has C * as its displayed characters, and for any other minimal triangulation H of int(C) with displayed characters C , w(C ) ≤ w(C * ).
Our second algorithmic result is for two-state characters only. Such characters are interesting because they are related to finding compatible supertrees. In that context, an optimal solution to maximum compatibility corresponds to a supertree that agrees with the most edges from the partially labeled trees given as input.
Theorem 5. Let C be a set of (w−weighted) two-state characters on X, i.e., each χ ∈ C has two cells. There is an O(|X||C| 2 + |C| 4 |∆ int(C) | 2 ) time algorithm that solves the (w-)maximum compatibility problem.
The unique perfect phylogeny problem is to determine if a perfect phylogeny for a set of characters is the only perfect phylogeny for those characters. An edge uv of an X−tree T is distinguished by a character χ if contracting uv results in an X−tree that does not display χ, and T is distinguished by C if each edge of T is distinguished by a character of C. An X−tree T = (T , φ) is ternary if every internal node of T has degree three. Semple and Steel characterized the existence of a unique perfect phylogeny as follows.
Theorem 6.
[24] Let C be a set of characters on X. Then C has a unique perfect phylogeny T = (T , φ) if and only if the following conditions hold: 1. there is a ternary perfect phylogeny T = (T , φ) for C and T is distinguished by C; 2. int(C) has a unique proper minimal triangulation.
It is well known how to create a perfect phylogeny T = (T , φ) for C from a clique tree of a proper minimal triangulation in polynomial time (e.g. see the proof of Lemma 5.1 in [7]), and a clique tree of a chordal graph can be computed in linear time [3]. Checking if T is ternary and distinguished by C is also easy to do: an edge uv is distinguished by χ if and only if u is a node of T (A) and v is a node of T (A) for distinct cells A, A of χ. So if it is known that int(C) has a unique proper minimal triangulation, it is possible to determine if C has a unique perfect phylogeny in polynomial time. On the other hand, it has recently been shown [6,16] that if a perfect phylogeny is given for a set of characters, it is still NP-hard to determine if it is the unique perfect phylogeny for those characters 2 . That is, determining if int(C) has a unique proper minimal triangulation is NPhard [16]. This makes our last algorithmic result of interest.
Theorem 7. Let C be a set of characters on X with at most r parts per character. There is an O(|X||C| 2 + (r|C|) 4 |∆ int(C) | 2 ) time algorithm that determines if int(C) has a unique proper minimal triangulation, i.e. it solves the unique perfect phylogeny problem.
Suppose G is a non-complete graph. If U is a subset of G's vertices, then the potential fill edges pf(U ) of U are pairs of vertices of U that are not edges of G. A fill weight on G = (V, E) is a function F w from pf(V ) to the nonnegative real numbers, i.e., including zero. For a triangulation H of G with zero triangulation that is also a minimal triangulation of G, then H is a F w −minimum minimal triangulation or F w −zero minimal triangulation, respectively. Note that if a F w −zero triangulation exists, it must be a F w −minimum triangulation. Additionally, because F w is non-negative, there is always a minimal triangulation that is a F w −minimum triangulation. Proof. The lemma follows from Theorem 1 and Observation 1.
The following two lemmas, which follow from results in [7,15], will be helpful for proving Theorem 4.
Lemma 2.
Suppose C is a set of characters and C ⊆ C is compatible. Then there is a minimal triangulation of int(C) and C is a subset of its displayed characters.
Lemma 3. Suppose C is a set of characters and H is a triangulation of int(C).
Then the displayed characters of H are a compatible subset of C.
(Proof of Theorem 4) Let C * be an optimal solution to the w−maximum compatibility problem. By Lemma 2, there is a minimal triangulation H * of int(C) that has at least C * as its displayed characters. Displayed character sets are compatible by Lemma 3, so by positivity of w and optimality of C * , the displayed characters of H * are exactly C * . If H is another minimal triangulation of int(C) with displayed character set C(H ), then C(H ) is compatible by Lemma 3, so w(C(H )) ≤ w(C * ) by optimality of C * .
For the converse, let H be a minimal triangulation of int(C) with displayed characters C(H), and suppose w(C(H)) is greater than the weight of the displayed characters of any other minimal triangulation of int(C). Then w(C * ) ≤ w(C(H)) because C * are the displayed characters of H * . By Lemma 3 the set C(H) is compatible, so w(C * ) = w(C(H)) by optimality of C * . Therefore C(H) is an optimal solution. Definition 2. Let C be a set of characters on X that are weighted by w. Then the fill weight F w of int(C) induced by w is Proof. For each χ in C there is exactly one potential fill edge uv of int(C) such that u and v are monochromatic with shared character χ because χ has two states. In particular, if χ = A|A then u = (A, χ) and v = (A , χ). Hence there is a one-to-one correspondence between characters in C and potential fill edges stemming from monochromatic pairs of vertices of int(C). Further, each monochromatic pair of vertices is either a fill edge of H, or it corresponds to a displayed character of H. Any other potential fill edge u v of int(C) that does not arise in this way is not monochromatic, and in this case F w (u v ) = 0. Letting X be the set of monochromatic potential fill edges of int(C), we have Theorem 8. Let C be a collection of two-state characters weighted by w. Then C * is a w−maximum compatible subset of C if and only if there is a F w -minimum minimal triangulation H * of int(C) that has C * as its displayed characters.
Proof. Suppose that C * is a w−maximum compatible subset of C. By Theorem 4, there is a minimal triangulation H * of int(C) that has C * as its displayed characters. For the sake of contradiction suppose H * is not a F w −minimum minimal triangulation, so there is a triangulation H of int(C) such that F w (H) < F w (H * ). Letting C(H) be the displayed characters of H, by Lemma 4 we have w(C) − w(C(H)) < w(C) − w(C * ) and therefore w(C * ) < w(C(H)). This contradicts the optimality of C * , so H * must be a F w −minimum minimal triangulation. Now let H be a F w −minimum minimal triangulation of int(C) with displayed characters C(H ). Then F w (H ) = F w (H * ) by F w −minimization, and w(C) − w(C(H )) = w(C) − w(C * ) by Lemma 4 so w(C(H )) = w(C * ). The set C(H) is compatible by Lemma 3, so C(H) is an optimal solution.
The weighted maximum compatibility problem can be used to solve the maximum compatibility problem by using the character weight where each χ ∈ C has weight one. This character weighting induces the fill weight I C , giving the following corollary.
Corollary 1. Let C be a collection of two-state characters. Then C * is a maximum compatible subset of C if and only if there is a I C -minimum minimal triangulation H * of int(C) that has C * as its displayed characters.
We conclude this section by characterizing solutions to unique perfect phylogeny. Let G = (V, E) be an undirected graph and S ⊆ V . We will use G − S to denote the graph obtained from G by removing the vertices S and edges that are incident to a vertex in S. If x, y are connected vertices in G but disconnected in G − S, then S is an xy−separator. When no proper subset of S is also an xy−separator, then S is a minimal xy−separator 3 . If there is at least one pair of vertices x and y such that S is a minimal xy−separator, then it is a minimal separator of G. The set of minimal separators of G is denoted by ∆ G . Suppose Φ is a subset of G's minimal separators. The graph G Φ is obtained from G by adding the fill edge uv whenever uv ∈ pf(S) for some S in Φ, and we say G is obtained by saturating each minimal separator in Φ. The following fundamental result characterizes the minimal triangulations of a graph in terms of its minimal separators.
Theorem 9. [22,23] see also [19] Let G a graph and ∆ G its minimal separators. If H is a minimal triangulation of G, then ∆ H is a maximal pairwise-parallel set of minimal separators of G and H = G ∆ H . Conversely, if Φ is any maximal pairwise-parallel set of minimal separators of G, then G Φ is a minimal triangulation of G and ∆ G Φ = Φ.
An important observation from this theorem is that if H is a minimal triangulation of G, then ∆ H ⊆ ∆ G . Let C be a set of characters and F w be a fill weight on int(C). We will use ∆ min Fw to denote the set of minimal separators S of int(C) such that there is a F w −minimum minimal triangulation H of int(C) with S ∈ ∆ H . Theorem 10. Suppose C is a collection of characters on X. Then int(C) has a unique proper minimal triangulation if and only if 1. int(C) has a I C −zero minimal triangulation; and 2. ∆ min I C is a maximal set of pairwise-parallel minimal separators of int(C).
Proof. Suppose int(C) has a unique proper minimal triangulation H * . By Observation 1 it is a I C −zero minimal triangulation of int(C), and each minimal separator of H * is a minimal separator of int(C) by Theorem 9, so ∆ H * ⊆ ∆ min I C .
Alternatively, if S ∈ ∆ min I C , then S is a minimal separator of a I C −zero minimal triangulation of int(C). This minimal triangulation is proper by Observation 1, so S ∈ ∆ H * by uniqueness. Therefore ∆ min I C = ∆ H * , and ∆ min I C is a maximal pairwise-parallel set of minimal separators of int(C) by Theorem 9.
To prove the converse, suppose that int(C) has a I C −zero minimal triangulation, and ∆ min I C is a maximal set of pairwise-parallel minimal separators of int(C). By Theorem 9, the graph H obtained from int(C) by saturating each minimal separator in ∆ min I C is a minimal triangulation of int(C), and further, for each fill edge uv of H, there is a S ∈ ∆ min I C such that u, v ∈ S . By definition there is some I C −zero minimal triangulation that has S as a minimal separator. This triangulation has uv as a fill edge by Theorem 9 so I C (uv) = 0. Therefore H is an I C −zero minimal triangulation of int(C), and by Observation 1, H is a proper minimal triangulation of int(C). Now let H be any proper minimal triangulation of int(C). By Observation 1, H is an I C −zero minimal triangulation of int(C), so ∆ H ⊆ ∆ min I C . We assumed ∆ min I C is pairwise-parallel, and ∆ H is maximal with respect to being pairwiseparallel by Theorem 9, so ∆ H = ∆ min I C . Thus both H and H are obtained from int(C) by saturating each minimal separator of ∆ H = ∆ min I C , so H = H. Therefore H is the unique proper minimal triangulation of int(C).
Finding weighted minimum triangulations
In this section we show that, given a fill weight F w for G, both mfi Fw (G) and ∆ min Fw can be computed in O(|X||C| 2 + (r|C|) 4 |∆ int(C) | 2 ) time. After that, we present proofs of our algorithmic results.
Given a graph G and X ⊆ V , a set C ⊆ V − X is a connected component of G − X if it is connected in G − X and it is maximal with respect to this property. A block of a graph G is a pair (S, C) where S ∈ ∆ G and C is a connected component of G − S, and it is full or full with respect to S if every vertex of S has at least one neighboring vertex that is in C (we write N (C) = S). The realization of a block (S, C) is the graph R(S, C) with vertex set S ∪ C, and for any u and v in S ∪ C, uv is an edge of R(S, C) if either uv is an edge of G or uv ∈ pf(S).
Kloks, Kratsch, and Spinrad [19] showed that the minimal triangulations of G that have S ∈ ∆ G as a minimal separator (i.e. S is saturated to obtain the minimal triangulation) can be obtained by independently minimally triangulating R(S, C) for each connected component C of G − S. They used this fact to relate minimum fill to the realizations of the blocks of a minimal separator, an important first consideration for computing minimum fill using potential maximal cliques and minimal separators. We extend this fact to weighted-minimum fill with the following lemma, whose proof follows with a slight modification of the proof of Theorem 3.4 in [19], so we omit it.
Lemma 5. Let G be a non-complete graph and F w be a fill weight on G. Then where the sum occurs over the connected components C of G − S and It turns out that non-full blocks with respect to S ∈ ∆ G are full blocks with respect to a different minimal separator of G. They also allow us to compute mfi Fw (R(S, C)), which is a useful fact for later when we restrict our attention to full blocks of G. This gives us the following, an extension of Corollary 4.5 in [8].
Corollary 2. Let G be a graph, S ∈ ∆ G , and C be a connected component of G − S. If N (C) = S ⊂ S, then mfi Fw (R(S, C)) = mfi Fw (R(S , C)) for any fill weight F w .
In order to compute mfi Fw (R(S, C)), we need the notion of a potential maximal clique. Let G be a graph and K be a subset of its vertices. Then K is a potential maximal clique of G if there is a minimal triangulation H of G and K is a maximal clique of H. That is, every pair of vertices in K are adjacent in H, and no proper superset of K has this property. The set of potential maximal cliques of G is denoted by Π G . The next two lemmas describe the interplay between potential maximal cliques, minimal separators, and blocks. Therefore if K ∈ Π G and C 1 , C 2 , . . . , C k are the connected components of G − K, each (S i , C i ) where N (C i ) = S i is a full block of G (i.e. S i ∈ ∆ G ). These blocks are called the blocks associated to K. 1. there is a potential maximal clique K of G such that S ⊂ K ⊆ (S, C); and 2. letting (S i , C i ) for 1 ≤ i ≤ p be the blocks associated to K such that The following lemma is an extension of Corollary 4.8 in [8]. For completeness, we provide a proof. Lemma 9. Let (S,C) be a full block of G and F w be a fill weight on G. Then mfi Fw (R(S, C)) = min where the minimum is taken over all K ∈ Π G such that S ⊂ K ⊆ (S, C), and (S i , C i ) are the blocks associated to K in G such that S i ∪ C i ⊂ S ∪ C.
Proof. Let H(S, C) be a triangulation of R(S, C) such that mfi Fw (R(S, C)) = F w (H(S, C)). Without loss of generality, we may assume H(S, C) is a minimal triangulation of R(S, C) because F w is non-negative. By Lemma 8, there is a potential maximal clique K such that S ⊂ K ⊆ (S, C) with blocks (S i , C i ) associated to K such that S i ∪ C i ⊂ S ∪ C for 1 ≤ i ≤ p. Further, the fill edges of H(S, C) are disjointly obtained from the fill edges of H i for 1 ≤ i ≤ p and pf(K) − pf(S) (because S is already saturated in R(S, C)). Therefore mfi Fw (R(S, C)) = F w (H(S, C)) = fill Fw (K) − fill Fw (S) + p i=1 F w (H i ). Now, for a given 1 ≤ k ≤ p, suppose for the sake of contradiction that H k is not a F w −minimum fill of R(S k , C k ). Then there is a minimal triangulation H k of R(S k , C k ) such that F w (H k ) < F w (H k ). Further, the graph H (S, C) with vertex set (S, C) and edge set E(H(S, C)) − E(H k ) ∪ E(H k ) is a minimal triangulation of R(S, C) by Lemma 8, and it has a weighted fill of (H(S, C)). This contradicts the F w −minimality of H(S, C), so it must be that F w (H k ) = mfi Fw (R(S k , C k )), and therefore mfi Fw (R(S, C)) = fill Letting LHS and RHS denote the left-hand side and right-hand side of equation (1), respectively, we have shown that LHS ≥ RHS. Now suppose K * ∈ Π G such that S ⊂ K ⊆ (S, C) and where (S * i , C * i ) are the blocks associated to K * in R(S, C) for 1 ≤ i ≤ p * . For 1 ≤ i ≤ p * , let H * i be a minimal triangulation of R(S * i , C * i ) such that F w (H * i ) = mfi Fw (R(S * i , C * i )). By Lemma 8, there is a minimal triangulation H * (S, C) of R(S, C) obtained by adding the fill edges pf(K * ) − pf(S) and E(H * i ) − E(R(S * i , C * i )) for 1 ≤ i ≤ p * , and hence F w (H * (S, C)) = fill Fw (K * ) − fill Fw (S) + mfi Fw (R(S * i , C * i )). Now, mfi Fw (R(S, C)) ≤ F w (H * (S, C)) by definition, so LHS ≤ RHS and therefore LHS = RHS.
Theorem 11. Let C be a set of partial characters on X with at most r parts per character, and F w be a fill weight on int(C). There is an O(|X||C| 2 +(r|C|) 4 |∆ int(C) | 2 ) algorithm that computes mfi Fw (int(C)) and ∆ min Fw .
Proof. Our approach is described in Algorithm 1. Constructing int(C) can be done in O((|X| + r 2 )|C| 2 ) time as follows. There are at most r|C| vertices of int(C), one per part of each character. Recall that a pair of vertices (A, χ) and (A , χ ) of int(C) form an edge if and only if there is some a ∈ A ∩ A . For each a ∈ X, let C(a) be the vertices of int(C) whose cell contains a. These sets are line of the algorithm takes O(|∆ int(C) |) time. Aside from the O(|X||C| 2 ) term, the bottleneck of the algorithm is the first nested for loop and calculating Π int(C) , so the entire algorithm runs in O(|X||C| 2 + (r|C|) 4 |∆ int(C) | 2 ) time.
Proof of Theorem 5. By Theorem 8, it suffices to compute the F w −minimum / I C -minimum fill of int(C). Each character has only two states, so this takes O(|X||C| 2 + |C| 4 |∆ int(C) | 2 ) time by Theorem 11.
Proof of Theorem 7. By Theorem 10, it suffices to determine if ∆ min I C is a pairwise-parallel set of minimal separators. Computing ∆ min I C takes O(|X||C| 2 + (r|C|) 4 |∆ int(C) | 2 ) time by Theorem 11. To determine if S and S ∈ ∆ min I C are parallel, we compute the connected components of G−S in linear time, and then count the number of connected components that have a vertex from S . S and S are parallel if and only if this count is one. This takes at most O((r|C|) 2 |∆ min I C | 2 ) time, and ∆ min I C ⊆ ∆ int(C) giving a total of O(|X||C| 2 +(r|C|) 4 |∆ int(C) | 2 ) time.
Discussion
An immediate question is whether or not Theorem 5 can be extended to the case where r is unbounded. This does not seem possible to do for the following reason. Let S be a minimal separator that is a clique in a minimal triangulation of int(C) that is an optimal solution to the maximum compatibility problem, and C 1 , C 2 , . . . , C k be the connected components of int(C) − S. Then the minimal triangulations of R(S, C i ) for 1 ≤ i ≤ k are dependent with respect to a fill weight F w , unlike the two-state case, as illustrated in Figure 2. It is possible to construct similar examples for unweighted characters. Hence any sort of separator-based approach for a given optimization function seems to require a decomposition property similar to that of Lemma 5. | 2013-03-15T17:16:47.000Z | 2013-03-15T00:00:00.000 | {
"year": 2013,
"sha1": "718e175312faf36b8f99e9bf4c8863d7dd1ee779",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "718e175312faf36b8f99e9bf4c8863d7dd1ee779",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": [
"Mathematics",
"Computer Science"
]
} |
136237640 | pes2o/s2orc | v3-fos-license | Tissue-Mimicking Phantom Useful in Simulating Laser Light-Tissue Interactions
Significant methodological advancements in preclinical experiments using Schlieren imaging, in vitro cell lines, ex vivo tissues, human cadaver tissues, and tissue-mimicking (TM) phantoms have been tremendously helpful in outlining actual skin reactions to laser or light energy. Polyacrylamide hydrogel TM phantom containing bovine serum albumin, with some modifications, depending on the study design, can be used to simulate laseror light-induced tissue reactions. Although experiments in TM phantoms do not mirror in vivo laser light-induced skin reactions exactly, mainly due to the lack of cellular components and adnexal structures, TM phantoms are useful in determining which devices and settings offer better treatment results and safer delivery of laser energy to experimental and clinical subjects.
Periscope
Laser-or light-based therapeutic devices are continually introduced for the treatment of dermatologic disorders. However, experimental results of laser light-tissue interactions achieved when using such devices are limited by the lack of standardized guidelines and regulations, as well as ethical problems with the use of animal and human test subjects. Recently, significant methodological advancements in schlieren imaging, in vitro cell lines, ex vivo tissues, human cadaver tissues, and tissue-mimicking (TM) phantoms have helped with visualizing actual skin reactions to laser or light energy. Although these methods are not without limitations, preclinical study data can help practitioners and investigators better predict macro-and microscopic tissue reactions for various laser-or light-based therapeutic devices.
For example, our study group recently reported the geometric patterns of thermal injury zone (TIZ) formation upon high-intensity focused ultrasound (HIFU) treatment of TM phantom composed of bovine serum albumin and polyacrylamide hydrogel. 1 In our previous study of HIFUtissue interactions, 9% (w/v) bovine serum albumin was added as a temperature-sensitive indicator to the standard TM phantom. 1 In the study, focused acoustic energy from five different HIFU devices was delivered to the TM phantom at focal penetration depths of 3 mm and 4.5 mm and at low power settings of 0.45-1.2 J. 1 Then, digital photographs were taken to measure the mean heights and widths of the TIZs using image processing software. 1 In the study, HIFU-induced TIZs and the patterns of thermal injury varied for each device. 1 The standard TM phantom is made from an optically transparent polyacrylamide hydrogel and bovine serum albumin, which works as a temperature-sensitive indicator. 2 Additionally, by adding higher acrylamide concentrations, the attenuation coefficients can be heightened, compared to biologic tissues or standard TM phatom. 2 For specific experimental purposes, a suspension of glass beads can be added to obtain a backscatter coefficient: Choi et al. 2 proposed that the use of a bovine serum albumin and polyacrylamide hydrogel-based TM phantom containing glass beads more accurately predicts HIFUinduced tissue reactions, as it more closely mimics the acoustics of human and animal tissue.
Recently, we have conducted experiments, the results of which have yet to be published, to simulate the effects of Q-switched and long-pulsed lasers under various settings on tattoo ink embedded in TM phantom. To do so, we added 5% (w/v) bovine serum albumin, rather than 3%, 7%, and 9%, to the standard TM phantom for better visualization of the photoacoustic effects of the lasers on the tattoo ink and surrounding TM phantom. To develop the tattoo pigment-TM phantom, a 30-gauge, 2.5-mm long needle attached to a 1-ml syringe was utilized to inject approximately 0.02 ml of tattoo ink into the TM phantom. In this experimental setting, treatment of the ink particles with a Q-switched 1,064-nm neodymium-doped yttrium aluminum garnet (Nd:YAG) laser or long-pulsed 755-nm alexandrite laser resulted in various photoacoustic tissue reactions (Fig. 1A). Among the many laser-tattoo interactions, tissue vacuolization was readily noted, which is difficult to observe in in vivo tissue samples due to the rapid dissolution of plasma or gas into the surrounding tissue (data not published).
Maxwell et al. 3 demonstrated that HIFU-induced cavitation can be visualized using agarose hydrogel TM phantom supplemented with red blood cells (RBCs). In their study, RBC-agarose TM phantom was prepared by solidifying agarose phantom, pouring RBCs onto the agarose phantom, allowing it to solidify again, and finally, pouring agarose solution onto the RBCs-suspended agarose phantom. 3 Additionally, in order to simulate interactions between laser energy and hair shafts, our study group also developed a hair shaft-embedded bovine serum al- bumin and polyacrylamide hydrogel-based TM phantom.
A B
To do so, TM phantom was prepared by mixing polyacrylamide hydrogel and 5% (w/v) bovine serum albumin in distilled water. After degassing and polymerization, the final mixture was poured into a rectangular polycarbonate housing, and plucked hairs were inserted in the mixed solution before it solidified. On the hair shaft-embedded TM phantom, Q-switched 1,064-nm Nd:YAG or longpulsed 755-nm alexandrite laser treatment was applied and resulted in various photoacoustic tissue reactions on the hair shaft and surrounding tissues (Fig. 1B).
Due to the lack of cellular components and adnexal structures, TM phantoms do not exactly mirror in vivo laser light-induced skin reactions. However, we believe that TM phantoms are useful in evaluating which laser devices and settings offer the best treatment results and facilitate the safest delivery of laser energy to experimental and clinical subjects. Further investigations should be conducted to establish standardized guidelines for using schlieren imaging, in vitro cell lines, ex vivo tissues, human cadaver tissues, and TM phantoms in preclinical evaluations of newly developed laser-or light-based devices. | 2019-04-29T13:17:30.782Z | 2015-12-30T00:00:00.000 | {
"year": 2015,
"sha1": "cc9652c3ed55b290a652f11908e34fdbd14034e2",
"oa_license": "CCBYNC",
"oa_url": "http://www.jkslms.or.kr/journal/download_pdf.php?doi=10.25289/ML.2015.4.2.86",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "c9638136a8119c54caa05300a0cd72f73b35755d",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Materials Science"
]
} |
214771973 | pes2o/s2orc | v3-fos-license | Proteomic Analysis of Renal Biomarkers of Kidney Allograft Fibrosis-A Study in Renal Transplant Patients.
Renal transplantation is the preferred treatment of end stage renal disease, but allograft survival is limited by the development of interstitial fibrosis and tubular atrophy in response to various stimuli. Much effort has been put into identifying new protein markers of fibrosis to support the diagnosis. In the present work, we performed an in-depth quantitative proteomics analysis of allograft biopsies from 31 prevalent renal transplant patients and correlated the quantified proteins with the volume fraction of fibrosis as determined by a morphometric method. Linear regression analysis identified four proteins that were highly associated with the degree of interstitial fibrosis, namely Coagulation Factor XIII A chain (estimate 18.7, adjusted p < 0.03), Uridine Phosphorylase 1 (estimate 19.4, adjusted p < 0.001), Actin-related protein 2/3 subunit 2 (estimate 34.2, adjusted p < 0.05) and Cytochrome C Oxidase Assembly Factor 6 homolog (estimate -44.9, adjusted p < 0.002), even after multiple testing. Proteins that were negatively associated with fibrosis (p < 0.005) were primarily related to normal metabolic processes and respiration, whereas proteins that were positively associated with fibrosis (p < 0.005) were involved in catabolic processes, cytoskeleton organization and the immune response. The identified proteins may be candidates for further validation with regards to renal fibrosis. The results support the notion that cytoskeleton organization and immune responses are prevalent processes in renal allograft fibrosis.
Introduction
Kidney transplantation is the preferred treatment of end stage renal disease, reducing mortality when compared to any type of dialysis [1], and increasing quality of life [2]. The introduction of modern induction therapy, as well as highly effective immunosuppressive regimens, has reduced graft loss due to acute rejection [3]. Over time, however, kidney graft function inevitably deteriorates. Improving long-term graft survival remains a key issue in renal transplantation. The histological finding of chronic allograft lesions with no known etiology is referred to as IFTA (interstitial fibrosis and tubular atrophy) [4]. IFTA is often accompanied by deteriorating renal function, and the presence of IFTA predicts an adverse renal outcome [5]. The development of kidney fibrosis is a multifactorial process including inflammation and ischemia, which ultimately leads to the deposition of extracellular matrix proteins [6].
The causes of chronic allograft failure are diverse and might include acute or chronic rejection, recurrent disease, drug toxicity or infection [7]. Irrespective of the primary insult, IFTA is characterized by largely irreversible damage. Hence, IFTA is important to distinguish from reversible conditions that need specific interventions. Also, insight into the pathogenesis of IFTA might provide the opportunity for early, preventative therapy and guide the development of pharmacological interventions.
Currently, the diagnosis of IFTA is made by kidney biopsy. Although a minor procedure, a biopsy introduces a risk of a gross hematuria of 3%-4%, and a risk of a perirenal hematoma of 2%-4% [8], which necessitates post-procedural observation. Furthermore, there is a risk of sampling variability [9]. Much effort has been devoted to identifying biomarkers to guide diagnostics in acute or chronic renal failure, but so far, no biomarkers are in routine clinical use. In tissue, proteins are prevalent in both the intracellular and extracellular compartments, and have diverse functions, serving as structural components and as mediators of a wide range of biological functions (i.e., as transcription factors, hormones, antibodies and enzymes) [10]. The emergence of large-scale proteomic approaches provides a unique opportunity to gain insight into proteins involved in specific diseases. The use of proteomics is, however, critically dependent on interpreting results in context to make biological sense of the large amounts of information provided [11].
Only a few studies have investigated the proteome in chronic allograft dysfunction. Early discovery-driven proteogenomic analyses of biopsies with varying degrees of IFTA revealed several differences in the proteome between fibrotic and non-fibrotic kidneys [12]. The objective of the current study was to identify proteins that correlate with the degree of fibrosis in renal biopsies from stable kidney transplant patients. The purpose was to gain insight into the mechanisms of fibrogenesis and to identify biomarker candidates to guide the diagnosis of IFTA. To address this, a set of biopsies from 31 prevalent renal transplant patients were evaluated by explorative in-depth proteomics based on nano-liquid chromatography combined with orbitrap mass spectrometry analysis and 10-plex tandem mass tags.
Results
Biopsies from 31 individuals were examined and their clinical characteristics are summarized in Table 1. Twenty two (71%) participants were male and the median age was 60 (ranging from 24-73) years. The median time since transplantation was 3 years, but ranged from 0.3 to 12.8 years. All patients were treated with a calcineurin inhibitor. The most prevalent immunosuppressive regimen was tacrolimus in combination with mycophenolate mofetil, but four subjects (13%) were additionally treated with a small dosage of prednisolone. The majority of the included subjects had hypertension (94%). Two patients (6%) had diabetes at the time of inclusion. Three patients had a previous rejection occurring between 2 and 3.5 years prior to inclusion in the study. In all cases, the rejection episode was treated with prednisolone, and renal function returned to pre-rejection levels. The fibrosis quantifications in these subjects at inclusion were below average, at 28%, 29% and 32%, respectively.
Characterization of Fibrosis in Kidney Allograft Biopsies
The distribution of interstitial fibrosis was evaluated by point counting (Figure 1a) using the Banff Lesion Score method, which relies on the assessment of the presence and degree of different histopathological changes in different compartments of the renal biopsy, such as fibrosis, interstitial inflammation, and mild tubulitis, important for the diagnosis of graft rejection. The median number of points counted per biopsy was 284 (IQR 252-329). The median extent of fibrosis, estimated as the volume fraction, was 33% (IQR 30%-41%). Interstitial inflammation was present in 16 biopsies (n = 12 (29%) for Banff I1 score, n = 3 (10%) for Banff I2 score and n = 1 (3%) for Banff I3 score). Mild tubulitis (Banff T1 score) was present in three biopsies (10%) (see Supplementary Table S1 for a complete summary of the Banff scores). No biopsies gave the suspicion of acute or chronic rejection. The association between renal function and the extent of fibrosis is shown in Figure 1b. The variation in the extent of fibrosis increased with decreasing renal function. There was a weak, but not significant, correlation between renal function and the volume fraction of interstitial fibrosis The association between renal function and the extent of fibrosis is shown in Figure 1b. The variation in the extent of fibrosis increased with decreasing renal function. There was a weak, but not significant, correlation between renal function and the volume fraction of interstitial fibrosis (Spearman's = −0.23, p = 0.21).
Kidney Allograft Proteomics
Nano-LC-MSMS analysis analyzed 4717 proteins across the analyzed biopsies, whereof 1973 proteins were quantified in all patient biopsies (Supplementary Table S2). Having missing values is a well-known phenomenon in quantitative mass spectrometry-based proteomics when using data-dependent acquisition, and we found that the number of missing values per quantified protein acceptable for downstream statistical analyses was 11, leaving a dataset of 2687 proteins analyzed in 31 kidney biopsies for further evaluation.
Proteins Correlated with the Degree of Fibrosis
The multiple linear regression identified 26 proteins with an adjusted p-value of less than 0.5 (Table 2). Interestingly, four proteins were highly significantly associated with the degree of fibrosis after correction for multiple testing. Of these, three were positively associated: Coagulation Factor XIII A chain (estimate = 18.7, adjusted p < 0.03, Figure 2a), Uridine Phosphorylase 1 (estimate = 19.4, adjusted p < 0.001, Figure 2d) and Actin-related protein 2/3 subunit 2 (estimate = 34.2, adjusted p < 0.05, Figure 2b). Cytochrome C Oxidase Assembly Factor 6 homolog was negatively associated (estimate = −44.9, adjusted p < 0.002, Figure 2c). The multiple linear regression identified 26 proteins with an adjusted p-value of less than 0.5 (Table 2). Interestingly, four proteins were highly significantly associated with the degree of fibrosis after correction for multiple testing. Of these, three were positively associated: Coagulation Factor XIII A chain (estimate = 18.7, adjusted p < 0.03, Figure 2a), Uridine Phosphorylase 1 (estimate = 19.4, adjusted p < 0.001, Figure 2d) and Actin-related protein 2/3 subunit 2 (estimate = 34.2, adjusted p < 0.05, Figure 2b). Cytochrome C Oxidase Assembly Factor 6 homolog was negatively associated (estimate = -44.9, adjusted p < 0.002, Figure 2c). Then we performed a LASSO analysis in order to identify the most predictive proteins for fibrosis (Supplementary Table S3). The nine proteins selected by the LASSO in at least 10% of the subsamples are listed in Table 3. Interestingly, Cytochrome C Oxidase Assembly Factor 6 homolog (selection probability 56.7%), Coagulation Factor XIII A chain (selection probability 21.7%) and Actin-related protein 2/3 subunit 2 (selection probability 10.7%), which all strongly correlated with the degree of fibroses, were also highly predictive of fibrosis. The fourth protein that strongly correlated with fibrosis, Uridine Phosphorylase 1, was not identified in the analysis, most probably because this protein was not detected in all biopsies, and only proteins with no missing values across patients were included in the LASSO analysis.
In order to identify molecular pathways and biological processes, we evaluated 133 proteins positively correlated with fibrosis (unadjusted p < 0.005) by Gene Ontology enrichment analysis. This analysis revealed an abundance of proteins mainly related to catabolic processes, cytoskeleton organization and immune responses (Table 4). By contrast, the 76 proteins negatively associated with fibrosis (unadjusted p < 0.005) were mainly involved in normal metabolic processes and respiration (Table 4 and Supplementary Table S4). Table 4. Significantly enriched Gene Ontology categories. Gene ontology (GO) enrichment analysis of proteins that are significantly (p < 0.005) positively and negatively associated with fibrosis as determined by multiple regression analysis (Supplementary Table S4). GO categories were identified using two unranked list options of the GOrilla Gene Ontology (GO) program (http://cbl-gorilla.cs.technion.ac.il/). N is the total number of recognized proteins, B is the total number of recognized proteins associated with a specific GO term, n is the number of recognized proteins in the target set, and b is the number of proteins in the intersection. Enrichment = (b/n)/(B/N). The p-Value is the enrichment p-value computed according to the mHG or HG model. This p-value is not corrected for the multiple testing of 2893 GO terms, and the FDR q-value is the correction of the above p-value for multiple testing using the Benjamini and Hochberg method.
GO Term
Description Categories positively associated with fibrosis
Discussion
The current study investigated kidney graft biopsies from patients with creatinine clearance ≥ 30 mL/min and varying degrees of interstitial fibrosis by in-depth quantitative proteomic analysis using nano-LC-MSMS combined with 10-plex tandem mass tags (TMT), and the current study constitutes a first step towards the identification of non-invasive biomarkers of renal allograft fibrosis. Our proteomic analysis of renal tissue provides a detailed characterization of the protein composition of renal allografts; however, proteins of very low abundance may not be detected. Despite this, we identified four proteins-Coagulation Factor XIII A chain, Actin-related protein 2/3 subunit 2, Cytochrome C Oxidase Assembly Factor 6 homolog and Uridine Phosphorylase 1-that, even after multiple testing, showed a strong correlation with renal allograft fibrosis. Moreover, three of these proteins were also shown to be strongly predictive for fibrosis.
Coagulation factor XIII (FXIII) A chain was positively correlated with the degree of fibrosis and was also chosen by the LASSO in 21.7% of cases. FXIII is synthesized in cells of bone-marrow origin (i.e., thrombocytes and monocytes) [13]. Circulating FXIII is a tetramer consisting of two A chains and two B chains. The A chain can be activated by thrombin and calcium to become a transglutaminase that cross-links and stabilizes fibrin, thereby catalyzing the final steps in the coagulation cascade [14]. FXIII also cross-links proteins of the extracellular matrix [15], increases the proliferation of monocytes, and further increases the migration and decreases the apoptosis of both monocytes and fibroblasts [16]. While the recruitment and longevity of fibroblasts and the cross-linking of ECM proteins are beneficial in the context of wound healing, they constitute core processes in the progression of tissue fibrosis. Thus, it is feasible that the overexpression of FXIII might serve to promote fibrogenesis through increased fibroblast activity and inflammation, and reduced ECM degradation.
Actin-related protein 2/3 (ARP2/3) subunit 2 was similarly strongly associated with fibrosis. ARP2/3 is a protein complex involved in actin cytoskeleton organization and has been associated with a range of cell functions, primarily relating to cell motility [17]. Of possible interest in relation to fibrosis is the observation that disrupting actin cytoskeleton assembly via ARP2/3 in fibroblasts reduces fibroblast motility in an in vitro model of wound healing, presumably by preventing the polarization of the Golgi apparatus [18]. A high renal abundance of ARP2/3 might be an indicator of ongoing inflammation or increased fibroblast motility.
Cytochrome C oxidase is located in the mitochondrial membrane and plays a major role in cell respiration and ATP synthesis in eukaryotic cells [19]. Cytochrome C oxidase assembly factor, also located in the mitochondrion, contributes to the correct assembly and function of Cytochrome C oxidase [20]. Cytochrome C oxidase assembly factor was negatively correlated to fibrosis and chosen by the LASSO in 56.7% of cases, which indicates that it is a strong predictor of fibrosis. This observation may reflect a lack of healthy cells in the fibrotic kidneys. A proteomic study of CsA toxicity to human proximal tubular cells in vitro found a reduction of proteins from the inner mitochondrial membrane associated with CsA treatment [21]. One obvious difference to the current study, however, was the use of cells rather than renal biopsies. The proteomic analysis of biopsy specimens includes all parts of the nephron, as well as invading cells and fibrotic areas, which constitute a more heterogeneous sample than a single cell line.
Uridine phosphorylase catalyzes the reversible conversion of uridine to uracil, which is an important step in the pyrimidine salvage pathway [22]. We found a positive correlation between this protein and the extent of fibrosis. The previously mentioned in vitro study of human proximal tubule cells treated with CsA found an upregulation of proteins associated with purine salvage in relation to CsA treatment. The authors hypothesized that respiratory chain dysfunction led the cells to use this pathway rather than the more energy-consuming de novo synthesis [21]. Uridine phosphorylase is present in most human cells and can be induced by inflammatory cytokines in a number of tumor cell lines [23]. It has previously been shown to co-localize with intermediate filament vimentin in fibroblasts and colon cells; however, the significance of this co-localization is unknown [22].
Functional annotation of the proteins most significantly associated with fibrosis and eGFR revealed a marked difference between those positively and those negatively associated, i.e., proteins positively associated with fibrosis were negatively associated with eGFR and vice versa, indicating that kidney function and fibrosis are linked processes at the proteome microenvironment level (Table 4 and Supplementary Table S5). The observation that proteins involved in cytoskeleton organization and immune responses were abundant in fibrotic tissue has been reported in previous studies. Nakorchevsky et al. performed a proteomic analysis of renal allograft biopsies divided into three groups: no IFTA, mild IFTA or moderate to severe IFTA, and subsequently compared protein abundances between the groups. The authors identified 492 proteins uniquely expressed across the groups, and a further 904 proteins differentially expressed between the groups. Functional annotation revealed that proteins related to acute phase responses, actin cytoskeleton signaling and complement activation were particularly related to severe IFTA, which points towards immunologic factors playing a role in the progression of IFTA [12]. Two of the proteins identified in the current study, Coagulation factor XIII and Actin-related protein 2/3, belong to the same functional annotations identified by Nakorchevsky to be of importance in the development of IFTA.
In a proteomic analysis of renal allografts from rats, Reuter et al. identified ten proteins that were differentially regulated in allogeneic transplants compared to in syngeneic transplants. The authors pointed towards imbalances of energy homeostasis and oxidative stress as causal explanations for the identified proteins [24]. None of the identified proteins were found to correlate with the degree of fibrosis in the current study. These differences may be due to differences between the species, but additionally, the renal changes associated with the allogeneic rat transplantation model resemble chronic rejection [25] and thus may reflect more immune-driven processes than the present biopsies from stable renal transplant patients. Moreover, a study by Späth et al. confirmed our findings on the involvement of proteins related to inflammatory and fibrotic responses in kidney function and fibrosis [26].
Traditional evaluation of fibrosis is performed by semi-quantitative Banff-scoring in which the pathologist grades the extent of fibrosis as CI1 (<25%), CI2 (26%-50%) or CI3 (>50%) [27]. Since this scale is non-linear and allows for large variations within each category, a morphometric method (point counting) was chosen to quantify the extent of fibrosis in the current study. A previous study of point counting in renal allograft biopsies found a coefficient of variability of 7% with repeated measurements and established point counting as a reproducible way of quantifying interstitial fibrosis [28]. In renal allograft biopsies obtained 6 months post-transplantation, interstitial fibrosis evaluated by point counting was inversely correlated with renal function and predicted allograft survival [29]; however, no significant correlation was found in the current study. This is likely due to the limited sample size. Furthermore, sampling error may have influenced the results in some cases. Previous studies have similarly noted that there is not a strictly monotonous relationship between chronic allograft damage and renal function [5].
The proteome analysis does not discriminate between the renal compartments nor distinguish resident cells from invading cells. The morphometric evaluation of interstitial fibrosis aimed to provide an objective and continuous measure of the extent of fibrosis. The decision to include perivascular fibrosis served to minimize the bias introduced by subjective evaluation, but may have caused the overestimation of the extent of fibrosis, and thus weakened the correlation to renal function.
We are aware that the plethora of information yielded from the proteomic analysis poses the risk of giving false positive results. Moreover, it is well-known that the quantitative proteomic approach based on relative quantification using 10-plex TMT isobaric tags applied in the present study suffers from dynamic range compression that affects the linearity of the signal, which may also affect biological conclusions [30]. This underlines the need to interpret results in a biological context. In the current study, we have, however, performed corrections for multiple testing to keep the risk of type I error low. In summary, we have provided an extensive characterization of the renal proteome in stable renal transplant patients, and the suggestion of four novel biomarkers of renal allograft fibrosis. Our results support the notion that cytoskeleton organization and immune responses are prevalent processes in renal allograft fibrosis. The prognostic value of the identified biomarkers of fibrosis remains to be validated in independent cohorts.
Participants
The study population included 31 patients from the SPIREN trial [31]. In brief, kidney transplant patients, at any time after the transplantation, were included from the outpatient clinic at Odense University Hospital. The eligibility criteria are listed in Table 5. At baseline, we performed a clinical examination, draw blood and urine samples, measured chrome-EDTA clearance, performed ambulatory blood pressure measurements, collected 24-hour urine samples and performed a kidney graft biopsy. The collection of kidney graft biopsies and laboratory variables, fibrosis quantification and ambulatory blood pressure measurements were done as previously described [28]. GFR was determined by chrome-EDTA clearance following the standard procedure, in the Department of Nuclear Medicine, Odense University Hospital, in which a single injection of 51 CrEDTA was given, followed by the taking of a venous blood sample after 240 min to determine residual radioactivity. An additional blood sample 24 h after injection was taken from male patients with p-creatinine ≥200 µmol/L and female patients with p-creatinine ≥150 µmol/L. Ultrasound-guided kidney graft biopsies were performed by trained senior physicians in the Department of Nephrology, kept in formalin and subsequently embedded in paraffin. All biopsies were scored according to the Banff classification by an experienced renal pathologist. The Banff classification is a semi-quantitative score, where each biopsy is given a score between 0 and 3 for acute (tubulitis (T), interstitial mononuclear infiltration (I), glomerulitis (G), vascular (V) and arteriolar hyalinosis (AH)) or chronic changes (chronic glomerulopathy (CG), interstitial fibrosis (CI), tubular atrophy (CT) and chronic vascular changes (CV)) [32]. The morphometric quantification of fibrosis was determined by point counting performed on Masson Trichrome-stained sections using the Cast 2.0 software. In brief, this method estimates the volume fraction of fibrosis by superimposing a grid with 12 intersection points on a computerized image of the biopsy. After manually delineating the renal cortex, the program randomly selects sections of the biopsy for quantification. The volume fraction of fibrosis is calculated by determining the number of intersection points that overlie fibrotic areas relative to the number of intersection points overlying normal renal tissue. In the current study, no exclusion of perivascular fibrosis was performed.
Sample Preparation
Sections of formalin-fixed, paraffin-embedded kidney biopsies were washed three times in chloroform. The proteins were extracted by dissolving the deparaffinized tissue sections in extraction buffer (1M dithiothretiol (DTT), 0.2 M tetraethylammonium bicarbonate (TEAB) and 10% sodium dodecyl sulphate (SDS)) followed by two rounds of ultrasonification (15 min of ultrasonification/15 min of cooling on ice), and incubated at 99 • C for 20 min then at 80 • C for 120 min. Protein alkylation was done by adding a 200 mM iodoacetamide (IAA) solution to a final DTT/IAA concentration ratio of 1:3. The acetone-precipitated proteins were re-dissolved in 5 µL 8 M urea with 1 µg LysC and incubated at 30 • C for 4 h, followed by a further dilution to 1 M urea, the addition of 2 µg trypsin, and an overnight incubation at 30 • C. The resulting tryptic peptides were isotopically labelled using the 10-plex tandem mass tag (TMT, ThermoFisher Scientific, Waltham, MA, USA). Peptide samples were randomly labelled with the 127N, 127C, 128N, 128C, 129N, 129C and 130N mass tags, whereas a pool of all samples were labelled with the mass tag 126 that served as an internal standard. Tagged peptides were mixed into 7 mixed peptide samples that were fractionated using hydrophilic interaction chromatography (HILIC), as described below.
HILIC Fractionation
Briefly, the labelled peptide mixtures were re-dissolved in 90% ACN/0.1% TFA, and 15 µL aliquots corresponding to approximately 25 µg peptides were injected onto an in-house-packed TSK gel Amide-80 HILIC 300 µm × 300 mm capillary high-performance liquid chromatography (HPLC) column and fractionated into 42 fractions by a Dionex UltiMate 3000 nanoHPLC using a linear 59 min gradient (85.5% B to 54% B; solvent A: 0.1%TFA; solvent B: 90% ACN/0.1% TFA) at a flow rate of 6 µL per minute. The fractions were automatically collected in micro-well plates at 1-minute intervals after UV detection at 210 nm. The fractions were dried by vacuum centrifugation, re-dissolved in 10 µL 0.1% TFA and analyzed by nano-LC-MS/MS, as described below.
NanoLC-MS/MS
A Q-Exactive mass spectrometer (Thermo Fisher Scientific, Bremen, Germany) equipped with a nano-HPLC interface (Dionex UltiMate 3000 nano HPLC) was applied for nano-LC-MSMS analyses. The samples (5 µL) were loaded onto a custom-made, fused capillary pre-column (2 cm length, 360 µm OD, 75 µm ID) and separated on a custom-made fused capillary column (20 cm length, 360 µm OD, 100 µm ID-both columns were packed with ReporSil Pur C13 3 µm resin) using a linear gradient from 95% solution A (0.1% formic acid) to 30% B (100% acetonitrile in 0.1% formic acid) over 51 min, followed by 5 min at 90% B and 5 min at 98% A, at a flow rate of 300 nL per minute. The acquisition of mass spectra was done in positive ion mode using an automatic data-dependent switch between an Orbitrap survey MS scan in the mass range from 400 to 1200 m/z, and high-energy collisional dissociation fragmentation (HCD) and Orbitrap detection of the 15 most intense ions observed in the MS scan. The Orbitrap target values for the MS and MSMS scans were 1,000,000 and 50,000 ions, at resolutions of 70,000 and 35,000 at m/z 200. Fragmentation in the HCD cell was performed at the normalized collision energy of peptides and 31 eV for TMT-labelled peptides. The ion selection threshold was set to 17,000 counts. Selected sequenced ions were dynamically excluded for 60 s. The data are available via ProteomeXchange with the identifier PXD017867. Reviewer account details; Username: reviewer23512@ebi.ac.uk; password: AOuDknvS.
Data Analysis
The Sequest search engine and Mascot search Engine (v. 2.2.3) integrated with the Proteome Discoverer (PD) version 2.1 software (Thermo Scientific) were used to search raw data files with the following criteria; Protein database: Uniprot/Swissprot (downloaded 7th November 2012, 452,768 entries) and restricted to humans. Trypsin, one missed cleavage allowed, carbamidomethylation at cysteines, and 10-plex TMT labelling at lysine and N-terminal amines were set as fixed parameters, while methionine oxidation and deamidation were set as dynamic. The precursor mass tolerance was set to 8 PPM and the MSMS tolerance was set to 0.05 Da. The peptide data were extracted using a Mascot significance threshold of 0.05 and a minimum peptide length of 6. The false discovery rate (FDR) was calculated using a decoy database search, and only high-confidence peptide identifications (false discovery rate <1%) were included. The data were normalized by using the "Total peptide amount" setting in the Reporter Ions Quantifier node in PD, i.e., the sum of abundance values for each channel over all peptides identified within a file was calculated, and the all channels were normalized against the channel with the highest total abundance. Search files were further processed using the Proteome Discoverer software. Gene Ontology analysis was performed using the GOrila tool for GO enrichment analysis (http://cbl-gorilla.cs.technion.ac.il/GOrilla/kh37217t/GOResults.html) [33].
Statistical Analysis
Analyses of baseline values were performed using Stata15 (StataCorp, College Station, TX, USA). The data are described by the median (interquartile range). Due to the non-Normal distribution of the fibrosis data, a non-parametric correlation test was used (Spearman's rank correlation). The association of specific proteins with fibrosis was evaluated using the R software. For each protein, we applied a multiple linear regression model for the volume fraction of fibrosis, with sex, age and donor type (deceased/living related) as covariates. The resulting p-values were corrected for multiple testing by the Benjamini-Hochberg procedure. To find a subset of proteins predicting the degree of fibrosis, we used LASSO with stability selection [34]. This method identifies the proteins with the highest predictive value with regards to fibrosis. The LASSO was applied to a linear model where all proteins with no missing values (1973 proteins), sex, age, donor type, and time since transplantation were used as covariates. This was repeated 1000 times on a randomly selected subsample consisting of 16 samples. The probability of selecting each protein was computed.
Ethics
All subjects provided written consent to participate in the study.
Conflicts of Interest:
The authors declare no conflict of interest. | 2020-04-02T09:12:58.593Z | 2020-03-30T00:00:00.000 | {
"year": 2020,
"sha1": "d5c5a23d1c246619f8dc28116355189a27e36878",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/21/7/2371/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "efb7cdf95bd946a6384f833a7fbbbfd92fa796d5",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
157454321 | pes2o/s2orc | v3-fos-license | Watch Your Language : Translating Science-Based Research for Public Consumption
Communications professionals involved with agriculture and natural resources have a stake in developing a scientifically literate public. Much of the terminology used to discuss science-based research issues has its foundation in Latin and Greek. Research has demonstrated that scientific literacy is directly affected by the educational process. However, limited work has been done about the relationship between scientific literacy skills and coursework in the sciences and foreign language. The purpose of this research was to explore this relationship. A descriptive survey was administered to a sample of undergraduate students at a large southeastern university. The survey was designed to assess the respondents’ ability to define a set of scientific terms as a function of the respondents’ educational background in science and foreign language. The results of the study indicated that coursework in sciences at the college level, and a major in a science-related field, were the most significant predictors of the respondents’ ability to accurately define the scientific terms. This suggests that a strong background in science coursework, in addition to the traditional journalism courses, may provide the foundation that allows communicators to translate science-based information to the general public. Creative Commons License This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 4.0 License. This research is available in Journal of Applied Communications: http://newprairiepress.org/jac/vol88/iss1/3
explore this relationship.A descriptive survey was administered to a sample of undergraduate students at a large southeastern university.The survey was designed to assess the respondents' ability to define a set of scientific terms as a function of the respondents' educational background in science and foreign language.The results of the study indicated that coursework in sciences at the college level, and a major in a science-related field, were the most significant predictors of the respondents' ability to accurately define the scientific terms.This suggests that a strong background in science coursework, in addition to the traditional journalism courses, may provide the foundation that allows communicators to translate science-based information to the general public.
"Watch your language" may have an entirely different meaning for communicators who translate science-based research information for use by the general public than it did when Mom issued that warning.For communicators in the agricultural and natural resource disciplines, part of the job is to write for both specialized and general audiences.To promote science literacy and ultimately knowledge, this requires using the appropriate scientific term followed by a definition.
The concept of developing a scientifically literate public has gained momentum in the past decade (Maienschein, 1999).Organizations such as the American Association for the Advancement of Science, the National Academy of Sciences, and the National Science Foundation have all developed interpretations of what being scientifically literate means (National Research Council, 1996).Traditionally, those who communicate about science may have only the most cursory background in science or science communications (Treise and Weigold, 2002).Nelkin (1995) states that after formal schooling, people rely on the "filter of journalistic language and imagery . . .as major sources of information about these implications for their lives" (p.2).Bandura (1994) states that people are "self-reactors with a capacity for self-direction," meaning that they choose what they pay attention to and what they ignore (p.63).Topics that require a lot of decoding are simply abandoned in favor of the familiar (Gregory, 2000).Fiske (1995) acknowledges that some people are "cognitive misers."They do not like to exert a lot of effort to think.To be scientifically literate, one must possess the ability to take newly presented information and interpret it in relationship to what one already knows.To effectively deal with continually changing scientific issues, individuals need to possess at least a basic knowledge of science.
To provide the framework for interpreting scientific information, the American Association for the Advancement of Science states that people need a basic inventory of key science concepts as a basis for learning (1990).These key concepts include the need to grasp the meaning of scientific terminology that goes beyond simply memorizing vocabulary.Systems, such as the one to classify taxonomy by Carolus Linnaeus, were developed to standardize and bring meaning to scientific endeavors (Hagberg, 1952).To a considerable extent, language determines how people see, comprehend, and characterize the world around them (Whorf, 1956).While many cultural linguists believe that domination of any one language in global affairs is a dangerous issue, this dictum does not apply to Latin in its service as the common language of science.Languages such as Latin supply word roots which provide a framework with which individuals can use as decoding skills to go from the known to the unknown (Arsky & Cherny, 1996).
Purpose and Objectives
The purpose of this study was to explore and describe the relationship between the ability to decode (define) scientific terms and the respondents' foundation in science and language curriculum.The terms developed for this study were both product-and process-oriented and represented terminology commonly found in mass communication.
The objectives of the study were to assess the respondents' overall ability to correctly decode and/or define a variety of scientific terms and to explore the relationship between the level of accuracy in defining the terms and the respondents' educational background in the sciences and foreign languages.
Methods/Procedures
The research design for this study was a one-shot case study in which a survey was administered to a convenience sample (N= 87) of undergraduate students attending a large southeastern university.The respondents were students in an agricultural writing course.To conduct this study, respondents were first asked to define a set of 15 scientific terms including six health-related terms, three biological terms, two technical terms, two environmental terms, one physics related term, and one made-up term.
The terms were selected by a panel of experts based upon the following criteria: 1) it was a commonly used term (e.g., menopause), and/or, 2) it had obvious word roots (e.g., xeriscape).A made-up word, purgaraphobia (fear of cleaning/vomiting), was developed to determine if any of the respondents were able to decode the word from its roots.
The definitions were coded by three independent coders using a scale of 0 -3 where: 0 = no response, 1 = incorrect response, 2 = correct response but incomplete, and 3 = correct and complete response.Independent coder reliability was evaluated using Cohen's Kappa where results reported a reliability between coder 1 and coder 2 (.91), between coder 2 and coder 3 (.93) and between coder 1 and coder 3 (.91).Respondents were also asked to selfreport about foreign language and science course work in high school and college as well as their major.
Results
The educational background of the respondents was stronger in the sciences than in foreign language; 65.5% (N = 57) declared themselves majors in a science-based field while 34.5% (N = 30) declared themselves majors in other fields.Nearly all of the respondents (N = 86) had science coursework at the college level with M = 22.13 hours.Only 25% (N = 22) had any foreign language at the college level with M = 1.64 hours.
The science-based majors had more than twice as many science credits at the college level.Science and foreign language course work at the high school level and foreign language course work at the college level were closely matched between the science-based and nonscience-based majors.None of the respondents reported majoring in languages.
An analysis of the relationship between the level of accuracy in defining the terms and the respondents' educational background in science and foreign language at the college level indicated that subjects who had more than two classes (6 credit hours) of foreign language coursework (classified as "high foreign language") had the highest mean scores (M = 31.82)followed by subjects who were classified as "high science" with 4 classes (12 credit hours) of science coursework (M = 29.77).The results also indicate that subjects who had less than two classes (6 credit hours) of foreign language coursework (classified as "low foreign language") had higher mean scores (M = 27.29)than subjects who were classified as "low science" with 4 classes (12 credit hours) of science coursework (M = 24.72).High foreign language = 6 or more credit hours at the college level and low foreign language = 5 or fewer credits at the college level.
Table 2. Distribution of Responses in Defining Scientific Terminology
To assess these relationships predictively, multiple linear regression was conducted to explore which variables would significantly explain the largest portion of the variance associated with the respondent's level of accuracy in defining the terminology.Using the stepwise method, the independent variables of science-based major (r = .423,p = .000),cumulative science coursework (r = .436,p = .000),and cumulative foreign language coursework at the college level (r = .231,p = .033)were regressed against the dependent variable of overall score.The cumulative foreign language at the college level variable was subsequently dropped from the model.The final model with science-based major and cumulative science coursework at the college level explained approximately 23% of the variance in demonstrated ability to accurately define scientific terminology.
Conclusions and Recommendations
The results of this study indicate that extensive coursework in the sciences at the college level as well as being in a science-based major contribute to one's ability to accurately decode scientific terminology.Coursework in sciences at the high school and coursework in languages at the high school or college level were not, however, significant predictors of the ability to define scientific terms.Although generalization of the results is limited to this sample, these findings indicate that substantial coursework in sciences provides the background needed to decode and define scientific terminology, even outside of one's area of specialization, and thus enable respondents to possess to a greater degree the skills needed to be considered scientifically literate.
Table 3 .
Accuracy of Respondents in Defining Terms with a Range of 0-45 b | 2019-05-19T13:07:08.003Z | 2004-03-01T00:00:00.000 | {
"year": 2004,
"sha1": "501f1b066b26a0d7295f78e62d6ab4be928fe0b7",
"oa_license": "CCBYNCSA",
"oa_url": "https://newprairiepress.org/cgi/viewcontent.cgi?article=1317&context=jac",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "501f1b066b26a0d7295f78e62d6ab4be928fe0b7",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": [
"Political Science"
]
} |
13962842 | pes2o/s2orc | v3-fos-license | Juvenile rainbow trout responses to diets containing distillers dried grain with solubles , phytase , and amino acid supplements
Distillers dried grain with solubles (DDGS) was evaluated in juvenile Shasta-strain rainbow trout Oncorhynchus mykiss diets during a 36-day feeding trial. Two experimental diets containing either 10% or 20% DDGS with supplemented amino acids (lysine, methionine, isoleucine, and his-tidine) and phytase were compared to a fish meal-only control diet. Tanks of trout receiving diets containing either concentration of DDGS weighed significantly less at the end of the trial and had significantly poorer feed conversion ratios than tanks of fish being fed the fish mealonly control. There was no significant difference in individual fish length, weight, condition factor, or any fish health measurements among diet treatments. Both the hepatosomatic index and viscerosomatic index were significantly less in the fish fed 10% DDGS than those fed the control diet. Body fat was significantly greater in the fish receiving 20% DDGS compared to fish fed either of the other two diets. Fillet composition, as determined by crude protein, crude lipid, ash, and water, was not significantly different among fish reared on any of the diets. There was also no significant difference in estimated protein digestibility coefficients among fish receiving any of the diets. The results suggest that DDGS, even if supplemented with essential amino acids and phytase, will lead to decreased juvenile rainbow trout growth at dietary concentrations of at 10% or greater.
INTRODUCTION
During hatchery rearing, rainbow trout Oncorhynchus mykiss and other carnivorous salmonids are typically fed high protein diets containing fish meal as the primary protein source [1][2][3][4].However, because fish meal is a limited quantity, its price has increased dramatically with the rapid growth in global aquaculture [5][6][7].Thus, lowercost, plant-based proteins will likely play a greater role in salmonid diets [7].
The availability of distillers dried grain with solubles (DDGS), a coproduct produced by the corn-based ethanol biofuel industry, has increased with increased ethanol production in the USA [8,9].Conventional DDGS are relatively high protein, with levels approaching 30% [8,10,11], and contain few, if any, of the anti-nutritional factors found in other plant protein sources [12][13][14][15].Compared to other corn products, nutrients are more concentrated in DDGS [16], but essential amino acids such as lysine and methionine are present in lower concentrations than fish meal [4].
Philips [17] conducted the first experiments examining DDGS in rainbow trout diets, while Sinnhuber [18] documented the successful inclusion of 3% dietary DDGS.Phillips et al. [19] used distillers dried solubles, which are similar to DDGS.At low inclusion levels in salmonid diets, other distillers grain products showed no deleterious nutritional effects [20,21].When fed to juvenile rainbow trout, DDGS concentrations of 15% in the diet produced positive results [4], while concentrations of 22.5% were acceptable with lysine and methionine supplementation.However, all of the diets used by Cheng and Hardy [4] contained 15% soybean meal and there was no true fish meal-only control.In a study with a fish meal control, Stone et al. [22] noted that rainbow trout receiving dietary DDGS exhibited significantly reduced growth.DDGS were also used in experimental and control diets by Cheng et al. [23] as part of a study focused more on the use of soybean meal and a methionine hydroxyl analogue.Lastly, Cheng and Hardy [24] also determined that phytase supplementation improved the apparent digestibility coefficients for total-phosphorous and other minerals in rainbow trout diets containing DDGS.
The inconsistent results reported with the incorporation of DDGS in rainbow trout diets, and the lack of studies comparing DDGS to an appropriate fish mealonly control, requires additional research.Thus, the objective of this study was to examine the effects of two DDGS-dietary inclusion levels, in comparison to a fish meal-only control diet, on the performance of juvenile rainbow trout during hatchery rearing.
Location and Fish Culture
The trial occurred at McNenny State Fish Hatchery, Spearfish, South Dakota, USA, using degassed and aerated well water at a constant temperature of 11˚C (total hardness as CaCO 3 , 360 mg•L -1 ; alkalinity as CaCO 3 , 210 mg•L -1 ; pH, 7.6; total dissolved solids, 390 mg•L -1 ).Flow rates in each tank were set at 40 L•min -1 .Juvenile Shasta strain rainbow trout (initial weight 33.6 ± 1.5 g, length 146.7 ± 2.1 mm, mean ± SE) were used because of availability at the time of hatchery tank space availability, and were randomly placed into each of nine fiberglass circular tanks (1.8 m diameter, 0.6 m depth) on September 2, 2010.Tanks were loaded with 40 fish, and total tank weights were measured to ±1 g.Feeding commenced the following day and continued for 36 days until the end of the trial.Feeding amounts for the tanks were determined by the hatchery constant (HC) method [25], with a planned feed conversion of 1.1 and a maximum growth rate of 0.066 cm/day, which was based on the historical performance of the Shasta strain at Mc-Nenny State Fish Hatchery.Feed amounts were updated daily.Fish were hand fed once per day, with all feed fed and mortality data recorded daily for each tank.
Diets and Chemical Analysis
The nine tanks were randomly assigned to one of three different diets (Table 1), with three experiment units receiving the same diet.In addition to a fish-meal only control, two other diets contained either 10% or 20% DDGS (Poet BPX, Glenville East, South Dakota, USA).To make the essential amino acid profiles similar in all of the diets and potentially improve the acceptability of dietary DDGS [23,27], the DDGS-containing diets were supplemented with lysine, methionine, isoleucine, and histidine.In addition, phytase was added to the DDGScontaining diets to facilitate DDGS digestibility [4].Pelleted diets were produced by extrusion processing.Experimental diets were analyzed according to AOAC [28] methodology for protein (method 2001.11) and crude lipid (method 2003.5, modified by substituting petroleum ether for diethyl ether), and ash content by AACC [29] method 08 -03.The protein and lipid amounts obtained by these methods were multiplied by their respective physiological fuel values of 23.6 and 39.5 mJ [3] to obtain estimated digestible energy values.
Data Collection
At the end of the trial, total tank weights were meas-ured to ±1 g, with weight gain calculated by subtracting the initial weight from the final weight for each tank.Percent (relative) gain was calculated by dividing the total amount of food fed by the initial tank weight.Feed conversion ratio for each tank was calculated by dividing the total amount of food fed by the total weight gain.In addition to total tank measurements, five fish from each tank were randomly selected from each tank, euthanized and individually weighed to ±1 g and measured (total length) to ±1 mm.Fish health profiles, based on a modification of Goede and Barton [30], Adams et al. [31], and Barton et al. [32], were completed using the score sheet described in Table 2. Liver weights were recorded to ±1 mg and the hepatosomatic index (HSI) determined by dividing the liver weight [g] by whole fish weight [g] and multiplying by 100 [33].Viscera weights were also recorded to the nearest mg and the viscerosomatic index (VSI) determined by dividing the viscera weight [g] by the whole fish weight [g] and multiplying by 100.Condition factor was calculated as K = 10 5 × (weight [g])/ (length 3 [mm]).
Protein Digestability
Apparent protein digestability was determined using a direct method [34].Digesta was removed from five fish per tank at the end of the trial.Each fish was dissected and the last cm of the distal end of the intestine was gently squeezed to remove the contents.Digesta from five fish per tank was pooled and flash frozen on dry ice prior to analysis.Protein analysis was conducted using AOAC [28] method 990.03.Percent apparent protein digestability was calculated by subtracting the protein in the digesta from the protein in the diet, dividing this quantity by the protein in the diet, and multiplying by 100.
Fillet Composition
Muscle fillets were removed and flash frozen for determination of carcass composition.The fillets from each tank were pooled and analyzed for crude protein levels with a TruSpec CNS combustion analyzer (LECO Corp., St. Joseph, Michigan, USA) using AOAC [28] method 992.15.AOAC [28] acid hydrolysis method 948.15 with a 50:50 mix of diethyl ether and petroleum ether for extraction was used for fat analysis.Moisture was determined by drying loss using AOAC [28] method 952.08.
Statistical Analysis of Data
Data were analyzed using the SPSS (9.0) statistical analysis program (SPSS, Chicago, Illinois, USA) with significance predetermined at P < 0.05.One-way analysis of variance (ANOVA) was conducted, and if the treatments were significantly different, pairwise mean comparisons were performed using the Tukey HSD test [35].Mortality (%) data were arcsine transformed prior to analysis to stabilize the variances [35].
RESULTS
Tanks of trout receiving diets containing either concentration of DDGS weighed significantly less at the end of the trial than those tanks of fish being fed the fish meal-only control (Table 3).Total tank weight gain was 1011 g in the control tanks, and only 962 g and 940 g in the tanks receiving diets with 10% DDGS or 20% DDGS, respectively.Relative gain was not significantly different among the diets (P = 0.07).Feed conversion ratio among the diets followed a similar pattern, at 0.82 in the fish meal-only control tanks, which was significantly different from the 10% DDGS tanks at 0.87 and the tanks receiving the 20% DDGS diet at 0.89.At 94%, the apparent digestion coefficient of protein in the tanks receiving 20% dietary DDGS was significantly different from either of the other two diets.
Individual fish lengths, weights, and condition factors were not significantly different among the diets tested (Table 4).However, viscera weight and the viscera somatic index (VSI) were significantly lower in the fish fed 2.
the 10% DDGS than in the fish receiving the control diet.
There was no significant difference in either viscera weight or VSI between the fish receiving 20% DDGS compared to fish receiving either of the other two diets.
The hepatosomatic index (HSI) followed a similar pattern as viscera weight and VSI.Liver weight was significantly different among all of the diets.With a mean ranking of 1.9, body fat was significantly greater in the fish receiving 20% DDGS compared to fish fed either of the other two diets with mean rankings of 1.6.There were no significant differences for any of the other fish health or condition parameters evaluated.There were no significant differences in fillet composition in fish receiving any of the three diets (Table 5).Mean fillet protein levels ranged from 18.6 from the trout receiving the fish meal-only control diet to 19.5 in the fish fed the 10% DDGS-diet.Variation in crude lipid levels was observed from the fish receiving either of the diets containing DDGS.
DISCUSSION
The decreased gain and feed conversion ratio observed in any of the tanks receiving either of the diets containing DDGS in this study differs from the conclusions of Cheng and Hardy [4] that incorporation of 15% DDGS, or 22.5% DDGS with methionine and lysine supplementation, is acceptable in rainbow trout diets.Cheng and Hardy [4] did not compare their DDGS-containing diets to a fish meal-only control however; experimental and control diets in that study also contained 15% soybean meal as a protein source.Stone et al. [22] also found decreased growth in rainbow trout receiving DDGS compared to a fish meal control.Further comparison of this study to Cheng and Hardy [4] reveals that they manufactured their diets via cold-pelleting, versus twinscrew extrusion in this study, which may contribute to the differing results [36][37][38].The difference in the water temperature used between the two studies, 11˚C in the present study versus 14.5˚C by Cheng and Hardy [4], may have had some effect [39][40][41].The often dramatic differences in conventional DDGS nutritional composition [8,42,43] also make it difficult to compare the re- sults between studies examining DDGS use in rainbow trout diets.The 10% DDGS diet used in this study had the same amount of fish meal and slightly higher protein levels than the control diet, yet still produced significantly poorer trout growth.The difference in growth could be due to some unidentified anti-nutritional factors [12][13][14][15] present in the DDGS.Because the diets also differed in the amount of wheat and corn gluten, it is also possible that these ingredients may have influenced the results.
Although Cheng and Hardy [24] recommend the inclusion of phytase in rainbow trout diets containing DDGS, the inclusion of phytase in the experimental diets of this study did not lead to similar rearing performance as the fish meal-only control diet.Phytase supplementation had no effect on growth or feed conversion in the rainbow trout fed diets containing 15% DDGS [24].Although the results of phytase supplementation in diets incorporating soy products on rainbow trout growth are mixed [44][45][46][47][48], it generally has had no effect on growth or feed conversion ratio when fed to other fish species [49][50][51][52], with the exception of common carp Cyprinus carpio [53] and Nile tilapia Oreochromis niloticus [54].Phytase does increase the availability of phosphorous in fish feeds containing plant ingredients [55][56][57][58][59], but the diets in the current study were supplemented with a dietary phosphate, likely preventing any plant-related phosphorus deficiency.The efficiency of phytase may be somewhat dependent on the method used to incorporate it into the feed [51], making it possible that the phytase might have been partially inactivated during extrusion processing in this study [60].Phytase activity was not measured in this study however.
Protein digestibility was significantly increased in the fish receiving 20% DDGS feed.The 94% estimated digestibility was greater than that observed by Cheng and Hardy [24].These relatively high protein digestibilities could be because of the high protein digestibility of DDGS itself [24] or in combination with the amino acid supplementation in the experimental diets [61][62][63][64].
Unlike the studies by Lim et al. [65] and Li et al. [66], fillet lipid concentrations did not increase with increasing DDGS levels.Rather, the results from this study were similar to those of Johnsen et al. [67], who observed no difference in Atlantic salmon Salmo salar muscle fat concentrations fed diets containing a wide range of plant protein inclusion rates.Fillet protein levels also did not decrease with the addition of DDGS to the diet, in contrast to Li et al. [66] and Li et al. [68].The percent moisture and crude protein of fillets from the rainbow trout receiving the control, fish meal-only diet were very similar to that reported by Yildiz [69], but less than that reported by Sealey et al. [70].However, the rainbow trout fillets analyzed by Sealey et al. [70] came from fish that were fed a 29% fish meal control diet that also contained 16% soybean meal.
Although the hepatosomatic index is positively related to carbohydrate levels in the diet [71,72], HSI was only increased in the fish receiving 10% DDGS in this study.Other diet studies examining HSI have produced inconsistent results.For example, HSI either slightly decreased, or showed no effect, due to dietary DDGS in tilapia [73,74], and was also unaffected by dietary protein in common carp [75].Because dietary phosphorus is inversely related to liver lipid levels and HSI, [76], phytase supplementation may explain some of the variation in results.
The VSI followed a similar pattern as HSI, with only the 10% DDGS diets producing significantly decreased levels.It was expected that VSI would have been elevated in the fish receiving 20% DDGS because of the greater dietary lipid levels [77][78][79], but that was not the case with the results from this study.However, visual estimates of fat were significantly greater in the fish fed 20% DDGS, which could indicate either that not all of the visceral fat was removed during weighing of the viscera, or that the visual estimation technique is unreliable.
The relatively low feed conversion ratios for both the control and reference diet are not unusual for production rainbow trout at this size at hatcheries in South Dakota [80] or elsewhere [81].The low feed conversion ratios could also possibly be explained by the low rearing densities used in the trial [82,83].
Although specific feeding trial durations are not universally specified, they generally need to last long enough for any potential significant differences among the diets to materialize [84].In a study by de Francesco et al. [85], differences in trout rearing performance between fish meal and plant-based diets did not become apparent until after 12 weeks.This study lasted only 36 days, but this was long enough for significant differences in gain and feed conversion ratio to appear among the diets.While possible, it is unlikely that prolonging this trial for a longer period would have led different results.
Lastly, although the DDGS-containing diets did lead to significantly decreased growth, the replacement of fish meal with DDGS in the 20% DDGS diets do produce a positive economic benefit.Based on the most current United States Department of Agriculture Economic Research Service Data from October 2011 [86], with the price of DDGS and fish meal at US $0.233/kg and $1.158/kg respectively, the cost per kg of fish flesh gained for the fish meal component of the control diet would be $0.565,compared to $0.443 for the fish meal and DDGS component of the 20% DDGS diet.Thus, substituting DDGS for a portion of the fish meal is a viable option if decreases in growth and feed efficiency can be tolerated.
CONCLUSION
The results of this study indicate that, given the ingredients used, the inclusion of 10% or greater amounts of DDGS in the diet of juvenile Shasta-strain rainbow trout will lead to decreased growth and feed conversion, but still may be economically advantageous.It is unknown if these results would be similar for other trout strains or species reared at different water temperatures and different projected growth rates.The effect of dietary DDGS on different sizes of trout is also uncertain.
Table 1 .
Percent composition and chemical analysis of the diets used in the trial.
Table 2 .
Criteria [32] at the end of the study for fish health observations (based on Goede and Barton[30], Adams et al.[31], and Barton et al.[32]).
Table 3 .
Total tank rearing data (means ± SE), including feed conversion ratio and estimated digestion coefficient of protein | 2018-04-30T08:24:39.052Z | 2012-04-19T00:00:00.000 | {
"year": 2012,
"sha1": "ceaee74644208568506483fd4781de99b2b14837",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=18527",
"oa_status": "GOLD",
"pdf_src": "ScienceParseMerged",
"pdf_hash": "ceaee74644208568506483fd4781de99b2b14837",
"s2fieldsofstudy": [
"Environmental Science",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Biology"
]
} |
225044339 | pes2o/s2orc | v3-fos-license | Anticancer activity of ylang-ylang essential oil in Ehrlich Ascites Carcinoma cell-treated mice
Ajay P Malgi*1, Vijaykumar P Rasal1, Vishal Shivalingappa Patil2, Priyanka P Patil1, Shamanad P Mallapur3, Vrushabh B Hupparage1, Sathgowda A Patil1 1Department of Pharmacology and Toxicology, KLE College of Pharmacy, KLE Academy of Higher Education and Research, Belagavi-590010 Karnataka, India 2Department of Molecular Biology and Biotechnology, ICMRNational Institute of Traditional Medicine, Belagavi-590010 Karnataka, India 3Department of Toxicology, RCC Laboratories India Pvt. Ltd. Hyderabad500078 India
INTRODUCTION
Cancer is one of the leading cause of the mortality worldwide responsible for an estimated 9.6 million deaths. In 2018 WHO reported that deaths due to cancer worldwide are continued to rise to over 11 million by 2030 (Bray et al., 2018). Cancer develops in the cells that contain unrepaired damaged DNA which grow, divide and abnormally invade other parts of the body, instead of selfdestruction through programmed cell death (Hanahan and Weinberg, 2011). To date, the management of this condition has been limited to the use of single and / or co-administered chemotherapy, radiation, and surgery (Islam et al., 2014). Although these options are evolving with higher survival rates, patients are undergoing a lot of strain with long term side effects. This has necessitated to deploy novel cost-effective treatments involving minimal human suffering (American Cancer Society, 2007). Herbal medicines including essential oils have received considerable attention in recent years as there is increasing realization that these remedies can impact the progression of carcinoma and its treatment can aid in reestablishing balanced body systems (Rajapoor et al., 2007;Takeoka and Dao, 2003). Also these medications are easily available and cost-effective. Essential oils contain a multicomponent system mainly terpenes. These essential oils being secondary metabolites of the plant have antimutagenic, antiproliferative, antioxidant, and detoxifying properties (Blowman et al., 2018).
Ylang Ylang (Cananga odorata), which belongs to the Annonaceae family, is a traditional plant and is cultivated in Asian regions such as the Philippines, Indonesia, Malaysia and Madagascar. Ylang Ylang essential oil (YYEO), derived from the lowers of Cananga odorata, recently introduced in countries such as China, Africa, India, America, etc., is well known for its aroma (Mazari, 2020). There are about 161 bioactive phytoconstituents reported in Ylang Ylang oil including Linalool, Geranyl acetate, Germacrene-D, β-caryophyllene, Benzyl acetate, Geraniol, Meethyl benzoate, Germenyl acetate, Farnasene & Benzyl benzoate and so on. It also contains monoterpine hydrogen, which contains oxygen, monoterpenes, sesquiterpenes, benzonaid and phenol. monoterpene hydrocarbons containing oxygen, monoterpenes, sesquiterpenes, benzonaid and phenols (Brokl et al., 2013). Traditionally YYEO is employed as aphrodisiac (Kodithala and Murali, 2018), anxiolytics (Zhang et al., 2016), antihypertensive (Jung et al., 2013), antiseptic (Caacbay and Jacinto, 2009), in food and beverages as a fragrance agent (Burdock and Carabin, 2008). Cananga odorata extract was proven to have a cytotoxic effect against hepatocellular carcinoma cancer cell lines, HepG2, and Hep2.2.15 Tan et al. (2015) and antiin lammatory effect (Maniyar and CH, 2015). The cytotoxic oxoaporphine alkaloid liriodenine, isolated from Cananga odorata, was found to be a potent inhibitor of topoisomerase II both in vitro and in vivo (Woo et al., 1997). However, there are no scienti ic reports on the anti-cancer activity of Ylang Ylang essential oil on EAC tumor-bearing mice. Therefore, the current study was conducted to investigate the anticancer potential of Ylang Ylang essential oil in EAC tumor-bearing mice.
In vitro antioxidant activity of YYEO
The in vitro antioxidant assay was performed by the DPPH radical scavenging assay method (Noreen et al., 2017;Caacbay and Jacinto, 2009). Brie ly, 3.6 ml of a methanolic solution of DPPH (0.004% w/v) was mixed with 0.4ml solution of different concentration (50-800 µg/ml) of YYEO. After incubating at 37ºC for 40-45min absorbance was read at 517nm using a spectrophotometer. Ascorbic acid was used as standard. The inhibition curve was plotted and IC 50 calculated.
Experimental animals
Balb/c female mice (20-30g) were procured from In Vivo Bio Sciences, Bengaluru. All mice were kept in clean cages for acclimatization under standard husbandry conditions (22-28 0 C) and relative humidity was maintained at 65±10% for a 12hr light-dark cycle with food and water ad libitum. The experimental protocol was reviewed and experimental animals were approved by IAEC, KLE College of Pharmacy, Belagavi (IAEC Reg No. 221/ Po/Re/S/2000/ CPCSEA) and the experiment was carried out in accordance with the CPCSEA guidelines.
Acute toxicity studies
Ylang Ylang Essential oil was found to be safe and showed no mortality amongst the treated animals at a dose of 2000mg/kg for 14 days (Naik and Rasal, 2019).
Dose selection
Based on the LD 50 obtained from the acute toxicity study, doses of 200, 400, 800 mg/kg were chosen to evaluate the therapeutic effect. Furthermore, YYEO is an oil, the dose of YYEO (mg) has been converted to ml in accordance with the speci ic gravity of YYEO. As the quantity was not measurable, the oil was diluted through emulsion with Tween 20 as an emulsi ier in the ratio 2: 2: 1, Ylang Ylang oil: Water: Tween 20.
Ehrlich ascites carcinoma
The mice with Ascitic Carcinoma (Donor) were taken after 15 days of tumor inoculation. The ascitic luid was withdrawn using a 24-gauge needle into a sterilized syringe and tested for microbial contamination. The ascetic luid was appropriately diluted in normal saline to obtain a concentration of 10 6 cell/ml of tumor suspension and was administered intraperitoneally (0.1ml x 10 6 cells/ml) to induce a tumor (Gupta et al., 2004).
Experimental design
Animals were divided into six groups of six mice in each. Except for the Normal group, all other groups were inoculated with EAC. Further, treatment with YYEO was started after 24 hours of inoculation for 14 days. Group VI: EAC cell line (0.1ml x 10 6 / ml) + YYEO (800 mg/kg p.o.)
Body weight and vital organ weight
The change in body weight (BW) was recorded once every 3 days. At the end of the study, all the animals were decapitated and the weight of the vital body organ was recorded.
Tumor volume and tumor weight
Tumor weight (TW) was determined by collecting ascetic luid after sacri icing the animal. Tumor vol-ume (TV) is quanti ied by collecting ascetic luid from the peritoneal cavity.
Serum estimation
At the end of the study, blood was collected via retroorbital route and centrifuged at 3000rpm for 10 min to separate serum for the estimation of antioxidant enzyme mechanism such as Aspartate aminotransferase (AST), Alanine aminotransferase (ALT), Lactate Dehydrogenase (LDH), Triglycerides, and Total Protein using standard ERBA and UNICHEM kit protocols.
Hematological parameters
Blood was collected from retro-orbital route to determine the level of WBC, RBC, Haemoglobin, lymphocytes.
Biochemical estimation
The liver tissue was collected from the animal and homogenated to determine the in vivo antioxidant levels of Lipid Peroxidation (LPO), Glutathione (GSH), and Superoxide Dismutase (SOD). All the estimations were carried out by the method (Chandrashekhar et al., 2013).
Histopathology
At the end of the study, animals were euthanized and the liver was collected and stored in 10% neutral buffered formalin for further evaluation. The liver tissues were stained with hematoxylin-eosin and tissue lesions were evaluated using an electronic microscope at ×40 magni ication.
In vitro antioxidant activity
DPPH scavenging activity of the YYEO and ascorbic acid was found to be dose-de pendent as shown in Figure 1. 800µg/ml concentration of YYEO showed the highest DPPH scavenging property i.e. 85.6% compared to other concentration of YYEO. Similarly, ascorbic acid has a percentage inhibition of 90.4%. (Figure 2) and change in BW from day 1 to day 14 are shown in Table 1.
Organ weight
Increase in vital organ weight (Liver, Heart, Kidney, and Lungs) were observed in DC group. However, treatment with YYEO signi icantly reduced organ weight on day 14. The effect of 200, 400, and 800mg/kg dose of YYEO on EAC bearing mice organs were shown in Figure 3 and Table 2.
Tumor volume and tumor weight
Mice inoculated with EAC showed a signi icant increase in tumor volume (p <0.001) compared to the NG until end of the study. The TV and TW of DC group was found to be 5.56 ± 0.16ml and 5.91 ± 0.12g respectively whereas EAC bearing mice treated with YYEO signi icantly reduced to 4.06 ± 0.19ml and 4.30 ± 0.20g in 200mg / kg, 3.26 ± 0.24 ml and 3.60 ± 0.23g in 400 mg / kg, 2.41 ± 0.16 ml and 2.75 ± 0.13g in 800 mg / kg. Whereas in EAC bearing mice treated with 5-FU treated group was found to be 1.92 ± 0.08 ml and 2.20 ± 0.09 g. Compared to the 200 mg / kg group, the 400 mg / kg i.e. 3.26 ± 0.24 ml and 3.60 ± 0.23g and 800 mg / kg i.e. 2.41 ± 0.16 ml and 2.75 ± 0.13g groups showed a signi icant reduction in TV and TW. (Table 3) (Figure 4).
Serum markers estimation
The level of AST were signi icantly increased (p <0.001), by) in EAC bearing DC group to 123.5 ± 4.11 U/L when compared to the NG i.e. 52.67 ± 2.18 U/L. After administration of YYEO at different doses (200 mg/kg, 400 mg/kg, 800 mg/kg) to EAC bearing mice the levels of AST was signi icantly reduced (p <0.001) to 98.17 ± 2.05 U/L, 79.50 ± 2.12 U/L, 70.17 ± 1.55 U/L respectively as compared to dis-ease control. Also 5-FU showed decrease in AST level when compared with DC. (Table 4) ( Figure 5).
Inoculation of EAC drastically increased the level of ALT, LDH and Triglycerides in DC group as compared to NG (p <0.001). Administration of YYEO at different doses (200 mg/kg, 400 mg/kg, 800 mg/kg) signi icantly decreased (p <0.001) ALT, LDH and Triglycerides levels when compared to DC group (Table 4) ( Figure 5).
Haematological parameter
It was found that all hematological parameters of mice with tumors on day 14 signi icantly altered from NG (Table 5) ( Figure 6). In malignancy, there was a decrease in the level of Hb, RBC and Lymphocytes, which was accompanied by an increase in WBC. At the same time interval YYEO (400 and 800 mg/kg p.o.) treatment signi icantly changed these altered parameters (p <0.001) to almost normal in dose dependent manner. The 5-FU group also showed altered parameters (p <0.001) when compared to NG and DC.
Biochemical parameter
The levels of LPO in liver tissue was signi icantly ( (Figure 7).
Histopathology
It has been observed that hepatocellular architecture damaged with neoplastic lesions and modi ied hepatocytes have developed with more than one nucleus and of a hyperchromatic nature. Lymphocyte invasion and marked central vein enlargement were found in the DC group. Slight hepatocellular architecture damaged with neoplastic lesions was observed in groups of 200 mg/kg and 400 mg/kg. Inversion of this damage observed in the 5-Fluorouracil and 800 mg/kg group (Figure 8).
DISCUSSION
The current study utilized the in vivo cancer models to assess the anticancer potential of YYEO against EAC bearing mice. Initially, we treated with YYEO at doses of 200 mg/kg, 400 mg/kg and 800 mg/kg to the EAC bearing mice and 5-Fluorouracil was used as an internal standard. YYEO at doses of 200 mg/kg, 400 mg/kg and 800 mg/kg signi icantly reduced Tumor volume and Tumor weight at the inal level of the study. The standard drug 5-Fluorouracil showed a signi icant result compared with YYEO treated groups and was more effective. The antioxidant potency of the YYEO was evaluated by DPPH, the effects were compared against the ascorbic acid. The result revealed that the effect of both YYEO and ascorbic acid was dose dependent i.e. as the concentration increases, the inhibitory potency also increased.
High serum enzyme levels such as AST and ALT are indicatory of cell proliferation and various diseases of the liver and bones and the drugs used for treatment should lower the level of these enzymes to a normal level (Benirschke et al., 1978). The increased level of LDH presence in the blood or body luid can be directly attributed to the extent of injured body tissues (Ramalingam et al., 2019). Elevated triglycerides are indicative of damage to the liver and cell membranes (Patra et al., 2015). The liver is the main source of serum protein. Protein intake is directly determined by the level of total proteins. Previous studies have shown that depletion in protein exhibits liver dysfunction and inhibition of protein synthesis (Onifade and Tewe, 2010). Treatment with YYEO showed a remarkable reversal of all biochemical variables towards normal signifying the repair of hepatic injury caused by EAC.
To date, chemotherapy is a serious issue as it causes myelosuppression and anemia during malignant growth. Anemia occurs in mice with a tumor mainly due to a decrease in the Red Blood Cell or Hb level and may reoccur due to iron insuf iciency or myelopathy (Ve and Re, 1958). Treatment with the YYEO roughly normalized WBC, Red Blood Cells, Hb, and Lymphocytes. This re lected the drug activity of the hematological variables. LPO is a process associated with free radicals in biological systems that can occur under enzymatic control (Fenninger and Mider, 1954). Malondialdehyde (MDA) being the end product of LPO, was found to be higher in the DC group than in treated groups. Due to the excessive oxidative stress, GSH levels were decreased in the DC group but in the treatment group, GSH levels were increased to normal levels, which may be due to decreased proliferation of the cells (Arrick and Nathan, 1984). Similarly, Tumor growth is recorded due to blockade of SOD (Sun et al., 1989). The treated group showcased an enhanced level of SOD re lecting the restoration of natural antioxidant enzymes.
CONCLUSIONS
In conclusion, the treatment of YYEO was effective in inhibiting the tumor growth in EAC treated mice model. Further studies are needed to characterize the active principle and to elucidate mechanism of action involved in anti-tumor activity. | 2020-10-22T23:44:54.711Z | 2020-09-29T00:00:00.000 | {
"year": 2020,
"sha1": "60371dc8a43e69c814cccbb0d78a7f58ce338392",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.26452/ijrps.v11i4.3221",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "60371dc8a43e69c814cccbb0d78a7f58ce338392",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry"
]
} |
244732140 | pes2o/s2orc | v3-fos-license | Dementia in Southeast Asia: influence of onset-type, education, and cerebrovascular disease
Background Southeast Asia represents 10% of the global population, yet little is known about regional clinical characteristics of dementia and risk factors for dementia progression. This study aims to describe the clinico-demographic profiles of dementia in Southeast Asia and investigate the association of onset-type, education, and cerebrovascular disease (CVD) on dementia progression in a real-world clinic setting. Methods In this longitudinal study, participants were consecutive series of 1606 patients with dementia from 2010 to 2019 from a tertiary memory clinic from Singapore. The frequency of dementia subtypes stratified into young-onset (YOD; <65 years age-at-onset) and late-onset dementia (LOD; ≥65 years age-at-onset) was studied. Association of onset-type (YOD or LOD), years of lifespan education, and CVD on the trajectory of cognition was evaluated using linear mixed models. The time to significant cognitive decline was investigated using Kaplan-Meier analysis. Results Dementia of the Alzheimer’s type (DAT) was the most common diagnosis (59.8%), followed by vascular dementia (14.9%) and frontotemporal dementia (11.1%). YOD patients accounted for 28.5% of all dementia patients. Patients with higher lifespan education had a steeper decline in global cognition (p<0.001), with this finding being more pronounced in YOD (p=0.0006). Older patients with a moderate-to-severe burden of CVD demonstrated a trend for a faster decline in global cognition compared to those with a mild burden. Conclusions There is a high frequency of YOD with DAT being most common in our Southeast Asian memory clinic cohort. YOD patients with higher lifespan education and LOD patients with moderate-to-severe CVD experience a steep decline in cognition. Supplementary Information The online version contains supplementary material available at 10.1186/s13195-021-00936-y.
Background
The prevalence of dementia types and their clinical trajectory has been extensively reported from western settings [1][2][3][4][5][6][7][8][9]. The estimates from Asia are quite variable, likely related to differing diagnostic criteria and variations in diagnostic tools. There is also limited literature on dementia progression in Asia, especially involving naturalistic patient follow-up over long durations. Longitudinal studies in Asia have largely focussed on older populations [10] and cognitive changes in non-dementia participants [11]. Cognitive trajectories in youngonset dementia (YOD) have not been compared with late-onset dementia (LOD) counterparts in Asia [12][13][14]. Additionally, the prevalence of dementia sub-types remains to be elucidated in Southeast Asia.
From an etiological perspective, reports illustrate cerebrovascular contribution to dementia in Asia [15,16], involving white matter hyperintensities (WMH) as surrogate MRI markers for cerebrovascular disease (CVD) [17]. While vascular cognitive impairment (VCI) is more prevalent in Asia [18,19], there is limited data from naturalistic studies comparing cognitive trajectories in such patients. Additionally, the contribution of education attainment to cognitive decline between YOD and LOD remains to be further elucidated.
This study describes the demographic and clinical trends and longitudinal profile of YOD and LOD patients from a dementia clinic of a tertiary hospital, in Singapore between 2010 and 2019. The tertiary hospital is the largest provider of neuroscience care in Singapore, providing care to 70% of the population. Additionally, the study examined the association between education and longitudinal cognitive change in YOD and LOD. The impact of CVD determined by the modified Fazekas scale on clinically relevant time to a significant decline in global cognition was examined, as this may allow clinicians to institute intensive management of vascular risk factors for those with CVD. We also assessed the influence of education and CVD on depression. We hypothesized that higher levels of lifespan education would be protective against cognitive decline and higher CVD load would result in a more rapid cognitive decline.
Participants and study design
Data was extracted from a longitudinal database of consecutive series of patients with cognitive impairment. The database included patients attending the dementia clinic of a tertiary hospital in Singapore between 2010 and 2019. All patients were evaluated by a team comprising cognitive neurologists, psychologists, and dementiatrained nurses. Patients underwent neuroimaging with MRI or a CT scan as part of the diagnostic workup where clinically indicated. Diagnoses included subjective cognitive impairment, mild cognitive impairment, and dementia. For the purposes of this study, only patients with dementia were included in the analysis. Dementia was diagnosed based on the DSM IV and 5 criteria [20,21]. Patients with dementia included dementia of the Alzheimer's type (DAT), frontotemporal dementia (FTD), vascular dementia (VaD), Parkinsonism spectrum dementia, rapidly progressing dementia (RPD), and autoimmune dementia. Clinical symptoms and presentation largely informed the diagnoses in the case of mixed dementias, wherein the consulting team arrived at a final diagnosis based on the presentation and clinical history of individual patients. Parkinsonism spectrum dementia consisted of Parkinson's dementia (PDD), dementia with Lewy body (DLB), and normal pressure hydrocephalus (NPH); diagnosis of DAT was per the NINCDS-ADRDA [22] and NIA-AA criteria [23]. VaD was diagnosed based on the NINDS-AIREN criteria [24], FTD was diagnosed based on the Raskovsky criteria [25], PDD was diagnosed based on the MDS task force criteria [26,27], while DLB was diagnosed based on the McKeith criteria [28,29]. Patients who were under the age of 65 years at the time of symptom onset were classified as YOD [30], while patients aged 65 years or older were classified as LOD. A total of 2890 unique patients attended the tertiary memory clinic from 2010 to 2019. Of them, 1606 were given a consensus diagnosis of dementia based on clinical criteria (as referenced above) on their initial visit, a subset of whom came back for follow-up visits every 6 to 9 months. At every follow-up visit, patient diagnosis and demographics were determined and they underwent cognitive testing as detailed in section 2.2. Dementia patients were included in the study if they had at least one follow-up visit following the initial visit and based on these criteria a total of 786 dementia patients completed one or more follow-up visits. As part of the routine clinical screening, patients were evaluated for thyroid function and vitamin B12 levels at every visit. While we have not collected individual data on these measures, appropriate management was employed for patients with any abnormal findings.
The study was approved by the Singhealth Centralized Review Board. The informed consent process was in accordance with Declaration of Helsinki and local clinical research regulations.
Cognitive and behavioral measurements
Patients underwent evaluation for global cognition with the local versions of the Mini Mental State Examination (MMSE) and the Montreal Cognitive Assessment (MoCA), which have been validated in Singapore [31,32]. The local version of the MMSE and MOCA has a score range of 0-30 points and score categories similar to the original versions. Patients were also screened for depression with the geriatric depression scale (GDS). Clinically relevant depression was determined using a cut-off of GDS≥5. The MMSE and MoCA evaluation was conducted at the patient's initial visit to the clinic and was classified as the baseline MMSE and baseline MoCA score. These evaluations were then repeated at every clinical visit which ranged from 6 to 9 months and were used as follow-up measurements in the statistical analyses. The MMSE was the most regularly recorded measure due to the ease of administration with limited patient-facing time in a naturalistic clinical setting. It was also an effective tool for consistent tracking of cognitive performance as disease progression did not hinder the administration of the test until more advanced stages. All evaluations were performed by psychologists and trained dementia nurses using a standardized protocol.
Measurement of cerebrovascular disease burden
Patients underwent neuroimaging using either 1.5T MRI scanner (Philips Ingenia) or 3T Siemens Prisma Fit (Siemens, Erlangen, Germany) or coronal fine cut CT scans. T1-weighted and fluid-attenuated inversion recovery were used for visual rating of scans based on the modified Fazekas scale for WMH severity [33]. Specifically, periventricular WMH and deep subcortical WMH were separately rated on a 0-3 point scale for both hemispheres. The scoring criteria were as follows: for periventricular WMH, the absence of any WMH = 0; the presence of caps or pencil-thin lining = 1; a smooth halo along the edges of the lateral ventricle = 2; and irregular hyperintensities extending into deep white matter = 3. For deep subcortical WMH, the absence of any WMH = 0; the presence of non-confluent foci of WMH in the deep subcortical region = 1; beginning confluence of WMH foci = 2; and the presence of large confluent areas = 3. The modified Fazekas scale allows for quantification of white matter lesions in four brain regions, namely right periventricular, left periventricular, right deep subcortical, and left deep subcortical to provide a score range of 0-12. All visual ratings were performed by two independent raters, and any significant discordance in scores was resolved by consensus. Absent-to-mild CVD was defined as a score of 0-4 on the total Fazekas scale while those with a score of 5-12 were defined as moderate-to-severe CVD. A score of 5-12 on the modified Fazekas score corresponds to Fazekas grades 2-3 of the original grading, indicative of significant CVD.
Statistical analyses
Analyses of baseline demographic information for the dementia group included descriptive statistics, and continuous variables were reported as mean and standard deviation and categorical variables as frequency and percentage across YOD and LOD cohorts. Group differences were examined using two independent samples t test or Mann-Whitney U test (where appropriate) and χ 2 or Fisher's exact test (where appropriate) for continuous and categorical outcomes, respectively.
In order to investigate the cognitive trajectories of YOD and LOD over time (duration of follow-up), linear mixed model analysis was conducted to examine two-way interactions between onset-type and time, for patients with at least one follow-up MMSE score. Longitudinal MMSE scores were the key dependent variable. The key independent variable was onset-type, coded as either YOD (<65 years) or LOD (≥65 years) and an onset-type*Time interaction term. Baseline age, race, sex, and lifespan education years (referring to education from primary school and throughout the lifespan) and duration of follow-up were included as covariates. Additionally, a separate model also assessed the effect of lifespan education years on MMSE decline over time with a lifespan education years*time interaction term as the key independent variable. Baseline age, race, sex, baseline MMSE score, and duration of follow-up were included as covariates. The random effects were modeled at the individual subject level represented by the individual variability in intercepts and longitudinal slopes.
Additionally, the relationship between onset-type, lifespan education, and cognition was elucidated in a linear mixed model three-way interaction analysis. The key dependent and independent variables remained longitudinal MMSE scores and onset-type respectively with an onset-type*lifespan education years*time interaction term. Baseline age, sex, race, lifespan education years, baseline MMSE score, and duration of follow-up were included as covariates. The random effects were modeled at the individual subject level represented by the individual variability in intercepts and longitudinal slopes. The linear mixed model analyses were performed using R 3.6.3 (R Core Team, 2014) with RStudio (RStudio Team, 2012).
A subset of our patients also had longitudinal GDS scores. Thus, we also investigated what factors contributed to the onset of clinically relevant depression or the continued presence of depression using a cut-off of GDS≥5. Based on this cut-off, we classified patients on a binary scale (0=not depressed; 1=depressed). In binary logistic regression models, we assessed whether lifespan education years, sex, and white matter hyperintensity (Fazekas visual rating scores) influenced the development of depression in YOD and LOD separately.
In order to examine a clinically relevant effect of CVD burden on the rapid progression of cognitive decline, a sub-group analysis looking at the long-term effect of CVD burden on those diagnosed with DAT and VaD was conducted. We investigated the effect of CVD burden on the significant decline in MMSE and MoCA using a Kaplan-Meier plot and log-rank test. For MMSE and MoCA scores, a three-point drop was used to define significant decline within the maximum duration of follow-up for each patient. As defined earlier, patients with absent-to-mild cerebrovascular disease (CVD) had a score of 0-4 on the total Fazekas scale while those with a score of 5-12 were defined as moderate-to-severe CVD. The sub-group statistical analysis was conducted using SAS software version 9.4 for Windows (Cary, NC: SAS Institute Inc.) and R 3.6.3 (R Core Team, 2014) with RStudio (RStudio Team, 2012).
Demographics of dementia
A total of 2890 unique patients attended the tertiary memory clinic from 2010 to 2019. Of them, 1606 were diagnosed with dementia. The mean age of the dementia cohort was 71.2 (± 10.5) years, 53.9% females, and 85.4% Chinese ethnicity with a mean lifespan education of 7.4 (± 5.2) years. DAT was the most common diagnosis (59.8%), followed by VaD (14.9%), FTD (11.1%), Parkinsonism spectrum (11.1%), autoimmune dementia (1.6%), and RPD (1.4%). YOD patients accounted for 28.5% of all dementia patients. Over the 10-year period, there was an increasing trend in the yearly incidence of YOD and LOD patients (Additional File 1: Supplementary Fig. 1). Fig. 1). Table 2) compared to LOD patients after controlling for age, sex, race, and lifespan education years.
Effect of onset-type and education on cognitive trajectories of patients with amnestic-type dementia
Years of lifespan education were found to be a predictor of baseline MMSE (p<0.001) such that higher years of education resulted in higher baseline MMSE scores. Despite the higher baseline MMSE scores, higher years of education related in a steeper decline in MMSE scores over time (years of education*Time: β=−0.05, p<0.001; Table 2) after controlling for age, race, and sex. These results remained even after controlling for baseline MMSE score.
When onset-type was added in the model evaluating lifespan education and MMSE trajectory, the effect of years of education was found to be more pronounced. Patients with higher years of education in the YOD group experienced a steeper decline than patients with comparable years of education in the LOD group (onset-type*Lifespan education years*Time: β=−0.13, p=0.0006; Fig. 2, Table 2). At baseline, there was no statistical difference in the MMSE score between those in the YOD and LOD groups with comparable years of education (p=0.812). Importantly, these results remained even after controlling for baseline MMSE score.
Longitudinal depression in young-onset and late-onset dementia
From the total cohort, a subset of 430 patients had longitudinal GDS scores comprising YOD (n=104) and LOD (n=326) patients. In our memory clinic, we found that 23.07% (n=24) of our YOD and 23.33% (n=76) of our LOD patients were classified as having depression at follow-up. In separate binary logistic regression models for YOD and LOD, lifespan education years, sex, and white matter hyperintensity (Fazekas visual rating scores) were not significant predictors of depression development ( Table 2).
Influence of cerebrovascular disease burden on a global cognitive trajectory in DAT and VaD
The influence of CVD on global cognitive MMSE trajectory was studied among 592 DAT and VaD YOD and LOD patients who had baseline MRI or CT brain. These patients had a baseline MRI scan for quantification of CVD and longitudinal MMSE scores. CVD burden was coded as a categorical variable, absent-to-mild CVD burden and moderate-to-severe CVD burden as described in the "Methods" section. A three-point drop in MMSE was used to define a significant decline in global cognition [34].
Among the 592 patients, 417 (70.4%) had a moderate-to-severe burden of CVD while 175 (29.6%) had an absent-to-mild burden of CVD. Kaplan-Meier analysis demonstrated that patients in the LOD group with the moderate-to-severe burden of CVD demonstrated a statistical trend for faster decline compared to those with absent-to-mild CVD. Among LOD patients with moderate-to-severe CVD, 75% of them demonstrated a 3-point MMSE decline at 2.5 years, while it took 3.6 years for 75% of LOD patients with absent-to-mild CVD to have a similar MMSE decline (p=0.063; Fig. 3). There was no significant difference in time to cognitive decline among YOD patients based on CVD severity (p=0.715).
Additionally, a subset of patients also had longitudinal MoCA scores and the influence of CVD on MoCA trajectory was studied among 368 YOD and LOD patients who had baseline MRI or CT brain. There was no significant difference in the average time to 3-point MoCA decline between the absent-to-mild CVD group (2.09 years) and the moderate-to-severe CVD group (2.03 years, p=0.69). Additionally, there was also no significant difference in a significant decline in MoCA scores based on CVD severity in both the YOD and LOD groups.
Discussion
In a Southeast Asian memory clinic cohort with data spanning a decade, we found a trend towards increasing yearly incidence of both YOD and LOD. We additionally demonstrate that YOD accounted for 28.5% of all dementia patients. DAT was the leading type of dementia as in western cohorts, with VaD being the second leading cause of dementia. Our data also showed that while patients with higher education have higher baseline global cognition, they however have a steeper decline in global cognition, with this finding being more pronounced in YOD patients. LOD patients with moderateto-severe CVD burden also displayed a faster decline in global cognition compared to those with absent-to-mild CVD.
Over the period from 2010 to 2019, overall, there has been an increasing trend in patients with YOD. The reasons for this increase are likely to be multifactorial. Specifically, awareness of YOD both among the general public and within the healthcare system has steadily increased owing to dementia awareness campaigns by many organizations including the Health Promotion Board and the National Neuroscience Institute. The increase in YOD may also be related to the increasing burden of vascular risk factors. Our findings demonstrate that there is a high frequency of vascular risk factors among younger patients. The frequency of hypertension among YOD was 29.7%, while the frequency of hyperlipidemia, diabetes mellitus, and smoking in YOD was 31.2%, 15.7%, and 15.5%, respectively. These vascular risk factors may have contributed to cerebrovascular disease via vascular injury to the brain as well as accelerated amyloid pathology [35][36][37]. Recent studies in Asian and Western cohorts have also indicated a significant role of vascular disease burden in YOD [38][39][40]. In support of our findings, prior studies have shown a high prevalence of vascular risk factors and cerebrovascular disease burden in Singapore and other cities in Asia [19,41]. In turn, high vascular disease burden has been shown to increase the odds of conversion from a prodromal to clinical dementia stage in patients from Singapore [42].
Our findings additionally demonstrate that LOD patients with a moderate-to-severe burden of CVD experience a faster decline in global cognition compared to those with an absent-to-mild CVD burden. Among LOD patients with moderate-to-severe CVD, 75% demonstrated a 3-point decline in MMSE within 2.5 years from diagnosis of dementia, compared to patients with absentto-mild CVD, wherein a 3-point MMSE decline was observed after 3.6 years These findings suggest that CVD may have a major role in the pathogenesis of dementia in Fig. 1 Diagnostic breakdown in young-onset dementia and late-onset dementia. Dementia of the Alzheimer's type was the most common diagnosis in both the young-onset and late-onset groups. The late-onset dementia group had a significantly higher proportion of dementia of the Alzheimer's type patients. Abbreviations: YOD young-onset dementia, LOD late-onset dementia LOD. In this regard, findings from Asia indicate a high co-occurrence of CVD in LOD [43]. Such a co-occurrence and additional presence of vascular risk factors in Asian cohorts have also been associated with greater memory and executive function decline and worse clinical outcomes [44][45][46][47][48]. Additionally, greater dysconnectivity in the default mode network as well as executive control network with consequent impairment in episodic memory as well as executive function has been associated with CVD burden in studies from Asia [46,49,50]. Findings from our group demonstrate that the presence of CVD may also result in atrophy of specific gray matter regions crucial for episodic memory and executive function [51,52]. Moreover, in the Honolulu-Asia Ageing Study, dementia frequency increased with increasing neuritic plaque density and increased further in the presence of cerebrovascular lesions. The association was strongest in patients with sparse neuritic plaques where dementia frequency more than doubled with coexistent cerebrovascular lesions [53]. These findings suggest that CVD may result in accelerated accumulation of amyloid pathology. As for the stronger CVD effect in LOD compared to YOD, it is likely that CVD interacts with the multiple pathologies in the older brain to result in more accelerated neurodegeneration, hence the faster decline in cognition in LOD patients harboring CVD. From a clinical management perspective, our finding of more rapid cognitive decline in patients with concomitant CVD may help clinicians in selecting intensive management of vascular risk factors to slow the rate of decline in their patients with a high CVD burden.
The finding that longitudinal cognitive decline in YOD patients was steeper, especially among those with a higher number of years of lifespan education has important clinical and public health implications. Similar to our findings, previous studies have also reported Table 2 Summary of linear mixed-effects models examining the independent and interactive effects of onset-type, lifespan education, and time on longitudinal MMSE decline and summary of binary logistic regression models examining the effect of lifespan education, sex, and white matter hyperintensity on depression development in YOD and LOD associated with pure pathologies affecting specific brain regions with a more aggressive disease course, resulting in greater neuronal loss and cerebral hypometabolism than LOD. Thus, the protective role of education is unable to sustain optimal cognitive function in YOD and hence they experience a greater cognitive decline [56][57][58]. This is an indication that more resources may need to be devoted to the care, treatment, and research for young-onset dementia. Findings from our group indicate that the opportunity cost to the young-onset group in terms of economic, social, and emotional well-being may be more drastic coupled with ripple effects that impact family members of those with young-onset dementia [59,60]. Notably, little is known about the relationship between onset-type, lifespan education, and cognitive decline in Asian populations and needs further exploration. Findings from our study thus provide important insights into the influence of lifespan education on the future cognitive decline between YOD and LOD in an Asian clinic cohort.
Limitations
The limitations of our paper include that the data came from a single memory clinic and thus may lack generalizability. However, this being the first report from a memory clinic population, we believe our findings will allow other centers in Southeast Asia to examine their cohorts and perform comparisons with our cohort. The lack of biomarker data may be viewed as a weakness, although the availability of clinical data may allow better allocation of resources for the development of biomarkers in this growing region of the world. Additionally, we were unable to include information on the longitudinal progression of CVD as well as diagnostic status and its influence on cognitive decline in the current study. This will be a key focus of our future studies. The availability of data over a 10-year period as well as the use of neuroimaging to define cerebrovascular disease is key strengths of our study.
Conclusion
In conclusion, we demonstrate that there is a high frequency of YOD in our cohort with DAT and VaD being the most common in our clinic cohort. Younger patients with greater years of lifespan education and LOD patients with moderate-to-severe CVD experienced a steep decline in cognition. | 2021-12-01T14:34:48.744Z | 2021-11-30T00:00:00.000 | {
"year": 2021,
"sha1": "6822c8da95c1eea22f5cd969482e4d21f60b7f08",
"oa_license": "CCBY",
"oa_url": "https://alzres.biomedcentral.com/track/pdf/10.1186/s13195-021-00936-y",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d0958ce75546d7008eb7f295e9ddb9258d63cfda",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
211476324 | pes2o/s2orc | v3-fos-license | Compressive Strength, Chloride Ion Penetrability, and Carbonation Characteristic of Concrete with Mixed Slag Aggregate
The shortage of natural aggregates has recently emerged as a serious problem owing to the tremendous growth of the concrete industry. Consequently, the social interest in identifying aggregate materials as alternatives to natural aggregates has increased. In South Korea’s growing steel industry, a large amount of steel slag is generated and discarded every year, thereby causing environmental pollution. In previous studies, steel slag, such as blast furnace slag (BFS), has been used as substitutes for concrete aggregates; however, few studies have been conducted on concrete containing both BFS and Ferronickel slag (FNS) as the fine aggregate. In this study, the compressive strength, chloride ion penetrability, and carbonation characteristic of concrete with both FNS and BFS were investigated. The mixed slag fine aggregate (MSFA) was used to replace 0, 25%, 50%, 75%, and 100% of the natural fine aggregate volume. From the test results, the highest compressive strength after 56 days was observed for the B/F100 sample. The 56 days chloride ion penetrability of the B/F75, and B/F100 samples with the MSFA contents of 75% and 100% were low level, approximately 34%, and 54% lower than that of the plain sample, respectively. In addition, the carbonation depth of the samples decreased with the increase in replacement ratio of MSFA.
Introduction
Owing to the immense growth of the global concrete industry, the shortage of natural aggregates has emerged as a serious problem. In Korea, the lack of aggregates has often led to construction problems. Therefore, a considerable amount of social and research interest has been focused on finding alternative aggregate materials to replace natural aggregates [1][2][3][4][5][6][7]. Various types of steel slag can be considered as alternatives to aggregates for concrete. Ferronickel slag (FNS) is an industrial byproduct of the ferronickel production process. It is obtained after nickel ore and bituminous coal used as raw materials in the ferronickel smelting process are melted at a high temperature and separated from ferronickel [2]. The annual amount of FNS produced in South Korea is over 2 million tons. Most are discarded and cause serious environmental pollution. Generally, the FNS is used as a substitute material for foundry sand, abrasive, and serpentine [2]. In addition, studies were carried out on the use of FNS as fine aggregate for concrete [5][6][7].
Saha et al. [5] studied the strength and durability of cement mortar using FNS as the replacement of natural sand. The maximum compressive strength of cement mortar was obtained by replacing 50% sand with FNS. In addition, X-ray diffraction test results showed that the pozzolanic reaction of fly ash helped to reduce the strength loss.
Choi et al. [6] investigated the alkali-silica reactivity of cementitious materials with FNS fine aggregate produced under different cooling conditions. The alkali-silica reactivity of mortar using FNS fine aggregate was dependent on the cooling speed and particle size of FNS.
Lee et al. [7] investigated the mechanical properties and resistances to freezing and thawing of concrete using an air-cooled ferronickel slag (ACFNS) fine aggregate. The compressive strength and static modulus of elasticity of the concrete with ACFNS fine-aggregate increased with increasing the replacement ratio of ACFNS.
Blast furnace slag (BFS) is also another steel industry byproduct that is obtained from blast furnaces used in the manufacturing of pig iron. The annual amount of produced BFS in Korea is approximately 15 million tons. The BFS has been extensively used as a successful replacement material for Portland cement in concrete materials to improve the durability and in the production of high-strength concrete, with environmental and economic benefits, such as resource conservation, CO 2 reduction, and energy savings [8][9][10][11]. In addition, BFS also can be used as the aggregate for cement mortar of concrete. In previous studies [12,13], BFS was used as a substitute material for concrete aggregate; however, few studies have been conducted on concrete with slag aggregate containing both FNS and BFS as the fine aggregate.
In this study, the slump, air content, compressive strength, resistance to chloride ions, and carbonation characteristic of concrete with both FNS and BFS as the fine aggregate were investigated to effectively utilize the mixed slag fine aggregate (MSFA) as a substitute material for the natural aggregate in the concrete industry.
Materials
An ASTM type I ordinary Portland cement manufactured by Asia Cement Co. (Seoul, Korea), BFS powder obtained from Daehan Slag Co., Ltd. (Gwangyang, Korea), and fly ash obtained from the Honam power plant in Korea were used as cementitious materials in this study. In addition, a crushed coarse aggregate (Granite, G max 25 mm) with a density of 2.65 and fineness modulus of 6.49 was used. Table 1 summarizes the chemical compositions of the cement, BFS powder, and fly ash used in the experiment. Natural, BFS, and FNS fine aggregates were used in the experiment. Natural sand (NS) was used as the natural fine aggregate with a maximum size of 5 mm and fineness modulus of 2.89. The BFS sand (BS) and FNS sand (FS) used as the slag fine aggregates were obtained from POSCO, Korea. Figure 1 shows the used fine aggregate samples, while Table 2 summarizes their physical properties. Figure 2 shows the particle size distributions of the NS, BS, FS, and MSFA (B/F) with a BFS:FNS ratio of 5:5 by volume. The particle size distribution of each aggregate was compared with the standard proposed by KS F 2527. The fineness modulus of the BS was smaller than that of the NS, while that of the FS was higher than that of the NS. The fineness modulus of B/F was 2.94, similar to that of the NS.
Mixing Proportions and Specimen Preparation
In this study, the MSFA with the BFS:FNS mixture ratio of 5:5 was used to replace 0 (plain), 25%, 50%, 75%, and 100% of the volume of the NS. A constant water-to-binder ratio of 0.518 was used. In all mixtures, the BFS powder and fly ash were used to replace 20% and 10% (weight) of the cement, respectively. The mixing proportions of the concrete samples are summarized in Table 3. In addition, a water-reducing agent (WRA; S Co., Seoul, Korea) was used to control the fluidities of all mixtures. The components of the concrete samples were mixed in a mechanical mixer. Cylindrical molds (∅100 × 200 mm) were fabricated for the compressive strength test. After 24 h, the specimens were removed from their molds and cured at 20 °C in a water tank.
Mixing Proportions and Specimen Preparation
In this study, the MSFA with the BFS:FNS mixture ratio of 5:5 was used to replace 0 (plain), 25%, 50%, 75%, and 100% of the volume of the NS. A constant water-to-binder ratio of 0.518 was used. In all mixtures, the BFS powder and fly ash were used to replace 20% and 10% (weight) of the cement, respectively. The mixing proportions of the concrete samples are summarized in Table 3. In addition, a water-reducing agent (WRA; S Co., Seoul, Korea) was used to control the fluidities of all mixtures. The components of the concrete samples were mixed in a mechanical mixer. Cylindrical molds (∅100 × 200 mm) were fabricated for the compressive strength test. After 24 h, the specimens were removed from their molds and cured at 20 °C in a water tank.
Mixing Proportions and Specimen Preparation
In this study, the MSFA with the BFS:FNS mixture ratio of 5:5 was used to replace 0 (plain), 25%, 50%, 75%, and 100% of the volume of the NS. A constant water-to-binder ratio of 0.518 was used. In all mixtures, the BFS powder and fly ash were used to replace 20% and 10% (weight) of the cement, respectively. The mixing proportions of the concrete samples are summarized in Table 3. In addition, a water-reducing agent (WRA; S Co., Seoul, Korea) was used to control the fluidities of all mixtures. The components of the concrete samples were mixed in a mechanical mixer. Cylindrical molds (∅100 × 200 mm) were fabricated for the compressive strength test. After 24 h, the specimens were removed from their molds and cured at 20 • C in a water tank. The slump and air content tests of the concrete samples were carried out in accordance with Korean Standards (KS) F 2402 [14] and KS F 2421 [15], respectively. The compressive strength test was carried out after 7, 14, 28, and 56 days in accordance with KS F 2405 [16]. The presented strength test values are the average values of three samples.
Chloride ion penetration tests were carried out after 7, 14, 28, and 56 days, according to ASTM 1202 C [17]. Specimens having dimensions of ∅100 × 50 mm, obtained by cutting the ∅100 × 200 mm cylindrical specimens, were used in the test. The specimen and equipment used in the chloride ion penetration test are shown in Figure 3. The slump and air content tests of the concrete samples were carried out in accordance with Korean Standards (KS) F 2402 [14] and KS F 2421 [15], respectively. The compressive strength test was carried out after 7, 14, 28, and 56 days in accordance with KS F 2405 [16]. The presented strength test values are the average values of three samples.
Chloride ion penetration tests were carried out after 7, 14, 28, and 56 days, according to ASTM 1202 C [17]. Specimens having dimensions of ∅100 × 50 mm, obtained by cutting the ∅100 × 200 mm cylindrical specimens, were used in the test. The specimen and equipment used in the chloride ion penetration test are shown in Figure 3. Accelerated-carbonation test (Figure 4) of the concrete samples (∅100 × 200 mm) was carried out during 7, 28, and 56 days, according to KS F 2584 [18], by using an accelerated-carbonation chamber at a constant temperature of 20 ± 2 °C, constant humidity of 60 ± 5%, and constant CO2 concentration of 5 ± 0.2%. During the testing period, the samples were split into two halves and the carbonation depth was measured by spraying an approximately 1% phenolphthalein solution on the broken surface of the sample, after the dust was removed.
(a) carbonation chamber (b) split into two halves Accelerated-carbonation test (Figure 4) of the concrete samples (∅100 × 200 mm) was carried out during 7, 28, and 56 days, according to KS F 2584 [18], by using an accelerated-carbonation chamber at a constant temperature of 20 ± 2 • C, constant humidity of 60 ± 5%, and constant CO 2 concentration of 5 ± 0.2%. During the testing period, the samples were split into two halves and the carbonation depth was measured by spraying an approximately 1% phenolphthalein solution on the broken surface of the sample, after the dust was removed. The slump and air content tests of the concrete samples were carried out in accordance with Korean Standards (KS) F 2402 [14] and KS F 2421 [15], respectively. The compressive strength test was carried out after 7, 14, 28, and 56 days in accordance with KS F 2405 [16]. The presented strength test values are the average values of three samples.
Chloride ion penetration tests were carried out after 7, 14, 28, and 56 days, according to ASTM 1202 C [17]. Specimens having dimensions of ∅100 × 50 mm, obtained by cutting the ∅100 × 200 mm cylindrical specimens, were used in the test. The specimen and equipment used in the chloride ion penetration test are shown in Figure 3. Accelerated-carbonation test (Figure 4) of the concrete samples (∅100 × 200 mm) was carried out during 7, 28, and 56 days, according to KS F 2584 [18], by using an accelerated-carbonation chamber at a constant temperature of 20 ± 2 °C, constant humidity of 60 ± 5%, and constant CO2 concentration of 5 ± 0.2%. During the testing period, the samples were split into two halves and the carbonation depth was measured by spraying an approximately 1% phenolphthalein solution on the broken surface of the sample, after the dust was removed. Figure 5 shows the slumps and WRA dosages of the samples with the MSFAs. The slumps of all mixtures were similar, in the range of 200 to 210 mm, regardless of the replacement ratio of MSFA. In addition, the dosage of WRA used to control the fluidity of the plain sample with the NS was 0.9% of the binder weight. The WRA dosage decreased with the increase in replacement ratio of MSFA. The WRA dosage of the B/F100 sample (only with the MSFA) was 0.2% of the binder weight. The tendency that the fluidity of the mixture with the BFS fine aggregate is better than that of the mixture with the NS owing to the vitreous texture of the BFS particle is similar to those in previous reports [13,19]. Figure 5 shows the slumps and WRA dosages of the samples with the MSFAs. The slumps of all mixtures were similar, in the range of 200 to 210 mm, regardless of the replacement ratio of MSFA. In addition, the dosage of WRA used to control the fluidity of the plain sample with the NS was 0.9% of the binder weight. The WRA dosage decreased with the increase in replacement ratio of MSFA. The WRA dosage of the B/F100 sample (only with the MSFA) was 0.2% of the binder weight. The tendency that the fluidity of the mixture with the BFS fine aggregate is better than that of the mixture with the NS owing to the vitreous texture of the BFS particle is similar to those in previous reports [13,19]. Figure 8 shows the variation in chloride ion penetrability of the sample with both BFS and FNS as the fine aggregate. The total charge passed through the sample during the considered period was calculated according to ASTM C 1202. After seven days, the charge passed through the plain sample was approximately 9273 C. The charge passed through the B/F100 sample with 100% MSFA was the smallest, approximately 37% smaller than that of the plain sample. After 14 days of curing, the largest passed charge (approximately 8083 C) was observed for the plain sample, which contained only the NS. The charges passed through all samples with the MSFAs were smaller than that through the plain sample. The charge passed through the B/F100 sample was the smallest (4106 C), approximately 50% smaller than that through the plain sample (8084 C). After 28 days of curing, the charge passed through the sample decreased with the increase in replacement ratio of MSFA (3993 C (plain) to 2041 C (B/F100)). The chloride ion penetrabilities of all samples were moderate level (2000-4000 C; ASTM C 1202 [17]). After 56 days, the charge passed through the sample decreased with the increase in replacement ratio of MSFA. The chloride ion penetrabilities of B/F50, B/F75, and B/F100 with MSFA contents of 50%, 75%, and 100% were low level (1000-2000 C; ASTM C 1202), approximately 17%, 34%, and 54% lower than that of the plain sample, respectively. The resistances to penetration of chloride ions of the samples with the MSFAs were better than that of the plain sample. The tendency that the concrete with BFS has a good resistance to chloride ions is similar to those observed in previous studies [21,22]. contents of 50%, 75%, and 100% were low level (1000-2000 C; ASTM C 1202), approximately 17%, 34%, and 54% lower than that of the plain sample, respectively. The resistances to penetration of chloride ions of the samples with the MSFAs were better than that of the plain sample. The tendency that the concrete with BFS has a good resistance to chloride ions is similar to those observed in previous studies [21,22]. Figure 9 shows the relation between the compressive strength and chloride ion penetrability for the samples with different replacement ratios of MSFA. The chloride ion penetrability decreased with the increase in compressive strength. In addition, the chloride ion penetrabilities of the samples with the MSFAs were lower than that of the sample with the NS at the same compressive strength. Figure 10 shows the variation in carbonation depth of the sample with both BFS and FNS as the fine aggregate. A higher replacement ratio of MSFA led to a smaller carbonation depth. After seven days of treatment in the accelerated carbonation chamber, the carbonation depth of the plain sample was approximately 1.18 mm, while those of the samples with the MSFAs were approximately 45% to 69% (0.53 to 0.82 mm) of that of the plain sample. After 28 days, the carbonation depths of the plain, B/F25, B/F50, B/F75, and B/F100 samples were approximately 1.16, 0.94, 0.83, 0.76, and 0.68 mm, respectively. The carbonation depth of B/F100 was approximately 41% smaller than that of the plain sample. After 56 days of accelerated carbonation testing, the carbonation depths of all samples were increased. The largest carbonation depth (1.26 mm) was observed for the plain sample. The carbonation depth decreased with the increase in replacement ratio of MSFA. The carbonation depth of B/F100 was the smallest (0.79 mm). The tendency that the resistance to carbonation of the concrete with the steel slag as the aggregate is better than that of the concrete with the natural aggregate is similar to that in a previous report [23]. This shows that the use of the MSFA in the mortar or concrete Figure 10 shows the variation in carbonation depth of the sample with both BFS and FNS as the fine aggregate. A higher replacement ratio of MSFA led to a smaller carbonation depth. After seven days of treatment in the accelerated carbonation chamber, the carbonation depth of the plain sample was approximately 1.18 mm, while those of the samples with the MSFAs were approximately 45% to 69% (0.53 to 0.82 mm) of that of the plain sample. After 28 days, the carbonation depths of the plain, B/F25, B/F50, B/F75, and B/F100 samples were approximately 1.16, 0.94, 0.83, 0.76, and 0.68 mm, respectively. The carbonation depth of B/F100 was approximately 41% smaller than that of the plain sample. After 56 days of accelerated carbonation testing, the carbonation depths of all samples were increased. The largest carbonation depth (1.26 mm) was observed for the plain sample. The carbonation Materials 2020, 13, 940 8 of 10 depth decreased with the increase in replacement ratio of MSFA. The carbonation depth of B/F100 was the smallest (0.79 mm). The tendency that the resistance to carbonation of the concrete with the steel slag as the aggregate is better than that of the concrete with the natural aggregate is similar to that in a previous report [23]. This shows that the use of the MSFA in the mortar or concrete can be effective for the improvement in carbonation resistance. was approximately 1.18 mm, while those of the samples with the MSFAs were approximately 45% to 69% (0.53 to 0.82 mm) of that of the plain sample. After 28 days, the carbonation depths of the plain, B/F25, B/F50, B/F75, and B/F100 samples were approximately 1.16, 0.94, 0.83, 0.76, and 0.68 mm, respectively. The carbonation depth of B/F100 was approximately 41% smaller than that of the plain sample. After 56 days of accelerated carbonation testing, the carbonation depths of all samples were increased. The largest carbonation depth (1.26 mm) was observed for the plain sample. The carbonation depth decreased with the increase in replacement ratio of MSFA. The carbonation depth of B/F100 was the smallest (0.79 mm). The tendency that the resistance to carbonation of the concrete with the steel slag as the aggregate is better than that of the concrete with the natural aggregate is similar to that in a previous report [23]. This shows that the use of the MSFA in the mortar or concrete can be effective for the improvement in carbonation resistance. Figure 11 shows the relation between the compressive strength and carbonation depth for the concrete samples with different replacement ratios of MSFA. With the increase in accelerated carbonation testing period, the carbonation depth increase was accompanied by an increase in compressive strength. In addition, the carbonation depths of the samples with the MSFAs were smaller than that of the plain sample at the same compressive strength.
Carbonation Depth
Materials 2020, 13, x FOR PEER REVIEW 9 of 11 Figure 11 shows the relation between the compressive strength and carbonation depth for the concrete samples with different replacement ratios of MSFA. With the increase in accelerated carbonation testing period, the carbonation depth increase was accompanied by an increase in compressive strength. In addition, the carbonation depths of the samples with the MSFAs were smaller than that of the plain sample at the same compressive strength.
Conclusions
The conclusions of this study can be summarized as follows.
(1) The slumps of all mixtures were similar (200 to 210 mm), regardless of the replacement ratio of MSFA. The WRA dosage decreased with the increase in replacement ratio of MSFA.
(2) The compressive strength of the plain sample was approximately 23.9 MPa, while those of the samples with the MSFAs were in the range of 21.2 to 23.5 MPa after seven days. After 56 days, the highest compressive strength (approximately 38.8 MPa) was observed for the B/F100 sample. The increase in compressive strength could be explained as the particle size distribution of the MSFA was similar to that of the NS and the formation of the secondary CSH gel was initiated.
(3) After seven days, the charge passed through B/F100 was the smallest, approximately 37%
Conclusions
The conclusions of this study can be summarized as follows. | 2020-02-26T14:04:35.957Z | 2020-02-01T00:00:00.000 | {
"year": 2020,
"sha1": "23755f21bac9f466851536a13004654adb8f6d59",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1996-1944/13/4/940/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "a3c3dad0a2950dbb90352152cccdde2841ec40d6",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Medicine",
"Materials Science"
]
} |
21141592 | pes2o/s2orc | v3-fos-license | Whispering-Gallery Mode Resonators for Detecting Cancer
Optical resonators are sensors well known for their high sensitivity and fast response time. These sensors have a wide range of applications, including in the biomedical fields, and cancer detection is one such promising application. Sensor diagnosis currently has many limitations, such as being expensive, highly invasive, and time-consuming. New developments are welcomed to overcome these limitations. Optical resonators have high sensitivity, which enable medical testing to detect disease in the early stage. Herein, we describe the principle of whispering-gallery mode and ring optical resonators. We also add to the knowledge of cancer biomarker diagnosis, where we discuss the application of optical resonators for specific biomarkers. Lastly, we discuss advancements in optical resonators for detecting cancer in terms of their ability to detect small amounts of cancer biomarkers.
Introduction
Cancer, a hazardous non-communicable disease, is currently the main challenge in healthcare. Cancer Research UK shows that more than 14.1 million people had cancer in 2012 [1]. Despite the growing fatal rate of the disease, cancer develops in stages attributed to different hazard levels; the faster the cancer is detected, the higher the chance it can be cured. Figure 1 shows the survival rates for ovarian stromal cancer and cervical cancer visualized from the 5-year survival rates, which predict the chance of survival for those years [2]. Cancer is usually divided into four stages: Stage I, cancer is small and contained within the organ of origin; Stage II, cancer has grown larger but has not spread to other organs; Stage III, cancer has spread to nearby tissues and can reach the lymph nodes; and Stage IV, metastatic cancer; cancer has spread to other organs in the body. A, B, and C are used to indicate the substage, e.g., lung carcinoid tumor stage IIA [3]. However, the number staging system is an abstraction that describes the disease progression. Healthcare professionals typically describe the disease stage using the tumor-node-metastasis (TNM) system. T evaluates the size of the cancer and its area of spread to nearby tissue on a scale of 1-4; N defines whether the cancer reaches a lymph node on a scale of 0-3; M indicates whether the cancer has spread to another organ, and the value is binary: either 0 or 1 [4]. Table 1 shows the relations between the two systems. [5], (b) Cervical cancer [6]. Table 1. The relationship between the number system and TNM system of cancer for cervical cancer [7]. Stage IVB T1-T3, N0-N3, M1 Even now, early diagnoses for cancer are scarce. A qualitative study in 2015 asserted that late diagnosis is the result of difficulty in making appointments, worry regarding doctor availability, and unwillingness to learn of the development of cancer [8]. Apart from the emotional concern of scarring from unfortunate discovery, the findings reflect the difficulty in accessing diagnostic technologies even in developed countries. Currently, cancer detection is still based on highly invasive, time-consuming, and costly processes.
Number System TNM System
When point-of-care (POC) diagnosis was introduced, the concept of real-time, or at least shorter diagnosis time was heralded as the future of healthcare. The actual definition of POC diagnosis is testing at or near the site of patient care whenever medical care is needed [9]. The first biosensor was a glucose meter that became popular in the late 1980s [10,11]. However, POC diagnosis is not a new concept. At the dawn of civilization, doctors visited patients' homes and performed diagnosis and treatment without today's centralized medical centers. The centralized medical complex was introduced in the early 17th century [12], enhancing the mobility of technology and knowledge. Over time, and as the world population increased exponentially, more patients have become dependent on this system. The demographic growth has resulted in an overwhelming demand for healthcare services. The diagnostic and treatment capability are then limited by the capacity of the available technology.
Diagnosis requires novel tools and instruments. Based on the concept of reducing diagnostic times and steps, modern biosensors have come to play an important role in fulfilling the ideology of POC. Some cancers now can be detected at an early stage using less invasive and lower-cost [5], (b) Cervical cancer [6]. Table 1. The relationship between the number system and TNM system of cancer for cervical cancer [7].
When point-of-care (POC) diagnosis was introduced, the concept of real-time, or at least shorter diagnosis time was heralded as the future of healthcare. The actual definition of POC diagnosis is testing at or near the site of patient care whenever medical care is needed [9]. The first biosensor was a glucose meter that became popular in the late 1980s [10,11]. However, POC diagnosis is not a new concept. At the dawn of civilization, doctors visited patients' homes and performed diagnosis and treatment without today's centralized medical centers. The centralized medical complex was introduced in the early 17th century [12], enhancing the mobility of technology and knowledge. Over time, and as the world population increased exponentially, more patients have become dependent on this system. The demographic growth has resulted in an overwhelming demand for healthcare services. The diagnostic and treatment capability are then limited by the capacity of the available technology.
Diagnosis requires novel tools and instruments. Based on the concept of reducing diagnostic times and steps, modern biosensors have come to play an important role in fulfilling the ideology of POC. Some cancers now can be detected at an early stage using less invasive and lower-cost procedures [13][14][15] through advancements in sensor technologies such as electrochemical sensor [16], The graph shows increasing trends in this area of study. Remark: The data were gathered from the Web of Science database (www.webofknowledge.com) with the keyword "Optical Resonator", and then the category filter "Optics" was applied [19].
Optical resonators enhance light guide properties for detection in the environment. The sensors are based on light confinement in waveguide structures, such as ring resonators, Fabry-Perot resonators, whispering-gallery mode (WGM) resonators (WGR) and high-contrast gratings. Currently, there is high growth in optical resonator research because of their promising properties, such as high sensitivity, low response time, compactness, and immunity to electromagnetic interference [17,18,20,21], which are unlike other waveguide sensors or fiber sensors, in which sensor dimension limits light-matter interaction. Optical resonators determine the interaction length by the quality (Q) factor [22], the dimensionless quantity of temporal confinement of light resonating inside the sensor [23]. WGR and ring resonators are emerging cancer detection technologies known for their easy fabrication and high performance. These aspects allow the sensors to perform early detection when cancer biomarker concentrations are still low. This is the first review of such technologies for cancer detection.
Unlike other sensors, such as electrochemical sensors, which usually require probe labeling or analyte modification [24], optical resonators require no chemical modification of the analyte [21]. Optical resonators can diagnose different cancer biomarkers in a short time [17,[25][26][27]. These properties also match the concept of POC diagnosis. Optical resonators have wide-ranging applications. As label-free biosensors, most optical resonators eliminate the pre-detection procedures by removing, or at least shortening, sample preparation time. For label-based sensing, the biological sample must undergo analyte marking, for example, with a fluorescent dye or radio marker. This not only requires extra cost for the labeling material, but also causes undesirable delay between sample collection and analysis. This delay and labeling approach can result in changes in both the physical and chemical properties of the analyte [17,25,[28][29][30].
In this review, we introduce the fundamentals of optical resonators and evaluate the various sensors, and describe WGR and ring resonators in more detail. Then, we provide an overview of sensor fabrication and preparation. We briefly introduce cancer biomarkers and discuss their detection. We then review recent works on the application of optical resonators in cancer biomarker detection focused on early stage detection. We also discuss how the high sensitivity of the optical resonators plays an important role in the early stage cancer diagnosis and how these optical technologies can potentially save lots of lives.
Performance Parameters of Optical Resonators
Similar to other biomedical sensors, optical resonators are defined by sensitivity, limit of detection (LOD), resolution, dynamic range, and selectivity. Sensitivity describes the change in output upon a change in the physical properties of the sensor. For optical resonators, sensitivity is the ability to transduce the change (binding) on the resonator surface into an output, i.e., spectral shift. This can be described as the shift of resonant wavelength (δλ) or the difference of light intensity (δI) at a particular wavelength when the analyte is bound to the surface. The units are usually given as nm/RIU (wavelength shift over refractive index unit) and W/(m 2· RIU) [31].
LOD is the minimum quantity of analyte to be detected by the sensor in the actual detection environment. The LOD is limited by noise source, efficiency of the optical detector in the optical system, and the amount of light that can be detected [32]. Optical system noise can come from: (1) thermal noise, (2) microphone vibrations (3) dark current of the detector and (4) shot noise of the optical detector and loss due to material defects. These amounts of noise will introduce losses in the optical resonator [33][34][35]. The LOD for optical resonators is normally defined by two parameters: First, the RIU and surface coverage, given as pg/mm 2 , or it can be described as sample concentration in the molar unit. Label-free sensors are normally described by RIU, while surface coverage/sample concentration are used in both labeled and label-free sensors [36]. RIU can be converted to other physical parameter of analyte by using response unit. In optical sensors, it has been well established and validated over a wide range of different proteins that one response unit of 10 −6 RIU is equivalent to 1 pg/mm 2 . By knowing the sensing area size, the mass on the sensor can be determined. Similarly, with a known molecular weight, one can determine the number of molecules on the sensor [32].
Selectivity describes the ability to distinguish between the desired analyte and others in the environment. Whenever the analyte does not specifically bind to the sensor, it results in errors to the output signal. Biosensors are now equipped with biologically selective species such as antibody-antigen [37] or enzyme-substrate [38]. However, synthetic substances have also been developed; aptamer is one such example [39]. The process of introducing the binding material to the resonator surface to optimize selectivity is termed surface functionalization.
Unlike other sensors, biomedical sensor development aims to reduce the sample volume, especially when the analyte requires invasive extraction procedures from a patient, such as a blood sample. The other parameter is the Q-factor, introduced earlier as the basic parameter for determination of the lifetime of light resonating in the waveguided resonator. The Q-factor can be defined as: where ω 0 is the angular frequency. The linear frequency of the mode is described by v 0 . τ is the time for the field intensity to decay by the factor of e, the so-called cavity ring down lifetime. According to the equation, ∆w FWHM is inversely proportional to τ and determines the line width: the uncertainty of the frequency of resonance in angular frequency; FWHM stands for full width at half-maximum. The highest Q-value reported so far is 2 × 10 10 on the spheroidal crystalline WGMR with a resonant wavelength of 1300 nm [40]. For amorphous material, it is 8 × 10 9 with a 633 nm resonant wavelength [41].
Light Coupling
Light coupling is commonly referred to as evanescent wave coupling in optical resonator research. It can be described as inducing light on another medium without contact. As two optical guide components are placed within evanescent zone distance, light is coupled into the resonant structure aided by a phase-matched evanescent field. A tunable laser source is commonly used for input; the wavelength of the source can be adjusted. At the matching wavelength, the intensity dip can be observed using a photodetector. As mentioned earlier, the optical resonator interaction length is the effect of a Q-factor. The length is described by: From Equation (1), L eff is the effective interaction length and n is the refractive index of the resonator. Q-factors of typical ring resonators range from 10 4 to 10 9 . λ is resonant wavelength; the matching wavelength is determined by the resonant condition: where r describes the outer radius of the ring resonator; n eff refers to the effective refractive index, which is sensitive to the binding event on the surface; m is the mode number; λ m is the resonant free-space wavelength of the tunable laser.
Optical resonators mostly consist of two optical waveguide structures: the first serves the system as an input and output waveguide, where light enters and the signal is detected at the other end. The other structure is the resonator structure, which confines the light propagated from the first structure. There are various techniques for coupling the light, the most common being tapered coupling. Examples of methods for illuminating resonator structures ( Figure 3) include tapered coupling [42], prism coupling [43], angled fiber coupling [44], planar waveguide side coupling [39,45,46], free-space coupling or direct illumination [47,48], and polished half-block coupler [49].
Light Coupling
Light coupling is commonly referred to as evanescent wave coupling in optical resonator research. It can be described as inducing light on another medium without contact. As two optical guide components are placed within evanescent zone distance, light is coupled into the resonant structure aided by a phase-matched evanescent field. A tunable laser source is commonly used for input; the wavelength of the source can be adjusted. At the matching wavelength, the intensity dip can be observed using a photodetector. As mentioned earlier, the optical resonator interaction length is the effect of a Q-factor. The length is described by: From Equation (1), Leff is the effective interaction length and n is the refractive index of the resonator. Q-factors of typical ring resonators range from 10 4 to 10 9 . λ is resonant wavelength; the matching wavelength is determined by the resonant condition: where r describes the outer radius of the ring resonator; neff refers to the effective refractive index, which is sensitive to the binding event on the surface; m is the mode number; λm is the resonant freespace wavelength of the tunable laser.
Optical resonators mostly consist of two optical waveguide structures: the first serves the system as an input and output waveguide, where light enters and the signal is detected at the other end. The other structure is the resonator structure, which confines the light propagated from the first structure. There are various techniques for coupling the light, the most common being tapered coupling. Examples of methods for illuminating resonator structures ( Figure 3) include tapered coupling [42], prism coupling [43], angled fiber coupling [44], planar waveguide side coupling [39,45,46], free-space coupling or direct illumination [47,48], and polished half-block coupler [49]. Tapered coupling has 99.8% coupling efficiency [50]; the losses are the result of material absorption, scattering, and bending losses from fiber stress. Comparable efficiency has been reported for the half-block coupler [49]. Experimental studies have shown that prism coupling has approximately 80% efficiency [51,52]. Angle-polished fiber coupling, also known as "pigtailing," has Tapered coupling has 99.8% coupling efficiency [50]; the losses are the result of material absorption, scattering, and bending losses from fiber stress. Comparable efficiency has been reported for the half-block coupler [49]. Experimental studies have shown that prism coupling has approximately 80% efficiency [51,52]. Angle-polished fiber coupling, also known as "pigtailing," has 60% efficiency [44]. Tapered coupling has the advantages of not only higher coupling efficiency but also ease of fabrication and preparation. Despite their lower coupling efficiency, the other coupling methods are of research interest, as they can provide more robustness [18].
Whispering-Gallery Mode
Lord Rayleigh discovered WGM in 1878, as he whispered on one side of the curved wall inside St. Paul's Cathedral (London, UK). His voice could be heard 40 m away [53,54]. This demonstrated the phenomena of the acoustic wave (sound), which could travel along the edge of the gallery hall with negligible loss. As he observed this phenomenon, he also proposed that electromagnetic waves could travel with this mode. In 1961, the WGM of optical light was first reported in a spherical microstructure [55]. Instead of the curved wall-guided whispered voice, the light was entirely internally reflected in a confined cavity. Later on, WGMs in liquid were also studied. Following Lord Rayleigh's discovery, Debye and Mie published two important theoretical works. Debye determined the resonant eigenfrequencies for dielectric and metallic spheres in 1909 [56]. Mie studied electromagnetic wave scattering in microspheres [57]. These later became widely discussed in both theoretical and experimental works [53].
Optical WGM, as mentioned earlier, was discovered in a microsphere resonator. Crystalline calcium fluoride (CaF 2 ) was fabricated as the resonator. A pulsed laser was illuminated in a tangential direction to the sphere. The detected output laser showed transient oscillation instead of spikes from the input, confirming the presence of WGM [55]. In 1981, WGM in liquid resonators was observed for the first time. In the experiment, a liquid droplet was optically levitated by a laser beam, and the scattered light was detected [58]. WGR have many optics and optoelectronics applications and are studied in both passive and active mode [59]. The previous applications for passive mode include optical and photonic single resonator filters [60,61], high-order filter or cascade resonators [62,63], tunable filters [64], WGM filters in optoelectronic oscillator (OEO) [65,66], and sensors, which can be biological, chemical, or mechanical [67][68][69][70]. The active-mode WGM is commonly utilized as a laser source, and involves wave mixing devices such as continuous-wave (CW)-WGM laser, i.e., the miniature laser [55,71], resonator-modified scattering [72], switches and modulators [73], OEO [74], pulse propagation and generation, and wave-mixing oscillator [59]. The comparison of WGM of sound and optical light is shown in Figure 4. 60% efficiency [44]. Tapered coupling has the advantages of not only higher coupling efficiency but also ease of fabrication and preparation. Despite their lower coupling efficiency, the other coupling methods are of research interest, as they can provide more robustness [18].
Whispering-Gallery Mode
Lord Rayleigh discovered WGM in 1878, as he whispered on one side of the curved wall inside St. Paul's Cathedral (London, UK). His voice could be heard 40 m away [53,54]. This demonstrated the phenomena of the acoustic wave (sound), which could travel along the edge of the gallery hall with negligible loss. As he observed this phenomenon, he also proposed that electromagnetic waves could travel with this mode. In 1961, the WGM of optical light was first reported in a spherical microstructure [55]. Instead of the curved wall-guided whispered voice, the light was entirely internally reflected in a confined cavity. Later on, WGMs in liquid were also studied. Following Lord Rayleigh's discovery, Debye and Mie published two important theoretical works. Debye determined the resonant eigenfrequencies for dielectric and metallic spheres in 1909 [56]. Mie studied electromagnetic wave scattering in microspheres [57]. These later became widely discussed in both theoretical and experimental works [53].
Optical WGM, as mentioned earlier, was discovered in a microsphere resonator. Crystalline calcium fluoride (CaF2) was fabricated as the resonator. A pulsed laser was illuminated in a tangential direction to the sphere. The detected output laser showed transient oscillation instead of spikes from the input, confirming the presence of WGM [55]. In 1981, WGM in liquid resonators was observed for the first time. In the experiment, a liquid droplet was optically levitated by a laser beam, and the scattered light was detected [58]. WGR have many optics and optoelectronics applications and are studied in both passive and active mode [59]. The previous applications for passive mode include optical and photonic single resonator filters [60,61], high-order filter or cascade resonators [62,63], tunable filters [64], WGM filters in optoelectronic oscillator (OEO) [65,66], and sensors, which can be biological, chemical, or mechanical [67][68][69][70]. The active-mode WGM is commonly utilized as a laser source, and involves wave mixing devices such as continuous-wave (CW)-WGM laser, i.e., the miniature laser [55,71], resonator-modified scattering [72], switches and modulators [73], OEO [74], pulse propagation and generation, and wave-mixing oscillator [59]. The comparison of WGM of sound and optical light is shown in Figure 4. [54,75]. The phenomenon can be observed as light trapped inside the spherical structure [76]. [54,75]. The phenomenon can be observed as light trapped inside the spherical structure [76].
Detection Mechanism
Total internal reflection generates an evanescent wave on the resonator surface. This allows the resonator to detect any analyte in the environment bound with the resonator. Fundamentally, optical resonators are sensitive to the refractive index. The analyte molecules binding to the resonator surface cause a shift in the effective refractive index [33,77]. The measurement is processed by plotting the graph between light intensity versus wavelength, i.e., the so-called spectral shift. The surface molecule density is related to the spectral shift and can be described by the first-order perturbation theory [68,78,79].
Equation (3) reveals the relationship between molecule surface density (σ) and spectral of microring resonator, where λ is resonant wavelength and δλ is the shift of resonant wavelength; ε 0 is constant vacuum permittivity, α ex is excess polarizability for molecules; n ring and n buffer are the refractive index of the microsphere resonator and buffer solution, respectively, while r is the ring radius. Figure 5 depicts the optical resonator system.
Detection Mechanism
Total internal reflection generates an evanescent wave on the resonator surface. This allows the resonator to detect any analyte in the environment bound with the resonator. Fundamentally, optical resonators are sensitive to the refractive index. The analyte molecules binding to the resonator surface cause a shift in the effective refractive index [33,77]. The measurement is processed by plotting the graph between light intensity versus wavelength, i.e., the so-called spectral shift. The surface molecule density is related to the spectral shift and can be described by the first-order perturbation theory [68,78,79].
Equation (3) reveals the relationship between molecule surface density (σ) and spectral of microring resonator, where λ is resonant wavelength and δλ is the shift of resonant wavelength; ε0 is constant vacuum permittivity, αex is excess polarizability for molecules; nring and nbuffer are the refractive index of the microsphere resonator and buffer solution, respectively, while r is the ring radius. Figure 5 depicts the optical resonator system.
Optical resonators or optical resonant sensors are evanescent wave-based sensors [36], which can be fabricated as a microstructure with different geometries. The sensors trap light inside their microcavities, allowing the optical rays to resonate in the confined space aided by light coupling via optical waveguides. Optical resonators are used not only for biomedical detection [21,96,97] but also for gas detection [98], environmental control [87], toxin detection [99], temperature detection [70,100], microforce detection [67], single protein detection [90] and even nanoparticle detection [101,102].
Optical resonators or optical resonant sensors are evanescent wave-based sensors [36], which can be fabricated as a microstructure with different geometries. The sensors trap light inside their microcavities, allowing the optical rays to resonate in the confined space aided by light coupling via optical waveguides. Optical resonators are used not only for biomedical detection [21,96,97] but also for gas detection [98], environmental control [87], toxin detection [99], temperature detection [70,100], microforce detection [67], single protein detection [90] and even nanoparticle detection [101,102].
WGM Microsphere Resonators
These three-dimensional (3D) resonators are typically fabricated by introducing heat to one end of the optical fiber, which will melt the fiber. The molten material soon forms a spherical shape with the aid of surface tension. The sphere has low surface roughness, helping the sensor achieve dramatically high Q-factors in the range of 10 6 to 10 9 . The sensor has a very low LOD: 10 −8 to 10 −9 RIU [68,78]. The configuration is utilized in miniature scale detection, down to the single molecule. Figure 6 depicts the setup.
of the optical fiber, which will melt the fiber. The molten material soon forms a spherical shape with the aid of surface tension. The sphere has low surface roughness, helping the sensor achieve dramatically high Q-factors in the range of 10 6 to 10 9 . The sensor has a very low LOD: 10 −8 to 10 −9 RIU [68,78]. The configuration is utilized in miniature scale detection, down to the single molecule. Figure 6 depicts the setup.
Unlike ring resonators, the 3D geometry of a microsphere resonator means it has various methods for coupling, i.e., tapered coupling, prism coupling, angled fiber coupling or even direct illumination. Prism coupling and half-block coupling have the advantage of illuminating multiple microsphere resonators simultaneously [27].
Microsphere resonators are also considered easy to fabricate. As mentioned above, the principle is based on melting a waveguide fiber and allowing the surface tension to reform the material into a sphere. The heat applied to the tip of fiber can be flame or an electric arc. The melted fiber, e.g., silica (SiO2), will need to retain a minimum value of surface energy, forming a sphere. Then, the material solidifies after the heat source is removed. Despite the simplicity of the fabrication, microsphere formation has low reproducibility. The fused fiber is very sensitive to the environment and contamination. Although the sphere size can be adjusted by selecting the desired size of the preheated fiber, some errors still occur during experiments [103].
Microring and Microtoroid Optical Resonators
The term "microring resonator" often refers to a planar ring resonator, the configuration being a microscale waveguide in a circular geometry (Figure 7). This resonator has the advantage of ease of fabrication [82]. Silicon or silicon nitride is commonly utilized as the resonating structure. The resonator and coupling waveguide can also be fabricated on the same substrate, meaning all structures are on the same chip. The ring diameters range 10 μm to 10 mm [78,87,92,99,104]. Microring resonator arrays can also be fabricated and have been commercialized. Genalyte (San Diego, CA, USA) manufactures microring resonators that can perform 128 tests within 15 min [105]. There are three main approaches to the fabrication process. The first is deep ultraviolet (DUV) lithography, the main fabrication technique for complementary metal-oxide semiconductor (CMOS). The resolution is comparably rough to other techniques (100 nm). The UV wavelength is 248 nm or 193 nm. The second method, electron beam lithography (EBL), has higher resolution in the range of 10 nm. This method creates fewer flaws than DUV, the trade-off being the longer fabrication time. Lastly, Figure 6. Example of microsphere configuration. Shown is an example of a microsphere with tapered coupling. The microsphere utilizes the WGM principle to resonate light inside its cavity. The analyte binding on the microsphere surface causes the change in the refractive index. The resonant wavelength is also shifted as a result.
Unlike ring resonators, the 3D geometry of a microsphere resonator means it has various methods for coupling, i.e., tapered coupling, prism coupling, angled fiber coupling or even direct illumination. Prism coupling and half-block coupling have the advantage of illuminating multiple microsphere resonators simultaneously [27].
Microsphere resonators are also considered easy to fabricate. As mentioned above, the principle is based on melting a waveguide fiber and allowing the surface tension to reform the material into a sphere. The heat applied to the tip of fiber can be flame or an electric arc. The melted fiber, e.g., silica (SiO 2 ), will need to retain a minimum value of surface energy, forming a sphere. Then, the material solidifies after the heat source is removed. Despite the simplicity of the fabrication, microsphere formation has low reproducibility. The fused fiber is very sensitive to the environment and contamination. Although the sphere size can be adjusted by selecting the desired size of the preheated fiber, some errors still occur during experiments [103].
Microring and Microtoroid Optical Resonators
The term "microring resonator" often refers to a planar ring resonator, the configuration being a microscale waveguide in a circular geometry (Figure 7). This resonator has the advantage of ease of fabrication [82]. Silicon or silicon nitride is commonly utilized as the resonating structure. The resonator and coupling waveguide can also be fabricated on the same substrate, meaning all structures are on the same chip. The ring diameters range 10 µm to 10 mm [78,87,92,99,104]. Microring resonator arrays can also be fabricated and have been commercialized. Genalyte (San Diego, CA, USA) manufactures microring resonators that can perform 128 tests within 15 min [105]. There are three main approaches to the fabrication process. The first is deep ultraviolet (DUV) lithography, the main fabrication technique for complementary metal-oxide semiconductor (CMOS). The resolution is comparably rough to other techniques (100 nm). The UV wavelength is 248 nm or 193 nm. The second method, electron beam lithography (EBL), has higher resolution in the range of 10 nm. This method creates fewer flaws than DUV, the trade-off being the longer fabrication time. Lastly, nanoimprinting lithography (NIL) requires pre-processing from the earlier two techniques. The polymer is applied to a mold, and then cured to solidify. The mold is later utilized to create the replica of the structure with waveguide materials. The polymer mold itself can also be the resonator [106]. nanoimprinting lithography (NIL) requires pre-processing from the earlier two techniques. The polymer is applied to a mold, and then cured to solidify. The mold is later utilized to create the replica of the structure with waveguide materials. The polymer mold itself can also be the resonator [106]. The resulting Q-factor of ring resonators use to be comparably low (10 4 ) [62,107] against microsphere and microtoroid configurations. This is the result of residual flaws during the microfabrication. The other reason is because optical wave leakage occurs as the resonator is connected to the substrate [58]. The surface roughness of the device is often generated during the fabrication process. This considered as defect which lead generate noise, resulting in lowering the quantity of Q-factor as discuss in previous session. However, ultra-high Q ring resonator was discovered later. The modification of the direction coupling waveguide to the resonator instead of a traditional single straight bus waveguide [108,109].
Microtoroids, on the other hand, were invented to address the signal lost problems of ring resonator. Even though, microtoroids are WGM-based optical resonator, the device is fabricated on a chip like in microring resonator fabrication. The microtoroid structure is raised above the substrate by the post, preventing any leak from evanescent scattering to the substrate [110]. As a result, a high Q-factor can be observed, with values up to 10 8 [23]. Microtoroids can be fabricated using lithography, reactive ion etching, and xenon difluoride (XeF2) etching. Hence, an array of microtoroids can be fabricated. The fabrication process is as follows: The resonator substrate, SiO2, is deposited on a silicon substrate. Then, etching is used to create the SiO2 disk. The post is created by removing the substrate below the disk via XeF2 isotropic etching. Finally, the CO2 laser illuminates the structure, and the edge of the disk melts and forms a smooth toroid aided by surface tension [29,111,112] (Figure 8). The resulting Q-factor of ring resonators use to be comparably low (10 4 ) [62,107] against microsphere and microtoroid configurations. This is the result of residual flaws during the microfabrication. The other reason is because optical wave leakage occurs as the resonator is connected to the substrate [58]. The surface roughness of the device is often generated during the fabrication process. This considered as defect which lead generate noise, resulting in lowering the quantity of Q-factor as discuss in previous session. However, ultra-high Q ring resonator was discovered later. The modification of the direction coupling waveguide to the resonator instead of a traditional single straight bus waveguide [108,109].
Microtoroids, on the other hand, were invented to address the signal lost problems of ring resonator. Even though, microtoroids are WGM-based optical resonator, the device is fabricated on a chip like in microring resonator fabrication. The microtoroid structure is raised above the substrate by the post, preventing any leak from evanescent scattering to the substrate [110]. As a result, a high Q-factor can be observed, with values up to 10 8 [23]. Microtoroids can be fabricated using lithography, reactive ion etching, and xenon difluoride (XeF 2 ) etching. Hence, an array of microtoroids can be fabricated. The fabrication process is as follows: The resonator substrate, SiO 2 , is deposited on a silicon substrate. Then, etching is used to create the SiO 2 disk. The post is created by removing the substrate below the disk via XeF 2 isotropic etching. Finally, the CO 2 laser illuminates the structure, and the edge of the disk melts and forms a smooth toroid aided by surface tension [29,111,112] (Figure 8).
The smoother surface generates less noise due to the surface roughness. However, microtoroid has a difficulty of coupling since the resonators are perched atop of silicon pillar, resulting in difficulty of coupling alignment [113,114]. In addition, during the process of CO 2 laser illumination melt the resonators, the diameter of microtoroid is shrink. This lead to difficulty of monolithically integration for a micortoroid resonant cavity with an on-chip waveguide [113,115]. reactive ion etching, and xenon difluoride (XeF2) etching. Hence, an array of microtoroids can be fabricated. The fabrication process is as follows: The resonator substrate, SiO2, is deposited on a silicon substrate. Then, etching is used to create the SiO2 disk. The post is created by removing the substrate below the disk via XeF2 isotropic etching. Finally, the CO2 laser illuminates the structure, and the edge of the disk melts and forms a smooth toroid aided by surface tension [29,111,112] (Figure 8). (2) Hydrofluoric acid etching is applied to create the disk structure on top of the wafer; (3) XeF 2 etching is used to create a post structure; (4) CO 2 laser illuminates the structure to smoothen the toroid structure.
Optofluidic Ring Resonator (OFRR)
OFRR is also a ring resonator configuration. The resonator is used to overcome the disadvantage of low Q-factor in microring resonators and the low reproducibility of microsphere resonators [116]. The OFRR uses a microscale SiO 2 capillary with a diameter in the range of hundreds of micrometers. However, the wall thickness can be thin, i.e., <4 µm. This can be considered a parallel microring resonator combined with the fluidic channel. Fiber manufacturing methods are equipped to fabricate such resonators, and reproducibility is enhanced due to the qualified manufacturing process [117], such as capillary pulling or fiber pulling tower. The resonator structure is then conjugated with the coupling device. The Q-factor of such a device is in the 10 6 range, with a detection limit of 10 −7 RIU.
Sensor Surface Functionalization
To prepare sensors for detecting the analyte, the resonator surface must be positioned on the specific recognition area to achieve high selectivity [18]. Receptors for a specific analyte are introduced to the system to convert only specific recognition events into the signal. For biomedical application, biological or chemical receptors are immobilized on the sensor surface. Receptor immobilization is a critical step in fabrication for achieving high-performance detection. The crucial characteristics for immobilizing biomolecules are high selectivity, long-term stability, and efficient functionality.
There are various, well-defined methods of surface functionalization, such as physical adsorption [118], covalent binding [119], non-covalent binding [38], or His tagging [120,121]. Physical adsorption means the interaction is based on hydrophobic and electrostatic properties. Although this is the easiest process, its disadvantages are some desorption of the receptor under specific conditions, and low reproducibility.
Covalent binding introduces molecular chemical groups to the resonator surface. Herein, linkers are used to immobilize the receptor. The process is more reproducible; for example, the binding of proteins can utilize thiol, amino, and carboxylic groups. Non-covalent binding requires an active layer on the surface, such as biological affinity binding. Such surfaces are equipped with biological/ chemical-specific affinity pairs [21], for example, antigen and antibody [27].
Surface functionalization begins with surface activation. Silanization, involving several silanes (methoxy-and ethoxysilanes) with different functional groups, is the common method of chemically activating silicon, SiO 2 , or silicon nitride. Silanes assist the process by forming strong bonds between organic and inorganic molecules. A coupling agent stabilizes the hydroxy group on the resonator surface by turning them into oxane bonds.
Sample Preparation
For biomedical detection, sample collection, preparation, and preservation are crucial steps prior to detection. Delay can cause changes in physical and chemical characteristics. Beginning with extraction, the desired analyte is isolated from the buffer solution. Isolation efficiency is mainly dependent on the solubility of the analyte and the matrix effect. Examples of such processes are Soxhlet extraction [122], ultrasonic extraction [123], supercritical fluid [124,125], accelerated solvent [126], and microwave-assisted methods [127]. Sample preparation is considered the major bottleneck of the analysis. Most extraction methods require long operation times. Moreover, there is also the high risk of contamination, resulting in errors in analysis. However, in microscale, fluids always behave as laminar flow, which cannot occur on the macroscopic scale. Pressure-driven flow, capillary-driven flow, osmotic flow, and Marangoni flow enhance the transport phenomena. In the microchannels, different fluids flow separately in a more orderly manner, resulting in the fluids being more difficult to fuse to one another.
Microfluidic systems have been introduced and combined with optical resonator systems. Microfluidics are mostly chip-based technology for manipulating and controlling small volumes of fluids, which enables the use of small amounts of patient sample [103,128]. Given the high Q-factor of the resonator, the sensors can perform well with low volumes of analyte. The sensor can be placed in the microfluidic channel designed specifically for the analyte [129]. Thus, sample preparation and analysis are integrated, enhancing significantly higher throughput than a process requiring separate sample preparation, e.g., micro total analysis system (µTAS) and LOC.
Gene-related detection often uses polymerase chain reaction (PCR) to replicate analytes to detectable levels, as most of the detection in this application is known for extremely low target concentrations compared to background molecules. The challenge of optical resonator detection also focuses on how the process can be speeded-up by reducing the gene amplification time by eliminating PCR from the system or using other on-chip methods [38,130,131].
Cancer Biomarkers
According to the Food and Drug Administration (FDA), a biomarker is a characteristic that is objectively measured and evaluated as an indicator of normal biological processes, pathogenic processes, or biological responses to a therapeutic intervention [132]. Different biomarkers can identify various cancers [15] (Table 2). Cancer is a complex disease; its biomarkers may be multiple parameters. Biomarkers enable doctors to define clinical problems at the early stage with precise prognosis and are less invasive for the patient [133]. Body fluids are more promising as biomarkers; however, they always contain other background molecules, or even cells.
Cancer biomarkers are usually obtained from the primary tumor or body fluids. Extracting a marker from the primary tumor might involve a complex and invasive process, particularly when the cancer has metastasized. Other than optical resonators, cancer biomarkers can also be detected through various other processes. The most common tests are enzyme-linked immunosorbent assay (ELISA), multiplex ELISA, and multiplex arrays. ELISA is the most stable technology at present, and was first developed in the 1970's [134] as a radioimmunoassay (RIA). The main disadvantage of ELISA is that performance depends heavily on antibody quality, the manufacturer, and requires a skillful operator.
In Table 2, cancer biomarkers can be classified into various categories. They can be a gene, antigen, enzyme, or even a cell physical dysfunction. Optical resonators for cancer biomarker detection use three configurations, as discussed earlier: microsphere, microring, and OFRR. There are both single resonator and array resonators. The system is usually a hybrid system involving microfluidics and LOCs. WGRs and ring resonators have better sensitivity and shorter response time than the other optical resonator configurations. Such advantages lead to the possibility of early detection for cancer diagnosis. Table 3 shows the relevant concentration of the example biomarkers which detected by optical resonator.
Optical Resonators for Nucleic Acid Testing (NAT)
Cancer can be detected by genetic biomarkers that can be produced from the development of the disease or the malfunctioning gene that causes the disease. The conventional methods of detecting nucleic acid-based molecules are heavily involved PCR to amplify the concentration of an analyte, resulting in better detection [28,129]. Such processes are time-and resource-consuming. Here, there is a trend for minimizing the PCR step or replacing it entirely [38,130]. The domain of interest is mainly the method for detecting the nucleic acid-based analyte in small volumes [17,20] extracted from both tumor cells and other body fluids. Researchers mainly compare analysis time with conventional PCR-based techniques as one of the dimensions of sensor quality. In fact, the goal of Nucleic Acid testing using optical resonator are to overcome the performance of PCR-based technology. Apart from seeking an advantage in sample preparation, there is also interest in developing less complex devices. The detected nucleic acid-based biomarkers are HER2 (human epidermal growth factor receptor [neu or ErbB2]) [138], HRAS (Harvey RAS) [131], FGFR3 [25], and gene abnormality, methylated genes [26,28], and mutated genes [129].
HER2 is overexpressed in the development of breast cancer. Presently, gene analysis involves a highly invasive method and labeled detection [11]. Therefore, less invasive and label-free analysis is gaining attention. HER2 is a transmembrane protein; its extracellular domain can also be detected in blood [15]. In 2010, an experiment was performed using an OFRR [138]. The resonator was a pulled SiO 2 capillary with an outer diameter of 150 µm. The wall was chemically polished to reduce its thickness and to enhance sensitivity, and was reduced from 5 µm to 3 µm using hydrofluoric acid (HF). The resonator was coupled with a tapered SMF-28. The input laser had a wavelength of 1550 nm, with the photodetector on the other end of the tapered fiber. The sample was injected using a syringe pump at the rate of 1 µL/min. The inner core was functionalized first by a layer of aminosilane. Then, double mismatched primer (DMP) crosslinker bound protein G with the layer. HER2 antibody was introduced as the bioreceptor. The HER2 sample was diluted in phosphate-buffered saline (PBS) to 0.1 mg/mL. After the analysis, a low concentration of HF was applied to the inner surface to remove the activated surface component. The sensor was then ready to perform further diagnosis. Figure 9 shows the setting and the results.
Optical resonators were recently combined with the microfluidic technique to overcome the sample preparation process. Isothermal solid-phase amplification/detection (ISAD) was introduced. The concept usually involves a microring resonator combined with fluidic channel arrays. The technique is known for its high sensitivity, low LOD, and can be operated as a label-free sensor. The operation time is also reasonably short, and real-time analysis is possible. A 2013 study used ISAD on the HRAS and FGR3 genes, which are bladder cancer biomarkers [131]. The performance of the ISAD device was compared with other multiplex analysis methods: isothermal recombinase polymerase amplification (RPA), conventional PCR, and real-time PCR (RT-PCR). Table 4 shows the results. Optical resonators were recently combined with the microfluidic technique to overcome the sample preparation process. Isothermal solid-phase amplification/detection (ISAD) was introduced. The concept usually involves a microring resonator combined with fluidic channel arrays. The technique is known for its high sensitivity, low LOD, and can be operated as a label-free sensor. The operation time is also reasonably short, and real-time analysis is possible. A 2013 study used ISAD on the HRAS and FGR3 genes, which are bladder cancer biomarkers [131]. The performance of the ISAD device was compared with other multiplex analysis methods: isothermal recombinase polymerase amplification (RPA), conventional PCR, and real-time PCR (RT-PCR). Table 4 shows the results. In 2014, Shin et al. [25] developed the ISAD system with DMP to improve specificity. Their aim was to detect mutated epidermal growth factor receptor (EGFR), the biomarker of non-small cell lung cancers (NSCLCs). The sensor was fabricated similarly to a typical silicon microring resonator. Then, DMP primer was immobilized on the surface. The microring was incubated overnight in the amine-modified DMP primer in 1 PBS. A small acrylic chamber (1.5 cm × 0.7 cm × 2 cm) was used to confine the detection area ( Figure 10).
In 2014, Shin et al. [25] developed the ISAD system with DMP to improve specificity. Their aim was to detect mutated epidermal growth factor receptor (EGFR), the biomarker of non-small cell lung cancers (NSCLCs). The sensor was fabricated similarly to a typical silicon microring resonator. Then, DMP primer was immobilized on the surface. The microring was incubated overnight in the aminemodified DMP primer in 1 PBS. A small acrylic chamber (1.5 cm × 0.7 cm × 2 cm) was used to confine the detection area ( Figure 10).
Optical Resonators for Detecting Antigens
Antigen detection by optical resonators has focused on carcinoembryonic antigen (CEA) [24] and various carcinoma antigens (CAs). An antigen is the most detectable cancer biomarker ( Table 2).
Optical Resonators for Detecting Antigens
Antigen detection by optical resonators has focused on carcinoembryonic antigen (CEA) [24] and various carcinoma antigens (CAs). An antigen is the most detectable cancer biomarker ( Table 2). In this field, microring resonators, OFRR, and microsphere resonators are utilized [27,137]. The existing methods for detecting antigen biomarkers are mainly based on commercialized ELISA platforms, which, as mentioned earlier, involve label-based detection. Antigens can be extracted from various biological sources: tumor cells, blood, and other fluids. Even though detecting an individual antigen yields insufficient information for diagnosing the type of cancer, it can benefit the prognosis for accurate treatment and early screening. For example, CEA is a biomarker of various cancers.
In 2009, an OFRR was fabricated for detecting CA15-3 [30] (Figure 11), a breast cancer biomarker obtained from patient serum. The OFRR was pulled glass under high temperature from CO 2 laser. A syringe pump and Tygon tubing were connected to the OFRR, which was then washed with HF solution to reduce the wall thickness. The tapered fiber was illuminated with a 980-nm laser. Then, anti-CA15-3 antibody was applied to the inner surface by amine coupling. First, the inner surface was treated with 50:50 hydrochloric acid (HCl)/methanol solution for 10 min and rinsed with DI water. The surface was aminated using 3-APS in ethanol. Next, the inner core was activated with 5% glutaraldehyde in PBS for 30 min Anti-CA15-3 (50 µg/mL) was introduced to the inner surface at a flow rate of 5 µL/min. To improve specificity, surface blocking of non-specific binding was crucial. In that regard, 1 mg/mL amine-PEG-amine in PBS was reacted with the surface for 30 min. The antibody-functionalized OFRR was then ready to detect the various concentrations of CA15-3 in PBS. In this field, microring resonators, OFRR, and microsphere resonators are utilized [27,137]. The existing methods for detecting antigen biomarkers are mainly based on commercialized ELISA platforms, which, as mentioned earlier, involve label-based detection. Antigens can be extracted from various biological sources: tumor cells, blood, and other fluids. Even though detecting an individual antigen yields insufficient information for diagnosing the type of cancer, it can benefit the prognosis for accurate treatment and early screening. For example, CEA is a biomarker of various cancers. In 2009, an OFRR was fabricated for detecting CA15-3 [30] (Figure 11), a breast cancer biomarker obtained from patient serum. The OFRR was pulled glass under high temperature from CO2 laser. A syringe pump and Tygon tubing were connected to the OFRR, which was then washed with HF solution to reduce the wall thickness. The tapered fiber was illuminated with a 980-nm laser. Then, anti-CA15-3 antibody was applied to the inner surface by amine coupling. First, the inner surface was treated with 50:50 hydrochloric acid (HCl)/methanol solution for 10 min and rinsed with DI water. The surface was aminated using 3-APS in ethanol. Next, the inner core was activated with 5% glutaraldehyde in PBS for 30 min Anti-CA15-3 (50 μg/mL) was introduced to the inner surface at a flow rate of 5 μL/min. To improve specificity, surface blocking of non-specific binding was crucial. In that regard, 1 mg/mL amine-PEG-amine in PBS was reacted with the surface for 30 min. The antibody-functionalized OFRR was then ready to detect the various concentrations of CA15-3 in PBS. Figure 11. Schematic shows the surface functionalization of OFRR for detecting CA15-3; the system shows that the sample is drawn by the syringe pump while the OFRR performs the analysis. Reprinted with permission from [30].
A microsphere configuration was used to detect CA125 and TNF-α [136]. The configuration was designed for detecting different analytes simultaneously. The experiment utilized the WGM Imaging (WGMI) technique. Sensitive fluorescent dye aided the imaging of the resonated microsphere, fluorescing only when the resonance condition was achieved. Instead of measuring the output light at the end of the coupling waveguide, the system detected the fluorescent ring on the microsphere Figure 11. Schematic shows the surface functionalization of OFRR for detecting CA15-3; the system shows that the sample is drawn by the syringe pump while the OFRR performs the analysis. Reprinted with permission from [30].
A microsphere configuration was used to detect CA125 and TNF-α [136]. The configuration was designed for detecting different analytes simultaneously. The experiment utilized the WGM Imaging (WGMI) technique. Sensitive fluorescent dye aided the imaging of the resonated microsphere, fluorescing only when the resonance condition was achieved. Instead of measuring the output light at the end of the coupling waveguide, the system detected the fluorescent ring on the microsphere surface as input tunable laser varying with wavelength. An image was obtained from a microscope above the microspheres. Figure 12 depicts the procedure. Microspheres with two diameters were fabricated: 38 μm and 53 μm. The smaller sphere was designated as CA125 detector. Hence, it was incubated with 2 μg/mL anti-CA125. The bigger sphere was incubated with 2 μg/mL anti-TNF-α for detecting TNF-α. Both microspheres were fluoresced with commercial Alexa 633 dye. Then, surface blocking antigen was applied regularly. The experiment tested commercial CA-125 and TNF-α of known concentration. A high NA objective lens provided total internal reflection from the tunable laser source for evanescent coupling to occur and to enhance WGM. The system was then tested with known concentrations of samples; Figure 13 shows the results. Microspheres with two diameters were fabricated: 38 μm and 53 μm. The smaller sphere was designated as CA125 detector. Hence, it was incubated with 2 μg/mL anti-CA125. The bigger sphere was incubated with 2 μg/mL anti-TNF-α for detecting TNF-α. Both microspheres were fluoresced with commercial Alexa 633 dye. Then, surface blocking antigen was applied regularly. The experiment tested commercial CA-125 and TNF-α of known concentration. A high NA objective lens provided total internal reflection from the tunable laser source for evanescent coupling to occur and to enhance WGM. The system was then tested with known concentrations of samples; Figure 13 shows the results. Later on, the same research group examined the addition assay. Prism coupling was equipped as a WGM coupling waveguide for microspheres of three different diameters, which were designed as previously done for detecting different analytes ( Figure 14) [27]. Three ovarian cancer biomarkers Microspheres with two diameters were fabricated: 38 µm and 53 µm. The smaller sphere was designated as CA125 detector. Hence, it was incubated with 2 µg/mL anti-CA125. The bigger sphere was incubated with 2 µg/mL anti-TNF-α for detecting TNF-α. Both microspheres were fluoresced with commercial Alexa 633 dye. Then, surface blocking antigen was applied regularly. The experiment tested commercial CA-125 and TNF-α of known concentration. A high NA objective lens provided total internal reflection from the tunable laser source for evanescent coupling to occur and to enhance WGM. The system was then tested with known concentrations of samples; Figure 13 shows the results.
Later on, the same research group examined the addition assay. Prism coupling was equipped as a WGM coupling waveguide for microspheres of three different diameters, which were designed as previously done for detecting different analytes ( Figure 14) [27]. Three ovarian cancer biomarkers were investigated: osteopontin (38-µm microsphere), CA-125 (53-µm microsphere), and prolactin (63-µm microsphere). This allowed for 120 microspheres to be excited with WGM simultaneously, enabling the detection and real-time analysis of three components in the same assay.
Optical Resonators for Detecting Other Proteins
Apart from nucleic acid-based and antigen analytes, other biomarkers include proteins, enzymes, or other byproduct particles of cancer [31,139,140]. Optical resonators are also applied in many configurations to achieve the most suitable specification.
The enzyme telomerase is also a bladder cancer biomarker [135]. It is extracted from the cancer cell using heat shock. Urinary telomerase activity can lead to cancer detection. The conventional methodology is the telomerase repeat amplification protocol (TRAP), which is PCR-based and is time-consuming and costly.
In 2013, an experiment analyzing telomerase activity with silicon microring resonators was performed [38]. A silicon microring of 4-μm diameter was fabricated with a 220 nm × 500 nm waveguide. The coupling length was 220 nm and the coupling waveguide-resonator gap was 220 nm (Figure 15). Figure 15. Diagram shows the functionalized surface of a microring resonator. DNA oligomer is immobilized on the surface. Telomerase extracted from a cancer cell is introduced into the system along with dNTPs. The result shows that the system can detect telomerase activity. Reprinted with permission from [38].
Optical Resonators for Detecting Other Proteins
Apart from nucleic acid-based and antigen analytes, other biomarkers include proteins, enzymes, or other byproduct particles of cancer [31,139,140]. Optical resonators are also applied in many configurations to achieve the most suitable specification.
The enzyme telomerase is also a bladder cancer biomarker [135]. It is extracted from the cancer cell using heat shock. Urinary telomerase activity can lead to cancer detection. The conventional methodology is the telomerase repeat amplification protocol (TRAP), which is PCR-based and is time-consuming and costly.
In 2013, an experiment analyzing telomerase activity with silicon microring resonators was performed [38]. A silicon microring of 4-µm diameter was fabricated with a 220 nm × 500 nm waveguide. The coupling length was 220 nm and the coupling waveguide-resonator gap was 220 nm (Figure 15).
The resonator was treated with oxygen plasma and soaked in 2% APTES (3-aminopropyltriethoxysilane) solution (in ethanol/DI water mixture) for 2 h in ambient conditions. The resonator was then heated and a mixture of GAD (glutamate decarboxylase) solution, borate buffer, and sodium cyanoborohydride was applied. The steps were repeated several times to prepare the surface. After surface activation, 50 µL telomerase oligomers solution was applied to the resonators and left overnight at 4 • C. Finally, the chip was placed in an acrylic chamber (6 × 2 × 1 mm 3 ), and rinsed with 10 mM PBS to block the surface. Figure 16 shows the functionalized surface and binding mechanism. Detecting a single protein is also one of the challenges [29,111,112]. With nanotechnology, the detection of a single thyroid cancer biomarker has been reported for thyroglobulin protein (Tg) and bovine serum albumin (BSA). A dielectric microsphere was prepared with a single gold nanoshell bound at the equator ( Figure 16). WGM inside the microsphere enhanced surface plasmon on the gold nanoshell, forming the hybrid system. many configurations to achieve the most suitable specification.
The enzyme telomerase is also a bladder cancer biomarker [135]. It is extracted from the cancer cell using heat shock. Urinary telomerase activity can lead to cancer detection. The conventional methodology is the telomerase repeat amplification protocol (TRAP), which is PCR-based and is time-consuming and costly.
In 2013, an experiment analyzing telomerase activity with silicon microring resonators was performed [38]. A silicon microring of 4-μm diameter was fabricated with a 220 nm × 500 nm waveguide. The coupling length was 220 nm and the coupling waveguide-resonator gap was 220 nm (Figure 15). Figure 15. Diagram shows the functionalized surface of a microring resonator. DNA oligomer is immobilized on the surface. Telomerase extracted from a cancer cell is introduced into the system along with dNTPs. The result shows that the system can detect telomerase activity. Reprinted with permission from [38].
The resonator was treated with oxygen plasma and soaked in 2% APTES (3aminopropyltriethoxysilane) solution (in ethanol/DI water mixture) for 2 hours in ambient conditions. The resonator was then heated and a mixture of GAD (glutamate decarboxylase) solution, borate buffer, and sodium cyanoborohydride was applied. The steps were repeated several times to prepare the surface. After surface activation, 50 μL telomerase oligomers solution was applied to the resonators and left overnight at 4 °C. Finally, the chip was placed in an acrylic chamber (6 × 2 × 1 mm 3 ), and rinsed with 10 mM PBS to block the surface. Figure 16 shows the functionalized surface Figure 15. Diagram shows the functionalized surface of a microring resonator. DNA oligomer is immobilized on the surface. Telomerase extracted from a cancer cell is introduced into the system along with dNTPs. The result shows that the system can detect telomerase activity. Reprinted with permission from [38]. and binding mechanism. Detecting a single protein is also one of the challenges [29,111,112]. With nanotechnology, the detection of a single thyroid cancer biomarker has been reported for thyroglobulin protein (Tg) and bovine serum albumin (BSA). A dielectric microsphere was prepared with a single gold nanoshell bound at the equator ( Figure 16). WGM inside the microsphere enhanced surface plasmon on the gold nanoshell, forming the hybrid system.
Conclusions
In this review, we discuss the principle of the optical resonator, provide a brief history thereof, and describe the application of WGM and ring resonators. Including the application of our interest, the biosensor, we also discuss the resonators and study in more detail the resonator types currently used in cancer detection. Lastly, we discuss cancer biomarkers, which mainly involve antigen or genetic components, and study and evaluate promising experiments and settings.
Optical resonators have the potential to be the future of cancer detection, as the technology is suitable for early-stage detection and effective prognosis. With early detection, a game-changer in healthcare industry, patients can be cured at the stage where cancer has not propagated to another site, which means more chances of successful treatment and less risk. With regard to effective prognosis, cancer prognostics at present are costly, time-consuming, and highly invasive. Optical resonators can detect analytes precisely in a shorter time, and as it involves biomarker testing, it requires less invasive procedures. These two benefits have resulted in the improvement of POC diagnosis, which will be the next generation of healthcare. In fact, microring resonators have been commercialized and their performance is impressive. As the manufacturer claims that the sensor can perform 128 tests simultaneously from a small sample, it involves the invasive extraction of blood from the patient.
In the present review, we divide the analysis into three categories based on biomarker group: Nucleic acid-based, antigen, and other protein components. Nucleic acid-based analytes are mainly genetic components or byproducts. The challenge is in improving throughput. Genetic samples are usually obtained in small amounts. PCR is mainly needed to generate sufficient quantities of analyte; Figure 16. 3D model shows the mechanism of BSA binding with gold nanoshell particle at the equator of a dielectric microsphere with WGM. The WGM-h enhancement and adsorption of the biomolecule caused the resonant wavelength shift. Reprinted with permission from [141].
Conclusions
In this review, we discuss the principle of the optical resonator, provide a brief history thereof, and describe the application of WGM and ring resonators. Including the application of our interest, the biosensor, we also discuss the resonators and study in more detail the resonator types currently used in cancer detection. Lastly, we discuss cancer biomarkers, which mainly involve antigen or genetic components, and study and evaluate promising experiments and settings.
Optical resonators have the potential to be the future of cancer detection, as the technology is suitable for early-stage detection and effective prognosis. With early detection, a game-changer in healthcare industry, patients can be cured at the stage where cancer has not propagated to another site, which means more chances of successful treatment and less risk. With regard to effective prognosis, cancer prognostics at present are costly, time-consuming, and highly invasive. Optical resonators can detect analytes precisely in a shorter time, and as it involves biomarker testing, it requires less invasive procedures. These two benefits have resulted in the improvement of POC diagnosis, which will be the next generation of healthcare. In fact, microring resonators have been commercialized and their performance is impressive. As the manufacturer claims that the sensor can perform 128 tests simultaneously from a small sample, it involves the invasive extraction of blood from the patient.
In the present review, we divide the analysis into three categories based on biomarker group: Nucleic acid-based, antigen, and other protein components. Nucleic acid-based analytes are mainly genetic components or byproducts. The challenge is in improving throughput. Genetic samples are usually obtained in small amounts. PCR is mainly needed to generate sufficient quantities of analyte; however, it is time-consuming and expensive. Promising research has explored solutions for decreasing the reliance on PCR.
Antigens are the main cancer biomarkers; however, one antigen can be a biomarker of various cancers. In this application, specificity is the key factor, where one sample might contain many different antigen types. Nevertheless, the presence of a specific antigen can lead to a specific diagnosis. Hence, developing resonator quality for commercialization will be the new challenge. We also discuss protein and enzyme detection, with notable mention of an ultra-sensitive system capable of detecting individual proteins.
Acknowledgments: The authors acknowledge the Research Institute of Rangsit University (RiR). The authors also thank Darren Albutt for the discussion and proofreading.
Author Contributions: Weeratouch Pongruengkiat and Suejit Pechprasarn have contributed equally on the manuscript, reviewing the literature, writing the manuscript and making corrections.
Conflicts of Interest:
The authors declare no conflicts of interest. | 2017-09-16T16:46:03.870Z | 2017-09-01T00:00:00.000 | {
"year": 2017,
"sha1": "aa05b3c926dcc814aad7bad95de218c901060940",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1424-8220/17/9/2095/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa05b3c926dcc814aad7bad95de218c901060940",
"s2fieldsofstudy": [
"Engineering",
"Medicine"
],
"extfieldsofstudy": [
"Computer Science",
"Medicine"
]
} |
261728404 | pes2o/s2orc | v3-fos-license | Optimization of real-time PCR protocols from lymph node bovine tissue for direct detection of Mycobacterium tuberculosis complex
ABSTRACT Bovine tuberculosis (bTB) is a zoonotic disease and a global health problem that is subjected to obligatory eradication programs in the European Union. Microbiological culture is an imperfect technique for bTB diagnosis. This study aims to compare and validate two DNA isolation protocols and three different specific DNA targets, IS6110, IS4, and mpb70, to confirm Mycobacterium tuberculosis complex (MTC) infection by real-time PCR directly from fresh tissue samples. Fresh lymph node samples were collected from 81 cattle carcasses at the slaughterhouse. A comparison of both extraction protocols was performed with IS6110-real-time PCR, showing an adjusted sensitivity (SE) of 78.34% and 95.9% for protocols 1 and 2, respectively, while the specificity (SP) was 100% in both cases. Afterward, the comparison between IS4 and mpb70 targets was performed from the samples extracted with protocol 2, obtaining an adjusted SE of 90.87% and 83.3%, respectively, and an SP of 100% in both cases. The positive likelihood ratio was ∞ for the three targets, and the negative likelihood ratio was 0.04, 0.091, and 0.16 for IS6110, IS4, and mpb70, respectively. Negative predictive values were ≥90%, ≥85%, and ≥80% for real-time PCR targeting IS6110, IS4, and mpb70, respectively, when the true prevalence is ≤60%, and the positive predictive value is 100% in any scenario of true prevalence. According to these results, the DNA extraction protocol 2 and real-time PCR targeting IS6110 or IS4 could be potential first-choice molecular assays to detect MTC directly in fresh bovine tissue samples. Importance Bovine tuberculosis (bTB), a chronic infectious and zoonotic disease caused by Mycobacterium tuberculosis complex (MTC), is considered a neglected disease of global importance, causing a detrimental impact on public health, particularly in developing countries where tuberculosis remains a major health problem. However, debate around the efficacy of control measures is still an ongoing matter of concern, with poor diagnostic performance being considered one of the most relevant factors involved in the failure to eradicate the disease since many truly infected animals will be misclassified as bTB-free. This study highlights a DNA extraction protocol and real-time PCR targeting IS6110 or IS4 as potential first-choice molecular assays to detect MTC directly in fresh bovine tissue samples, providing rapid, highly sensitive, and specific diagnostic tools as an alternative to microbiology, which could take up to 3 months to complete, shortening the turnaround time for decision makers to be promptly informed.
highly conserved regions found in IS6110 (26), represent potential DNA targets for MTC diagnosis by using real-time PCR in fresh tissue samples.Consequently, the goal of the present study was, first, to optimize and compare the performance of two DNA extraction protocols for MTC detection by real-time PCR using bovine fresh lymph nodes as input samples and, second, to evaluate and validate a real-time PCR targeting IS6110, mpb70, IS4, and/or a combination of these specific regions in comparison with microbiological culture in bTB eradication programs.
Sample selection and processing
This study was part of a larger project focused on improving rapid and accurate diagnostic tools in the framework of the bTB surveillance and control program in Spain.In brief, fresh lymph node (LN) tissue samples were collected from cattle carcasses at the slaughterhouse from 2018 to 2019 in the context of the Spanish bTB eradication program.All samples were collected during routine post-mortem veterinary examina tions within an official context and according to national and European regulations.No purposeful killing of animals was performed for this study, so no ethical or farmer's consent approval was required.
In order to perform this study, LN fresh tissue samples were collected from 81 animals, verifying the presence of either visible TBLs or non-visible lesions (NVLs).Individual homogenization of each LN was run using a tissue homogenizer (Fisherbrand, Fisher Scientific, Madrid, Spain) to obtain a uniform mixture.Tissue homogenate was split up into paired samples that were used for DNA isolation and selective microbiological culture, respectively.
Mycobacterium tuberculosis complex microbiological culture
The samples were analyzed by the reference technique, microbiological culture, followed by PCR confirmation according to the previously described protocol (28).Briefly, the homogenate was decontaminated with an equal volume of 0.75% (wt/vol: 1/1) hexadecylpyridinium chloride solution in agitation for 30 min.Samples were centrifuged for 30 min at 1,500 × g (28).The pellets were collected with swabs and cultured in liquid media (MGIT 960, Becton Dickinson, Madrid, Spain) using an automated BD Bacter MGIT System (Becton Dickinson).The culture was considered positive when colonies were confirmed as MTC by real-time PCR (29).
DNA extraction using a commercial kit (protocol 1)
DNA extraction using a commercial kit (protocol 1) from homogenized tissue sam ples was conducted using DNA Extract VK (Vacunek, Bizkaia, Spain) according to the manufacturer's guidelines with several modifications.Briefly, a mix of 300 mg of homogenized tissue was submitted to mechanical disruption (30 Hz/20 min) together with 250 µL of sterile distilled water, 250 µL of sample lysis buffer VK-SB, and 300 mg of 0.5 mm glass beads.After that, tissue samples were centrifuged, the supernatant was discarded, and the sediment was subjected to an enzymatic digestion with proteinase K at 56°C in a thermo-shaker (750 rpm/12 h).Next, lysis buffer VK-LB3 was added, and the mixture was incubated for 10 min at 70°C.Finally, 210 µL ethanol (96%-100%) was added to the sample, which was applied in a spin column following the manufacturer's guidelines.
Mechanical lysis, proteinase K digestion and DNA extraction using a commercial kit (protocol 2)
Protocol 2, which consists of mechanical lysis, proteinase K digestion, and DNA extraction using a commercial kit (Fig. 1), was performed according to Lorente-Leal et al. (16) with several modifications by using NucleoSpin Tissue Kit (Macherey-Nagel, Düren, Germany).In brief, 1 mL of homogenized tissue (1,000 mg) was centrifuged for 5 min at 9,000 × g.The resulting tissue pellet was transferred into a tube together with 250 µL of sample buffer T1 and 150 mg of 0.5 mm and 50 mg of 0.1 mm glass beads.Then, samples were subjected to mechanical disruption using Scientific Industries SI Disruptor Genie (2,850 rpm/50 Hz/20 min).After an overnight enzymatic digestion at 56°C with 30 µL proteinase K in a thermo-shaker (750 rpm/12 h), a new mechanical disruption step was conducted.Samples were centrifuged for 2 min at 9,000 × g, and then the supernatant was transferred to a new tube, preserved to be processed afterward, while sediments were treated again with a new cycle of mechanical disruption and enzymatic digestion according to the steps of DNA extraction performed as outlined above in this protocol 2.
Then, both the sediment and supernatant of each sample were processed independ ently and mixed with 200 µL of Buffer T3, incubating the mixture for 10 min at 70°C.The lysate was transferred to a silica-based nucleic acid purification column and managed according to the manufacturer's instructions.Positive and negative extraction controls were included for both protocols 1 and 2. All DNA extraction products were stored at −20°C until use.
QuantiFast Pathogen PCR +IC Kit (Qiagen, Hilden, Germany) was used to conduct the real-time PCR evaluating each sample in duplicate in the MyiQ2 Two-Color Real-Time PCR Detection System (Bio-Rad, Hercules, CA, USA) under the following cycling conditions: 95°C for 5 min to activate the DNA polymerase followed by 42 amplification cycles that consisted of a denaturation step at 95°C for 15 s and an annealing-extension step at 60°C for 30 s.Moreover, as the manufacturer's guidelines described, an exogenous inhibition heterologous control [internal control assay (ICA)] was included to evaluate the presence of certain inhibitors in the samples.The setup for 10 µL of reaction volume was 4 µL of ultrapure distilled water, 1 µL of sample, 2 µL of Quantifast pathogen master mix, 1 µL of ICA, 1 µL of internal control DNA, and for IS6110, 0.4 µL of forward (10 pmol/µL), 0.4 µL of reverse (10 pmol/µL), and 0.2 µL of probe (10 pmol/µL), or for IS4 and mpb70, 0.6 µL of forward (10 pmol/µL), 0.3 µL of reverse (10 pmol/µL), and 0.1 µL of probe (10 pmol/µL).The IAC Cq should be 30 ± 3.
Complete inhibition of amplification was considered when IAC did not amplify and partial inhibition of amplification when it showed a Cq value >33.When inhibition was detected, the samples were diluted 1:2, and the real-time PCR was run again.An inter-run calibrator with a known Cq value of 32.0 was introduced in each assay to self-control intra-assay repeatability and accuracy.In the case of culture-positive and real-time PCR-negative samples with a Cq value >38, DNA extraction and real-time PCR were repeated to verify the results.In the case of protocol 2, for every sample, PCR was first conducted from the DNA extraction obtained from the sediment.When a negative result was obtained, amplification was carried out using the DNA isolated from the supernatant as template.
Limit of detection
The analytical sensitivity, or limit of detection (LOD), was estimated for the proposed primers and probes.LOD is defined as the lowest concentration at which 95% of replicates are positive, according to the Clinical and Laboratory Standard Institute guidelines.A serial 10-fold dilution series of M. bovis genomic DNA with known quantities ranging from 10 6 to 10 0 was used.The reactions were performed in triplicate for each dilution in three different assays.
Statistical analysis
The performance of both DNA extraction protocols and real-time PCR targeting IS6110, IS4, and mpb70 was evaluated by comparing them with microbiological culture, which is considered an imperfect reference technique for bTB diagnosis (4,5,17).Using statistical Epidat 3.1 software (Galician Health Service, Spain), the adjusted SE and SP, false positive rate (FPR), and false negative rate (FNR) were estimated.In addition, positive and negative likelihood ratio (PLR and NLR) results were determined and interpreted following the criteria published below by ( 31): (i) PLR ≥10 or NLR ≤0.1, a technique of high diagnostic value that will normally allow discrimination between healthy and diseased animals; (ii) PLR = 5-10 or NLR = 0.1-0.2, a technique involving moderate changes in probability and whose diagnostic utility will depend on prevalence; (iii) PLR = 2-5 or NLR = 0.2-0.5, involving small changes in probability and whose diagnostic utility will depend on prior probability; and (iv) PLR = 1-2 and NLR ≤0.5, rarely discernible changes.Positive and negative predictive values (PPV and NPV) for different infection prevalences were subsequently estimated using a Bayesian approach (EPIDAT 3.1, Galician Health Service, Spain).Results were plotted using GraphPad Prism 9 (GraphPad Software, La Jolla, CA, USA).
Mycobacterium tuberculosis complex microbiological culture results
Results of MTC microbiological culture were used as the reference assay for comparing extraction protocols 1 and 2. Consequently, a sample was considered a culture-positive sample when an obtained colony from a grown culture was confirmed by real-time PCR-IS6110.Thus, 49.38% of animals (n = 81) were disclosed as culture-positive (n = 40), whereas 50.62% were tested as culture-negative (n = 41).Besides, in order to disclose TBL, every LN was subjected to a gross evaluation, revealing 19.75% of animals with TBLs (n = 16), with 87.50% of them being culture-positive (n = 14) (Table 1).
Optimization of DNA extraction protocols from homogenized fresh tissue lymph nodes by using IS6110 target
The DNA isolation protocol is a critical first step in the real-time PCR pipeline.In order to identify a DNA extraction method that yields suitable genomic DNA in terms of quality and amount, we tested two different DNA extraction protocols.The performance of both extraction protocols was evaluated by running a real-time PCR targeting IS6110, as previously described by our laboratory, with LOD ranges from 10 to 100 genomic equivalents and the cut-off set at Cq <38.
MTC real-time PCR-IS6110 results according to DNA extraction by protocol 1
Eighty-one animals were evaluated by protocol 1, with Cq values ranging from 24.60 to 37.50 (average = 30.20)after real-time PCR-IS6110 (Table 1).Thus, 40.74% of animals were tested as MTC-positive (n = 33), and 59.26% were tested as MTC-negative (n = 48).The IAC amplified in most of the samples without partial inhibition, but when complete inhibition was observed because of a high yield of DNA (over 1,000 ng/µL), samples (n = 6) were diluted up to a final concentration of 450 ng/µL and re-evaluated by real-time PCR, keeping a negative result for MTC but with IAC amplification.Taking into consideration TBLs, 100% of LN samples with TBLs that were evaluated by protocol 1 were tested as MTC-positive (n = 16 animals) by real-time PCR-IS6110 (Table 1).
MTC real-time PCR-IS6110 results according to DNA extraction by protocol 2
In the case of extraction protocol 2, 55.56% of all analyzed animals were tested as MTC-positive (n = 45), and 44.44% were tested as MTC-negative (n = 36) (Table 1).Whereas 86.67% of MTC-positive samples (n = 39) came from the sediment, 13.30% of MTC-positive samples (n = 6) were finally detected from the supernatant after obtaining negative or inconclusive results from the sediment (data not shown).The Cq values ranged from 21.00 to 37.50 (average = 32.91)for sediments and from 27.00 to 37.00 (average = 32.67) in the case of supernatants.Although IAC amplification was observed in most of the samples, several partial (n = 10) or complete inhibitions (n = 14) were found.Then, samples were diluted 1:2 and re-evaluated by real-time PCR.Five samples were detected as positive, and the remainder kept negative results for MTC but with IAC amplification.All the samples with TBLs that were evaluated by protocol 2 were tested as MTC-positive (n = 16 animals) by real-time PCR-IS6110 (Table 1).
DNA extraction protocols 1 and 2: validation and comparison
Taking into consideration that microbiological culture is considered an imperfect assay for performing bTB diagnosis (4,5,17), the diagnostic performance results, summarized in Table 2, were calculated for an imperfect gold standard using EPIDAT 3.1 software.In the case of DNA extraction protocol 1, 31 out of 40 animals positive to microbiological culture were also classified as MTC-positive by real-time PCR-IS6110 [adjusted SE = 78% (95% CI: 65%-91%)], while 39 out of 41 culture-negative animals resulted in being negative by real-time PCR-IS6110 [adjusted SP = 100% (95% CI: 100%)], with an adjusted FNR and FPR of 21.70% (95% CI: 9%-35%) and 0% (95% CI: 0%), respectively.Based on the positive and negative likelihood ratios (PLR = ∞ and NLR = 0.22) (Table 2), protocol 1 would have a high diagnostic value for the positive results, normally allowing discrimi nation between healthy and diseased animals, while the diagnostic utility of negative results would be dependent on the prior probability of TB in the area.The agreement with microbiological culture was substantial (κ = 0.72).By contrast, in the case of DNA extraction protocol 2, 38 out of the 40 animals positive to culture were classified as MTC-positive [adjusted SE = 96% (95% CI: 90%-100%)], while 34 out of the 41 culture-negative animals were classified as MTC-negative [adjusted SP = 100% (95% CI: 100%)].It is important to mention that 7 animals negative to culture were disclosed as positive (2 out of 7 were confirmed to present TBLs), while we found 2 culture-positive samples without TBLs but PCR-IS6110-negative. Consequently, the adjusted FNR of 4% (95% CI: 0%-10%) and FPR of 0% (95% CI: 0%) were estimated (see Table 2).According to the values estimated for LR (PLR >10 and NLR = 0.04), this protocol demonstrated a high diagnostic utility to confirm and discard bTB regardless of its true prevalence.The agreement with microbiological culture was substantial (κ = 0.77).
The Mcnemar test was used to compare both protocols, finding a statistically significant difference (P = 0.016) between the number of culture-positive animals detected using DNA extraction protocol 2 compared with protocol 1.No differences were found for culture-negative animals between both protocols.
Validation and diagnostic performance of real-time PCR targeting IS4 and mpb70
According to these results, DNA from samples processed by protocol 2 was further analyzed by real-time PCR targeting IS4 or mpb70, comparing these results with microbiological culture to validate these targets for MTC detection by real-time PCR from fresh cattle tissue samples.
Real-time PCR targeting IS4
Forty-three out of 81 animals (53.10%) were disclosed as positive by real-time PCR-IS4, and 38 were tested as MTC-negative (46.90%).All the samples with TBLs (n = 16 animals) were tested as PCR-IS4-positive (Table 2).The analysis of the sediment disclosed 86.05% of MTC-positive samples (n = 37), while 13.95% of MTC-positive samples (n = 6) yielded a positive result from the supernatant.The Cq values ranged from 22.96 to 38.10 (average = 32.06)for the sediment, and, in the case of the supernatant, the Cq values ranged from 26.00 to 37.00 (average = 32.87).A partial inhibition of IAC was found in 5 out of 81 samples, probably due to the presence of some inhibitors, with 2 out of 5 samples disclosing a positive result after dilution 1:2 and re-evaluation.The LOD for this real-time PCR-IS4 ranged from 50 to 100 genomic equivalents, and the cut-off was established at Cq <39.
Comparing with the microbiological culture as the reference assay, 36 out of 40 animals positive to culture were also found to be positive to real-time PCR-IS4 [SE adjusted = 91% (95% CI: 82.00%-99.80%)],and 34 out of 41 culture-negative animals were also tested as real-time PCR-IS4-negative [SP adjusted = 100% (95% CI: 100%)].Noteworthy, seven culture-negative animals that were PCR-IS6110-positive were also disclosed as positive by real-time PCR targeting IS4.According to these results, the protocol would have an adjusted FNR of 9% (95% CI: 0.20%-18.00%)and an adjusted FPR of 0% (95 CI: 0%).Regardless of the true prevalence, real-time PCR-IS4 was found to have a high diagnostic utility to confirm and discard MTC (PLR >10 and NLR = 0.09) (Table 3), with similar values to real-time PCR-IS6110.The concordance with microbiological culture (κ = 0.72) was substantial.
Real-time PCR targeting mpb70
In the case of real-time PCR-mpb70, 39 of 81 tested animals were detected as MTCpositive (48.14%) and 42 as negative (51.86%).Thus, 35 samples positive for real-time PCR-mpb70 (89.74%) were disclosed from the sediment and 4 from the supernatant (10.25%).Besides, 100% of lymph nodes with TBLs (n = 16 animals) were tested to be PCR-mpb70 positive (Table 2).The Cq values ranged from 21.83 to 36.80 (average = 31.5)for the sediment and from 27.90 to 35.0 (average = 30.36)for the supernatant.As mentioned above for the other targets, because of some inhibitors, amplifications were found to be partially inhibited in 5 out of 81 samples, which were diluted 1:2 and re-evaluated, allowing the detection of 3 inhibited samples as positive.The LOD for real-time PCR-mpb70 was determined to be lower than 100 genome equivalents, and the cut-off was set to a Cq value of <38.
Comparison between real-time PCR targets to detect MTC
In order to statistically evaluate the differences observed in the SE and SP of the real-time PCR depending on the DNA target, the 95% CI and correlated proportions of McNemar's test were estimated (Table 4).No significant differences were found between DNA targets (P > 0.05).Cohen's kappa coefficient (κ) showed an almost perfect agreement between all the targets (data not shown): IS6110-IS4 (κ = 0.95), IS6110-mpb70 (κ = 0.90), and IS4-mpb70 (κ = 0.93).
In addition, the diagnostic utility of the positive and negative results obtained with each probe was compared based on true prevalence (predictive values).When the TP ranges from 0% to 60% (Fig. 2), the NPV for real-time PCR targeting IS6110 is ≥90%, NPV ≥85% for IS4, and NPV ≥80% for mpb70.According to PPV (100%), real-time PCR targeting IS6110, IS4, and mpb70 is able to confirm bTB infection in any scenario of true prevalence.
DISCUSSION
bTB is one of the most relevant animal diseases worldwide and, hence, a major concern for public health.Even though eradication is the main goal for the EU, this neglected zoonosis is still present in dairy and cattle herds, especially in several European regions.Nowadays, the approved surveillance systems, which are mainly based on cellular immune reactions, have facilitated the diagnosis of infected cattle at the early stage of the disease; therefore, animals with clinical signs or gross post-mortem TBLs are lacking or rarely found at the slaughterhouse (34,35).This success of surveillance systems has challenged direct detection of MTC or its DNA using microbiological culture and/or PCR since in-vivo immune responses have to be confirmed post-mortem (36).The selective microbiological culture is the gold standard technique for confirming bTB diagnosis, although it is considered an imperfect assay (4,5,17).Therefore, the development and validation of sensitive and specific real-time PCR protocols, including sample processing and DNA extraction steps, as an alternative to microbiological culture is likely to pave the way to bTB diagnosis.The present study aimed to evaluate two different DNA isolation protocols and three different specific DNA targets, IS6110, IS4, and mpb70, to confirm MTC infection by real-time PCR directly from fresh LN tissue samples.DNA isolation is an essential step in the real-time PCR pipeline; therefore there are no standardized protocols to isolate mycobacterial DNA from fresh post-mortem samples, and different approaches can be found among studies (4,16,23,37,38).Mycobacteria belonging to MTC have some distinctive features that could potentially hinder diagnostic performance, including the characteristics of the MTC cell wall, a non-homogeneous distribution and intracellular location of MTC-bacillus, the well-known paucibacillary nature of this complex, and early infected animals with NVLs at the slaughterhouse (9)(10)(11)(12).In order to solve these issues, a novel DNA extraction protocol (protocol 2) was compared with a traditional one (protocol 1) and microbiological culture, revealing promising results with the real-time PCR-IS6110.To do so, protocol 2 used a larger amount of tissue sample (300 mg for protocol 1 vs 1,000 mg for protocol 2), as well as additional mechanical disruption and enzymatic digestion steps.Accordingly, protocol 2 disclosed 38 out of 40 culture-positive samples (95%) and 7 culture-negative samples (2 out of 7 showed TBLs) as PCR-IS6110-positive.Furthermore, several animals with a negative result in protocol 1 turned out to be positive (n = 12) when protocol 2 was performed.Consequently, and according to the literature (16,38), these results point out that the use of a small amount of tissue sample seems to decrease the chances of targeting mycobacterial DNA, leading to false-negative results, particularly in the case of infected animals with NVL.
Second, the incubation time before conducting the mechanical lysis step and the kind of disruption procedure have been reported to potentially impact downstream real-time PCR applications after DNA isolation, reducing the number of discordant results (16,38,39).Therefore, real-time PCR-IS6110 showed higher diagnostic efficiency in our study after conducting protocol 2 than protocol 1 because tissue samples were submitted to a more intense lysis during the DNA extraction process, including four mechanical disruptions and two overnight chemical lysis steps with proteinase K.In addition, protocol 2 allows analyzing both the sediment and the supernatant, increas ing the diagnostic sensitivity of the technique.As a result, in our study, 39 animals were detected as positive from the sediment samples, and the re-analysis of negative and inconclusive samples allowed the detection of 6 additional PCR-positive animals from the supernatant.To optimize protocol 2, the probability of pooling sediment and supernatant from the same sample was taken into consideration, thereby avoiding the need for two separate reactions.However, this step may have the drawback of diluting the samples.Consequently, samples with a Cq close to the detection limit may yield negative results.
Another significant point to consider is the presence of several inhibitors associated with the extraction protocol or an excess of host DNA that could lead to false negative results and therefore decrease sensitivity as a consequence of a partial or total inhibi tion of amplification, impacting negatively on the performance of the real-time PCR diagnosis.An exogenous heterologous IAC supplied by the manufacturer, which enjoys several advantages over endogenous or exogenous homologous IACs, was used to identify inhibitors (16) and allowed the detection of partial or total inhibition phenom ena.Thus, 6 samples were found to be completely or partially inhibited for protocol 1, whereas 24 were for protocol 2. After diluting these samples, 5 out of the 30 inhibi ted samples became positive, underscoring the relevance of including an exogenous heterologous IAC.
Previous studies targeting IS6110 (14,16) have reported quite similar SE and SP results than those herein reported for protocol 2. In addition, it is noteworthy to remark that in those studies, there was a high proportion of the evaluated samples with TBL (from 39% to 57.81% of the total samples vs 19.80% in our study), and hence, these animals should undergo an advanced stage of bTB infection.These results mean that both protocols work as a suitable tool to confirm bTB in reactor animals with either TBLs or NVLs, but especially for bTB diagnosis under current field conditions in the case of protocol 2, which presented a higher sensitivity to diagnose reactor animals without evident gross lesions at the slaughterhouse.In addition, our results show that the extraction protocol is a definitely relevant step directly influencing the diagnostic SE of real-time PCR from fresh tissue samples.
On the other hand, not only does the DNA extraction method play a critical role in the detection of MTC but also the selection of the genetic target (12).The MTC-specific IS6110 transposon is reported as one of the main targets of election for the diagnosis of the MTC complex by real-time PCR (40), providing a tool capable of differentiating between MTC and other bacteria, including non-tuberculous mycobacteria (NTM).However, an IS6110-like element has been recently found in the genome of other NTM, which may also potentially cross-react with certain IS6110 primer pairs or probes (7,13,33,41).Although this finding is expected to have a minimal impact on the specificity of the real-time PCR-IS6110, cross-reactivity cannot be ruled out, and the use of additional MTC-specific targets would be desirable to improve the diagnostic performance of direct real-time PCR from tissue samples.Thus, DNA isolated by protocol 2 was evaluated in the present study using two additional different genetic targets, IS4 and mpb70.IS4 DNA target, described by Wang et al. (26) on human clinical samples but never tested for veterinary diagnostics, is a highly conserved region inside IS6110 found exclusively within the MTC.The Mpb70 gene encodes an antigenic protein that is highly expressed by all members of the MTC, but it is a single-copy gene.The IS4 and mpb70 real-time PCR, in combination with the obtained DNA with the extraction protocol 2, resulted in a rapid technique with good diagnostic performance [SE, 91% (82%-99.80%),and SP, for IS4; SE, 83.30% (71.60%-95%), and SP, 100% for mpb70] compared to microbiological culture.Considering previous real-time PCR studies (4,14,16,24,26), the estimated diagnostic SE for both DNA targets, IS4 and mpb70, was significantly higher or similar to that previously reported, along with a higher estimated SP.Nevertheless, an indi vidual comparison between studies could be problematic and definitely biased due to differences in molecular targets, DNA extraction protocols, and validation methods (42).Moreover, we found a substantial agreement between microbiological culture and real-time PCR-targeting IS6110 (κ = 0.77), IS4 (κ = 0.72), or mpb70 (κ = 0.67).
Even if SE and SP are the main diagnostic performance indicators used, there are other factors that could influence the final utility of a technique, such as the disease's true prevalence in the region, the available laboratory resources, or its acceptability by veterinary professionals.The predictive values allow to estimate the variations in the diagnostic utility of a test based on its prevalence, so these values have great importance in allowing veterinary practitioners to interpret their results (43).Based on the obtained PLR and NLR, a real-time PCR targeting IS6110, IS4, or mpb70 positive results would confirm the bTB infection in any true prevalence scenario with 100% security (PPV).Regarding the credibility of the negative results, it was estimated an NPV of 90%, 85%, and 80% for real-time PCR targeting IS6110, IS4, and mpb70 respectively, when the true prevalence ranged from 0% to 60%.These results highlight the usefulness of IS6110, IS4, and mpb70 as targets of interest in the direct molecular diagnosis of bTB infection.
Despite the described results, several contradictory results between microbiological cultures and real-time PCR assays were observed.Of the 41 culture-negative animals, MTC DNA was detected in 5 animals with PCR targeting both IS6110 and IS4, whereas PCR targeting mpb70 detected 4 animals as positive.These findings could be associ ated with different factors impacting the sensitivity and/or specificity of microbiological culture and real-time PCR.Among the limitations of microbiological culture impacting MTC detection, a lack of analytical specificity due to the growth of other microorgan isms, including NTM (4, 5), or a decontamination process reducing the cell viability of a slow-growing bacteria with a paubacillary presentation (5,9,10,12) should be taken into account.Furthermore, cross-reactivity of the probes with other bacteria should be considered a limitation of real-time PCR.As mentioned above, although IS6110 is commonly used for the diagnosis of bTB, it may present cross-reactivity with NTM.To address this issue, IS4 and mpb70 primer pairs, which have been reported to be MTC-specific (7,44), were used in our study, proving that non-cross-reaction was found in culture-negative but real-time PCR-IS6110-positive animals.Besides, 2 out of 41 culture-negative animals and real-time PCR targeting IS6110, IS4, and mpb70-positive animals presented TBL confirmed by histopathology (data not shown).These results suggest that DNA extraction protocol 2 working together with real-time PCR targeting IS6110, IS4, or mpb70 could be a faster and more efficient assay for MTC detection in TBL or NVL samples during official post-mortem inspection compared to microbiological culture.Nevertheless, microbiological culture has actually been an essential technique for mycobacterial isolation and molecular epidemiology studies so far (4).
On the other hand, a small number of animals positive to microbiological culture were found to be PCR-negative targeting IS6110 (2 out of 41), IS4 (4 out of 41), or mpb70 (7 out of 41).The presence of inhibitors impairs real-time PCR performance; nevertheless, this factor was ruled out since IAC amplifications were observed in all of these PCR-nega tive samples.Another reason that could explain these negative results could be a low mycobacterial load that would be beyond the LOD, resulting in undetectable results for these assays, especially in the case of mpb70, which is a single-copy gene (16).In addition, although it seems to be an uncommon case, several isolates lacking the IS6110 element have been reported not only for M. tuberculosis (45)(46)(47) but also for M. bovis (48).
The present study describes a complete protocol, including sample pre-processing, DNA purification, and real-time PCR analysis.According to our results, DNA extraction protocol 2 and real-time PCR targeting IS6110 or IS4 could be potential first-choice molecular assays to detect MTC directly in fresh bovine tissue samples.These protocols proved to be rapid, highly sensitive, and specific diagnostic tools as an alternative to microbiological culture, which could take up to 3 months to complete, shortening the turnaround time for decision makers to be promptly informed.Furthermore, the low proportion of animals tested with TBL (16 out of 81) in our study highlights the diagnostic potential of this real-time PCR protocol to detect MTC directly from fresh LN tissue samples at early stages of infection and, therefore, when the mycobacterial load is low.This would be essential in the current framework, in which successful eradication schemes have reduced the number of reactors with TBL at the slaughterhouse; thus, the implementation of these cost-effective molecular tools in surveillance and control programs would pave the way to eradicate bTB.However, the major limitation of this study that hinders its wide application in the present setting is the lack of a ring trial that would allow its validation in different laboratories as well as different epidemiological scenarios.Therefore, further work on the re-validation of the present protocol should be performed in the future.
FIG 2
FIG 2 Graphical representation of the estimated NPV based on the validity of the real-time PCR targeting IS6110 (blue line), IS4 (green line), and mpb70 (orange line) and the true prevalence of bTB.
TABLE 1
Description of the obtained results by microbiological culture, real-time PCR-IS6110 (protocols 1 and 2), IS4, and mpb70 in relation to the presence of tuberculosis-like lesions
Overall comparison in relation to the presence or absence of TBL
a +, positive.b −, negative.c TBLs, tuberculosis-like lesions.d NVLs, non-visible lesions.
TABLE 2
Diagnostic performance summary of real-time PCR-IS6110 by comparing with microbiological culture assays under two different extraction protocols, 1 or 2 f a FPR: false positive ratio.b FNR: false negative ratio.c PLR: positive likelihood ratio.d NLR: negative likelihood ratio.e 95% CI: 95% confidence interval.f Estimates were calculated for an imperfect gold standard assay using EPIDAT 3.1 software (Software for Epidemiologic Analysis of Tabulated Data).September/October 2023 Volume 11 Issue 5 10.1128/spectrum.00348-237
TABLE 3
Diagnostic performance summary of real-time PCR-IS6110, IS4, and Mpb70 compared with reference microbiological culture assays (imperfect gold standard assays) under the same cycling conditions f
TABLE 4
McNemar test a Pairwise frequencies of real-time PCR results for IS6110, IS4, and mpb70 in 81 animals according to microbiological culture results (positive n = 40 and negative n = 41).
a September/October 2023 Volume 11 Issue 5 | 2023-09-14T20:16:49.654Z | 2023-09-14T00:00:00.000 | {
"year": 2023,
"sha1": "2f25bf8ac82485d6bc24e4ebd914bf3bb798830f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1128/spectrum.00348-23",
"oa_status": "GOLD",
"pdf_src": "ASMUSA",
"pdf_hash": "2f25bf8ac82485d6bc24e4ebd914bf3bb798830f",
"s2fieldsofstudy": [
"Biology",
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
} |
220843988 | pes2o/s2orc | v3-fos-license | Legacy Effects of Hydrologic Alteration in Playa Wetland Responses to Droughts
Wetland conservation increasingly must account for climate change and legacies of previous land-use practices. Playa wetlands provide critical wildlife habitat, but may be impacted by intensifying droughts and previous hydrologic modifications. To inform playa restoration planning, we asked: (1) what are the trends in playa inundation? (2) what are the factors influencing inundation? (3) how is playa inundation affected by increasingly severe drought? (4) do certain playas provide hydrologic refugia during droughts, and (5) if so, how are refugia patterns related to historical modifications? Using remotely sensed surface-water data, we evaluated a 30-year time series (1985–2015) of inundation for 153 playas of the Great Basin, USA. Inundation likelihood and duration increased with wetter weather conditions and were greater in modified playas. Inundation probability was projected to decrease from 22% under average conditions to 11% under extreme drought, with respective annual inundation decreasing from 1.7 to 0.9 months. Only 4% of playas were inundated for at least 2 months in each of the 5 driest years, suggesting their potential as drought refugia. Refugial playas were larger and more likely to have been modified, possibly because previous land managers selected refugial playas for modification. These inundation patterns can inform efforts to restore wetland functions and to conserve playa habitats as climate conditions change.
Introduction
Seasonal and ephemeral wetlands are ecologically important across a variety of landscapes, providing habitat to aquatic species and food and water resources to terrestrial species (Leibowitz 2003;Tiner 2003;Bolpagni et al. 2019). These wetlands are typically inundated for only part of each year, with hydroperiods (i.e., durations of inundation) that can vary substantially from one wetland to another across small geographic scales (Calhoun et al. 2017;Davis et al. 2019). Climate-change impacts on biodiversity in upland embedded (Mushet et al. 2015), seasonal wetlands will likely be driven largely by changes in wetland hydroperiod and may be most readily discernible during periods of climatic extremes, such as droughts (Walls et al. 2013;Davis et al. 2019). In some cases, climatechange effects on wetland inundation may interact with legacy effects from past land-management practices, ranging from unintentional effects (e.g., wetland soil compaction from livestock trampling) to direct, intentional hydrologic and geomorphic alteration (e.g., ditching, dredging, Micah T. Russell and Jennifer M. Cartwright contributed equally to this work. or filling). As climate conditions change, some localized areas may change more gradually and thus serve as climatic refugia (Morelli et al. 2016), while sites of persistent wetness despite climatic drying may provide hydrologic refugia (McLaughlin et al. 2017). However, few studies have sought to identify potential hydrologic refugia for upland embedded, seasonal wetlands. Identification of such refugia could improve wetland management and climate adaptation, and potentially inform wetland restoration efforts.
In the semi-arid sagebrush-steppe ecosystems of the northern Great Basin, USA (in southern Oregon, southern Idaho, northern Nevada and northern Utah, USA), snowmelt, as well as direct precipitation and surface runoff from summer thunderstorms, collects in terminal wetlands, salt lakes, and playas. Playas are seasonal (ephemeral) wetlands that form in closed basins with a negative annual water balance and remain dry throughout much of the year (Rosen 1994). Playas are often associated with surface evaporites and concentrations of clay minerals that impede infiltration; thus, they may become flooded after small amounts of precipitation (Rosen 1994). Playas in the northern Great Basin typically retain shallow water (from a thin film of water to tens of centimeters) from late winter through early summer, after which evaporation dries them out ( fig. 1). The degree of groundwater connectivity, if any, for most playas in the region is unknown (Dlugolecki 2010), but a 3-year study in which a playa was equipped with piezometers did not detect any subsurface soil saturation (Clausnitzer et al. 2003). In the northern Great Basin, playas exhibit considerable seasonal and inter-annual variability in inundation extent and timing and unlike more southern Great Basin playas, they often support diverse vegetation and are not saline (Dlugolecki 2010). According to J. Moffitt with the Natural Resources Conservation Service in Redmond, OR (personal communication, May 28, 2019), while some large "lakebed" playas in the northern Great Basin are characterized by seasonally inundated hydric plant communities, many playas are inundated less predictably and are ringed by encroaching upland shrubs. Furthermore, playas in the northern Great Basin differ from those in agricultural regions (e.g., Great Plains) in that they are largely embedded in sage brush steppe with uniform land use (i.e., grazing or range conservation). Widespread declines in playa inundation in Great Plains playas have been linked to agricultural conversion, road building, and even conservation easements (Cariveau et al. 2011;Bartuszevige et al. 2012;Tang et al. 2015;Tang et al. 2016;Tang et al. 2018), whereas trends in inundation for northern Great Basin playasand the role of livestock operations in these trendsare largely unknown (Dlugolecki 2010).
When inundated, some of these seasonal wetlands teem with aquatic invertebrates that provide a rich food source for migrating birds (O'Neill 2014). Cumulatively, hundreds of small playas throughout the northern Great Basin may be important spring migration habitats for shorebirds, providing resting and foraging opportunities as stepping stones between large marsh complexes (Dlugolecki 2010;Oring et al. 2013). Later in the season, some moist playa soils support grasses, sedges, and forbs that provide forage for wildlife including pronghorn (Antilocapra americana), mule deer (Odocoileus hemionus), and Greater sage grouse (Centrocercus urophasianus). Greater sage grouse, a federally-listed species of concern, sometimes use playas as leks (strutting grounds) and depend on the diverse forage and associated insects that grow in some playas when upland communities have already desiccated (Dlugolecki 2010;Hagen 2011). In a survey of 70 central Oregon playas, Bureau of Land Management (BLM) technicians identified 159 Fig. 1 Dry playa viewed (a) from the ground and (b) in aerial imagery. In (b), the playa to the north has not been modified, whereas the playa to the south has a berm and pit ("dugout"), indicated by the black arrow, that were constructed to provide water to livestock. Image in (a) by M. Russell; aerial imagery (b) courtesy of Esri world imagery basemap vascular plants, 51 bird species, 13 non-bat mammal species, 12 bat species, and 62 species of aquatic macroinvertebrates (J. Moffitt, personal communication, May 28, 2019). Although little research has been conducted on playas in the northern Great Basin, studies from other regions suggest that playa wetlands are important to biodiversity across much larger areas than the playas themselves (Haukos and Smith 2003).
Because playa inundation is likely driven largely by precipitation, snowmelt, and evaporation, the water and food resources playas provide to wildlife may be vulnerable to droughts. Drought conditions in the northern Great Basin are projected to intensify under certain climatechange scenarios (Ahmadalipour et al. 2017), which may exacerbate the ecological consequences of drought (Crausbay et al. 2017), including shifting the status of many playas from seasonal to ephemeral, or ephemeral to dry. Additional consequences of drought include the degradation of ecosystem services associated with wildlife and migratory bird habitats. In this context, variable inundation patterns among playas could imply that a small subset of playas might provide important localized refugia from droughts, i.e., isolated patches of viable habitat and resources during droughts that might help sustain wildlife populations under increasingly dry conditions (Dickman et al. 2011;Hermoso et al. 2013;Selwood et al. 2015;McLaughlin et al. 2017). If so, identifying which playas potentially function as drought refugia could help managers anticipate and potentially mitigate drought impacts on playa ecosystem services.
Strategies for mitigating drought impacts on playa habitats may include addressing the ecological impacts of previous land-use practices. In the northern Great Basin, many playas have been hydrologically modified by constructing berms and digging pits (referred to as "dugouts") to retain water for livestock later into the summer and fall ( fig. 1b). Many playas on public and private lands with the potential for holding water were modified between 1950 and 1970, some with multiple dugouts, concentrating livestock impacts in sensitive playa habitats. This form of hydrologic modification may concentrate water in a small area in and around the dugout, preventing the playa basin from filling to capacity and altering the playa hydroperiod, with possible consequences for wetland productivity (Dlugolecki 2010). Subsequent desiccation of portions of the playa may lead to encroachment of invasive exotic grasses and silver sage (Artemisia cana) (Bureau of Land Management 2013), as well as a general reduction in water quality for any remaining ponded areas (Wyland 2013). Though dugouts allow for enhanced summertime water retention by reducing losses to evaporation, the deeper water, steep bathymetry and reduction of playa-inundated surface area may limit their functionality as habitat for all but a small number, and few species, of shorebirds.
The BLM Prineville District in central Oregon has implemented an experimental playa restoration program that involves filling dugouts, excluding livestock from playas, mowing silver sage to create opportunities for native grass recolonization, and removing encroaching juniper (Bureau of Land Management 2013). BLM hydrologic models suggest the resulting increases in wetted playa surface area will depend on the relationship between playa volumetric capacity and dugout volumetric capacity. For example, restoring a 5.99-ha (ha) playa with a 15.45% ratio of dugout-to-playa capacity was predicted to increase playa areal inundation by 20.98%, whereas restoring a 88.27ha playa with a 0.45% ratio of dugout-to-playa capacity was predicted to increase playa area inundation by only 0.50% (Bureau of Land Management 2013). Restoration may also decrease water availability late in the season due to increased shallow-water surface area and associated losses to evapotranspiration. Restoration may therefore represent a trade-off between improved playa conditions and forage for wildlife like sage grouse, versus negative impacts to other species that may have extended their ranges with artificial late-season water sources (J. Moffitt, personal communication, May 28, 2019). Researchers using remotely sensed soil conductivity found preliminary evidence of successful rewetting of playa basins following restoration (Reuter et al. 2013), but long-term effectiveness monitoring data are not yet available.
U.S. Fish and Wildlife Service (USFWS) land managers at the Sheldon-Hart Mountain National Wildlife Refuge (NWR) Complex in southern Oregon and northern Nevada (hereafter 'the Refuge'), which no longer supports livestock operations and manages the landscape for conservation of a variety of wildlife and habitats (U.S. Fish and Wildlife Service 2013), are similarly interested in restoring some playas to more natural hydrologic conditions. Information is currently limited to help land managers understand where and why this highly intermittent water resource is available from year to year, such as drivers of playa inundation or spatial-temporal trends in playa inundation. Furthermore, land managers are tasked with conserving the key wildlife habitat features of playas despite projections of increasing summer drought severity across the northern Great Basin due to climate change (Ahmadalipour et al. 2017). Some future projections of climate variables for the Refuge for the years 2055 and 2085 suggest drier summer climate conditions for the Great Basin, primarily due to increased evapotranspiration ( fig. 2) (ClimateWNA Map 2019). Though peak playa inundation occurs in the spring, increasingly dry summers may further constrain the hydroperiod and reduce the number of playas that do retain water into the summeran extremely valuable ecosystem service in a water-scarce landscape.
To better understand playa hydrology and inform Refuge restoration planning efforts, we addressed the following questions: (1) What are the trends in playa inundation? (2) What are some of the hydrological, land use, and landscape factors that influence inundation? (3) How is playa inundation affected by increasingly severe drought? (4) Are there particular playas that remain wet under meteorologically dry conditions and thus could provide hydrologic refugia during droughts? and (5) How are these refugial patterns related to playa modification? In addition to contributing to the body of knowledge on upland embedded wetlands in the region (Comer 2005), addressing these questions may assist land managers in considering potential restoration options and in managing playas so they continue to provide critical habitat for animal and plant species in a drier climate.
Study Area
The Sheldon-Hart Mountain National Wildlife Refuge Complex ( fig. 3) consists of two co-managed NWRs: Hart Mountain NWR (1093 km 2 ) in southeastern Oregon and Sheldon NWR in northern Nevada (2321 km 2 ). Hart Mountain NWR ranges in elevation from 1097 to 2458 m, consisting of a fault block ridge rising steeply from the west, with low hills and ridges descending gradually to the east. Sheldon NWR consists of rimrock tablelands, rolling hills, and gorges ranging in elevation from 1250 to 2195 m. Annual precipitation primarily in the form of winter snow and spring rain averages 305 mm for Hart Mountain NWR, and is slightly less for Sheldon NWR, with surface water in both refuges limited to springs, intermittent streams, and shallow playas. Soils and plants are typical of a high desert sagebrush-steppe ecosystem, and the land is managed for conservation of over 340 species of wildlife (U.S. Fish and Wildlife Service 2013). Playas on the Refuge (153 total) vary greatly in size, from approximately 0.5 ha to >1000 ha (mean 51.3 ha), although the majority (roughly two-thirds) of playas were < 20 ha. There is little variation in the playas' soils (generally composed of silty clay or silt loam) or slopes (0-2%). The surrounding terrain commonly consists of very stony or cobbly loam, with small and subtly sloping (2-15%) catchments (Natural Resources Conservation Service 2020).
Modeling Factors that Influence Playa Inundation and the Effects of Drought
We evaluated time series of inundation patterns for 153 playas on the Refuge, roughly half of which were modified Climate variables represented are: mean summer precipitation (MSP), annual heat-moisture index (AHM), summer heat-moisture index (SHM), reference evaporation (Eref), and climatic moisture deficit (CMD). All climate variables were obtained from ClimateWNA (2019) (contained dugouts), using remotely sensed presence or absence of surface water. Monthly surface-water data (30-m resolution) for the period February 1985 -October 2015 and covering all areas within the refuge boundaries were obtained from the Global Surface Water Explorer (GSWE) API (Pekel et al. 2016), a tool for visualizing water presence, seasonality, and persistence based on calibrated Landsat 5, 7, and 8. GSWE relies on an expert systems procedural decision tree to classify pixels as water, land, or non-valid observations using Landsat-derived multispectral and multitemporal attributes. Equations in the decision tree were determined by visual analytics derived from a spectral library of the three classes across a wide variety of conditions, as well as images enriched by Normalized Difference Vegetation Index and Hue-Saturation-Value transformations. For pixels that could not be assigned to a class because of spectral overlap, evidential reasoning was used, taking into consideration geographic location and temporal trajectory in establishing likelihood of water presence. Validation by Pekel et al. (2016) integrated visual confirmation of over 40,000 randomly selected points distributed geographically, temporally, and across sensors. Overall errors of omission were reported as less than 5%, with overall errors of commission less than 1% (Pekel et al. 2016). Given that no playas were smaller than a 900-m 2 Landsat pixelthe smallest (0.5 ha) is covered by five pixelsthe spatial resolution of the surface-water data was adequate. We hypothesized that playas would be responsive to local climatic conditions, and that inundation of an individual playa may also be related to its size and modification history. We thus modeled playa inundation with the following covariates: playa size (m 2 ), modification status (dugout presence/absence), and Standardized Evapotranspiration Precipitation Index (SPEI). We obtained monthly SPEI data (4-km resolution) from the West Wide Drought Tracker (Abatzoglou et al. 2017). SPEI subtracts monthly evapotranspiration (based on average monthly air temperature) from monthly precipitation to create a simple water balance. SPEI ranges from −5 to 5 and is standardized such that a value of 0 represents the long-term average conditions for a site, negative values indicate conditions drier than the long-term average, and positive values indicate wetter-than-average conditions. We represented climatic moisture conditions for each year using October 12-month SPEI (SPEI-12), which integrates climate conditions from November of the previous year through October of the year in question. Shapefiles representing playa borders, area, and modification status were provided by the USFWS.
After re-projecting and stacking the data in R (R Core Team 2018), we calculated the areal percentage of each playa that was wet at each monthly time step (February through October, 1985 through 2015; an example is shown in Appendix A). Data from November through January were commonly not available due to cloud cover and were not used. To examine annual wetted duration, we calculated the number of months (zero to 9) in each year that each playa held any amount of water. Data exploration revealed high frequencies of zero values for both monthly percent wet and annual wetted duration. We therefore fit Generalized Linear Mixed-Effects Models (GLMMs) to Fig. 3 The study area contains 153 playas managed by the U.S. Fish and Wildlife Service on the two wildlife refuges that comprise Sheldon-Hart Mountain National Wildlife Refuge Complex, in southern Oregon and Northern Nevada, USA. Hillshade basemap courtesy of Esri both datasets using the lme4 package (Bates et al. 2015) in R, with SPEI-12, playa area, and playa modification status as covariates. GLMM modeling was performed in the following sequence: (1) determination of optimal random and fixed-effects structure based on computed Akaike Information Criterion (AIC) values; (2) model averaging to produce parameter estimates (necessary when no combination of fixed effects results in a model with substantially lower AIC); and (3) estimation of marginal and conditional R 2 to quantify predictive power of the best model (lowest AIC) in each of the two model sets.
Overdispersion, or variance larger than the mean, in the monthly percent wet data due to high frequencies of zero values was addressed by converting percentages to a binomial distribution (i.e., water presence/absence) (Zuur et al. 2009). We reasoned that this binomial representation of water availability was ecologically justified because even a small amount of observed inundation could potentially represent valuable habitat given the minimum surfacewater detection size of 900 m 2 (i.e., a single 30-m × 30m Landsat pixel) and the general aridity of the landscape. In addition, conversion to a binomial distribution eliminated concerns that "percent wet" would be less accurate for the smaller playas represented by as few as five Landsat pixels. We used a Poisson distribution to model annual wetted duration (measured as a count of total months wet); no modification was necessary to address overdispersion. We rescaled continuous predictor variables (SPEI-12 and playa area) by subtracting the mean and dividing by the standard deviation to facilitate direct comparison of model coefficients and to aid in model convergence.
GLMMs account for autocorrelation in time-series data by explicitly modeling correlation of subsamples within sampling units (i.e., groups) in the context of a user-specified random-effects structure. We used a unique identifier for each playa nested within Refuge units (i.e., Hart Mountain NWR or Sheldon NWR) as the grouping variable in our analyses. Iterative inclusion of random slopes for each predictor variable to allow for heterogeneity among groups in the influence of fixed effects did not improve model fit based on AIC values, indicating that the "random intercept-only" model represented the optimal random-effects structure. Accordingly, we fit random intercept-only models with all combinations of fixed effects to determine the optimal fixed-effects structure (Zuur et al. 2009). Because no combination of fixed effects resulted in substantially lower AIC for either model set, we then used model-averaged parameter estimates and associated 95% confidence intervals (estimated from weighted unconditional standard errors) as our basis for inference; if the 95% confidence interval for a fixed effect overlapped zero, we concluded a non-significant effect. Next, we estimated values of marginal and conditional R 2 to quantify predictive power of the best model (lowest AIC) in each of the two model sets (Nakagawa and Schielzeth 2013).
To examine how playa inundation is affected by increasingly severe drought, the GLMM models for water presence (binomial) and water duration (Poisson) were fitted with the 12-month SPEI values representing historical average conditions (SPEI = 0), moderate drought (SPEI = −1.0), severe drought (12-month SPEI = −1.5), and extreme drought (12month SPEI = −2.0), following thresholds on drought severity used by Yu et al. (2014) and Ahmadalipour et al. (2017). The binomial model was used to determine the mean probability of predicted wetness in each scenario, and the Poisson model was used to determine the mean predicted months wet per year in each scenario.
Identification of Hydrologic Refugia during Droughts
To identify playas that might serve as hydrologic refugia during droughts (i.e., a small subset of playas that might hold water even under the driest conditions in our dataset), we began by identifying the 5 years (from 1985 through 2015) that had the lowest observed playa inundation (fewest numbers of wet playas) in the study area. Although we selected these years based solely on observed playa inundation patterns, we also used October SPEI-12 to confirm that these years adequately represented meteorological drought conditions for the study area.
We reasoned that in the 5 years of scarcest playa inundation, the few playas that did retain water might serve as potential refugia (water and/or food sources) for wildlife during droughts. In particular, we sought to identify refugia that demonstrated temporal stability across multiple drought years. For each playa, we calculated the number of years wet (from zero to 5) and the average number of months wet during each of the 5 driest years. We classified playas as potential drought refugia if they remained wet in all 5 of the driest years and held water for at least 2 months on average during the 5 driest years. After identifying playas that served as possible drought refugia, we asked whether these playas were generally represented by playas that consistently held water each year across a range of climate conditions, i.e., whether playas that were generally wet under average weather conditions could be used to identify refugial playas during droughts. To that end, we calculated the total number of years each playa was wet (defined as any amount of wetness for any length of time) from 1985 through 2015, excluding the five dry years. We then compared these patterns ("all other years") to the playa inundation patterns for the 5 driest years to determine if inundated playas in average years explain the drought response. Finally, we examined relationships between drought-refugia metrics (number of years wet and average number of months wet during the 5 dry years) and playa size using Spearman correlations, and relationships of these metrics with modification status using a one-way analysis of variance (ANOVA).
Results
The number of wet playas across the Refuge varied by month and year, with playa wetness generally peaking in spring (March through May) as a result of rainfall and snowmelt and declining over the course of the summer ( fig. 4). At several time points in the dataset, no playas were observed inundated, i.e., all were dry (Table 1). On average (across the months of February through October, from 1985 through 2015), approximately 19% of the 153 playas across the study area were inundated. Notably, a maximum of only approximately 62% of playas were wet at any given time point (in April 2006), meaning that slightly more than 1/3 of the playas were dry even in wet climatic conditions. A total of 49 out of 153 playas (32%) had no observed inundation at any time point in the dataset. The remaining 104 playas were inundated an average of 41% of years during April and 20% of years during September.
Factors that Influence Playa Inundation
AIC values and associated AIC weights (w i ) used for model averaging of both the water presence (binomial) and water duration (Poisson) model sets are presented in Table 2. Model convergence was achieved for 79% of models. In both models sets, model-averaged parameter estimates were statistically significant (i.e., confidence intervals did not overlap zero) for SPEI and modification status, but not playa area ( fig. 5a). SPEI was positively related to water presence (fig. 5a) and water duration ( fig. 5b), indicating that wetter climate conditions increased the probability and seasonal length of playa inundation. Negative parameter estimates for modification status indicated that the probability of a playa holding water and its duration of inundation were significantly lower if it was unmodified.
The best models (lowest AIC values in Table 2) for water presence and duration produced low marginal R 2 (0.11 and 0.10, respectively) but higher conditional R 2 (0.78 and 0.82, respectively), indicating that there was considerable variation among playas in the presence and duration of water, and that accounting for that variation through the inclusion of a random effect substantially improved model fit.
Using the water presence (binomial) model with drought scenarios (i.e., SPEI-12 values indicating various levels of drought severity), the mean probability of a playa being wet across the full range of all other predictor variables declined from 22% (historical average), to 15% (moderate drought), to 13% (severe drought), to 11% (extreme drought) ( fig. 6a). Similarly, using the water duration (Poisson) model with drought scenarios, the predicted mean number of months wet per year for a playa across the full range of all other predictor variables declined from 1.69 (historical average), to 1.20 (moderate drought), to 1.01 (severe drought), to 0.85 (extreme drought) ( fig. 6b)
Refugial Playas during Droughts
We selected the years 1987, 1992, 2012, 2014, and 2015 to represent drought conditions based on playa inundation patterns ( fig. 7, see dashed vertical lines in (a) and solid circles in (b)). Specifically, these were the years with the lowest average numbers of wet playas across the study area: from 9.1 to 13.1 playas were wet (averaged from February through October), compared to a mean value of 29.1 wet playas across all years, 1985-2015. These 5 years also had the lowest annual maximum number of wet playas (maximum in each year from February through October), ranging from 22 to 28, compared to a mean of 58 from 1985 to 2015. In general, playa inundation (represented both as mean annual and annual maximum number of wet playas) was positively associated with October SPEI-12 ( fig. 7b). The 5 years that exhibited minimum playa wetness in the study area were characterized by moderate to severe drought conditions (all had October SPEI-12 < −0.8). Indeed, one of the selected years (1992) had the lowest October SPEI-12 value between 1985 and 2015 (−1.66, indicating severe drought), and three of the selected years (1992, 2012, and 2014) were among the 4 driest years in the study (all had October SPEI-12 < −1.4).
Nearly half of all playas (71 out of 153 playas; 46%) had no water in any of the 5 dry years selected for drought-refugia analysis. Of the remaining 54% that contained water in at least one dry year, 27% held water in at least 3 years, 15% held water in at least 4 years, and only 6% (nine playas total) held water in all 5 of the dry years. Notably, the nine playas that were wet in all 5 years were (by definition) wet in 1992, a year of severe drought in which the vast majority of playas on the refuge became dry ( fig. 7). The average number of months wet during the 5 dry years varied among playas, ranging from 0 to 6.8 months ( fig. 8a). The majority of playas (105 out of 153; 69%) were wet in fewer than 3 of the 5 dry years and for less than 1 month per year on average during those years.
Of the nine playas that held water during all 5 of the dry years, 6 of them (4% of all playas) also held water for at least 2 months on average during those years, meeting our criteria for drought refugia. These 6 playas appear exceptional in their history of providing water Fig. 6 Using drought scenarios (i.e., Standardized Precipitation-Evapotranspiration Index (SPEI-12) values indicating various levels of drought severity), Generalized Linear Mixed-Effects Model (GLMM)-derived mean and standard deviation for (a) percent probability of predicted wetness for a playa, and (b) predicted months wet per year for a playa (out of 9 months), across the full range of all other predictor variables. Drought scenarios include: long-term, historical average conditions (SPEI = 0), moderate drought (SPEI = -1), severe drought (SPEI = -1.5), and extreme drought (SPEI = -2) during drought conditions, and specifically during times when playa inundation is exceedingly scarce across the landscape.
Playas that were wet during most or all of the 5 driest years (i.e., with higher values on the horizontal axes of fig. 8) also tended to be wet during most other years (note absence of points in the lower right quadrant of fig. 8b). Thus, any playas identified as drought refugia based on dry-year analysis were also likely to be wet in other (non-drought) years. However, the reverse was not necessarily true: consistent wetness in all other years (i.e., higher values on the vertical axis of fig. 8b) did not always imply consistent wetness during dry years (note presence of points in the upper left portion of fig. 8b, representing playas that were generally wet in most years but often dried out during the 5 driest years). These results indicate that playas that were consistently inundated during nondrought years did not necessarily serve as drought refugia.
Based on Spearman correlation, larger playas were more likely to hold water during droughts: ρ(151) = 0.35, P < 0.001 for the relationship between playa size and number of years wet during the 5 driest years and ρ(151) = 0.36, P < 0.001 for the relationship between playa size and average months wet during those 5 years. In addition, modified playas held water for a greater number of years during the 5 driest years (F = 14.96, P < 0.001) and for more months on average during those driest years (F = 9.58, P < 0.01) compared to unmodified playas. Modified playas also were more likely to hold water during all years (1985 through 2015) across a range of climate conditions (F = 10.4, P < 0.01). Of the nine playas that held water in all 5 dry years, seven (78%) were modified. Of the 6 refugial playas (wet in all 5 dry years for at least 2 months on average), 5 (83%) were modified. Notably, modified playas also tended to be larger in size than unmodified playas (F = 9.83, P < 0.01). Collectively, these results suggest strong positive associations between playa size, modification status, and functional status as drought refugia.
Discussion
Interannual variation in weather conditions is a clear driver of playa inundation in this study area. Although groundwater connectivity of playas has not been rigorously studied in this region, we found that playa inundation is closely tied to local precipitation and evapotranspiration patterns, suggesting that playas in our study area may not be receiving large groundwater subsidies. Analysis of piezometer data from a playa roughly 100 km north of the study area similarly indicated lack of groundwater connectivity (Clausnitzer et al. 2003).
Our results confirm expectations that under drought conditions, managers can anticipate fewer playas to hold water and for those playas to be inundated for shorter seasonal periods. For example, during the dry years 2012 through 2015, in which October SPEI-12 ranged from −0.33 to −1.45, the number of inundated playas in our study area in July ranged from 1 to 9, compared to a July average from 1985 to 2011 of 26 inundated playas. Playa responses to droughts may have important implications in the context of regional concerns about drought intensification under climate change, i.e., droughts that may become longer, more frequent, and/or more severe. For example, Ahmadalipour et al. (2017) projected long-term changes in summer 3-month SPEI in the northern Great Basin under an RCP8.5 emissions scenario averaging a 0.02 unit decrease per year, equivalent to a decrease of 0.5 SPEI units over 25 years. In a future climate with drier summers and greater evaporative demand (see fig. 2), periodic droughts will likely exacerbate loss of ecosystem services and aquatic habitat provided by playas and may highlight the importance of the few playas that can provide drought refugia to wildlife. The ability of playa wetlands that have historically functioned as drought refugia to continue doing so in a drier future climate is unknown. Thus, conservation of playa-dependent plant and animal species may necessitate further study of the hydrogeologic and hydrogeomorphic properties of playas identified as potential drought refugia, including improved understanding of catchment elevations and slopes, catchment area to wetland ratio, surface area to volume ratio, playa bathymetry and soil characteristics, and possible shallow groundwater interactions (Parker et al. 2010;Bartuszevige et al. 2012). Such studies may also further explain the strong positive associations we observed between playa size, modification status, and functional status as drought refugia. In addition to intensifying droughts, climate change may also increase the likelihood of extreme precipitation events (Monier and Gao 2015), prompting researchers to consider the interactions between intensifying droughts, extreme precipitation events, and playa inundation.
Although playas of the northern Great Basin differ substantially from playas in other regions such as the Great Plains-in terms of climate, types of hydrologic alteration, and surrounding land use and vegetation-our findings suggest that some inundation dynamics in northern Great Basin playas may be similar to those of playas in other regions. Studies from Spain (Castaneda and Herrero 2005) and from the Rio Grande plains of Texas (Parker et al. 2010) have demonstrated substantial variability in inundation patterns both among playas and across years, which we also observed. Playa responsiveness to precipitation patterns and an overall tendency for larger playas to have greater likelihood of inundation were observed by Bartuszevige et al. (2012) and Cariveau et al. (2011) in the Great Plains as well as in this study. In addition, our finding that roughly one third of playas had no observed inundation at any time point in the dataset suggests that the hydrology of playas, or at least a subset of playas, may have been altered relative to historical conditions. In Great Plains playas, Tang et al. (2016Tang et al. ( , 2018 similarly found evidence that only a minority of playa wetland footprints actually experienced regular inundation and supported hydrophytic vegetation. Disentangling the hydrologic consequences of climate change (e.g., drought intensification) and legacy effects of land-management practices is challenging for playas in a number of regions, including in the northern Great Basin.
As managers plan for climate-change impacts to playa wetlands, they may simultaneously be considering hydrologic and ecological restoration efforts. Although we found that playas with dugouts were more likely to hold water, to retain water for longer periods, and to serve as drought refugia, we stress that correlation between modification status and observed playa hydrology does not reveal the nature or direction of causation. Indeed, far from being a randomly assigned treatment, playa modification may have been guided by natural variability in playa hydrology that was observed by previous generations of land managers. Although documentation of historical playa modification decisions by land management agencies and individual ranchers is scarce, we speculate that in many cases, land managers may have chosen to modify those playas that they noticed were consistently holding water from one year to another, and perhaps also playas that were observed to hold water in dry years. Such targeted modification could have benefited livestock operations by optimizing water retention on the landscape for a given amount of investment. If so, then the positive association we observed between modification status and playa inundation may be due at least in part to the historical selection of drought refugia as sites for modification. Fig. 9 One example playa (a and b) showed inundation concentrated near the dugout location in (a) April and (b) July, averaged over 1985 through 2015. In this playa, the dugout is located near the lowest-elevation zone of the playa. By contrast, in another example (c and d), the dugout is not located in the areas of greatest wetness in (c) April or (d) July and is instead located almost 3 m higher than the lowestelevation zone of the playa Because water modifications for livestock are no longer needed to support grazing operations, Refuge managers are considering hydrologic restoration of some playas, which could include leveling berms and filling dugouts to create a smoother playa surface that more closely approximates the geomorphology of unmodified playas. Such restoration efforts would be aimed, in part, at increasing the geographic extent of playa surface inundation by preventing water from draining into dugouts. Although we did not perform a comprehensive assessment of dugout locations relative to playa wetness patterns, we noticed that not all dugouts are located within the most-inundated zones of their respective playas. Examination of two playas that held water in all of the 5 driest years ( fig. 9) helps to illustrate several management considerations that are relevant to restoration planning. One playa ( fig. 9a and b) has a dugout located within the zone of greatest wetness in the lowest-elevation area of the playa. In such a case, filling the dugout might reduce drainage of water from the playa surface and enlarge the inundated extent within the playa. However, in another example ( fig. 9c and d), the dugout is located almost 3 m above the lowest area of the playa and is not within the most frequently inundated zone. In this case, it is unclear how dugout filling might affect playa hydroperiod and areal extent of inundation. This comparison underscores the importance of site-specific restoration planning and suggests how remotesensing inundation analysis could help inform such planning efforts. Furthermore, restoration planners may need to consider playa area and the ratio of dugout-to-basin volumes. Large "lakebed playas" may provide more consistent water, emergent vegetation, and aquatic invertebrates than small "ponded clay playas," which often have encroaching sagebrush (J. Moffitt, personal communication, May 28, 2019). However, relative gains in inundated area resulting from filling dugouts may be limited in large playas due to a low ratio of dugout volumetric capacity to playa volumetric capacity.
Because playa modification was not a randomly assigned experimental treatment, our study was by nature observational, and hence it does not resolve the mechanisms by which modification may increase or decrease the ecosystem services provided by playas as water and food sources within the landscape, or as potential drought refugia. One future approach to addressing this question could involve incorporating a randomized experimental design into future playa restoration efforts. For example, if a randomly chosen subset of modified playas were assigned to undergo restoration as an experimental treatment (withholding all other modified playas as a control), then the effects of restoration on playa hydrology and ecosystem services could be rigorously assessed. Such considerations in the design of ecological restoration programs can increase the knowledge and insights gained from subsequent monitoring programs (Block et al. 2001). In addition, pre-and post-restoration field monitoring of ecosystem services provided by playas (e.g., migratory bird use, aquatic habitat and invertebrate food resources provided, late-season forage for terrestrial wildlife) could help ascertain the ecological consequences of attempts to restore playas to more natural hydrologic conditions. Finally, although we only calculated one-pixel (900 m 2 ) inundation of playas from fewer than 1.4% of our total observations, we acknowledge we were not able to separate dugouts containing water from wet playas. Future studies incorporating data with higher spatiotemporal resolution could further refine our understanding of the relationships between playa inundation seasonality, hydrologic refugia, and modification status.
Conclusions
Playa wetlands are an understudied but important seasonal water and food resource for migrating birds and other wildlife that may be negatively impacted by climate drying and drought intensification. This study identified factors that influence playa inundation in the northern Great Basin, simulated the effects of intensifying droughts on playas, and identified a subset of playas that appear to function as hydrologic refugia during droughts. Historically, larger playas and playas with dugouts were more likely to provide drought refugia; however, the ability of these playas to function as refugia under climate change and in the context of hydrologic restoration efforts is unknown. To adequately prepare for climatechange impacts and assess possible implications of restoration, more research is needed on playa geomorphology and hydrogeology, potentially coupled with rigorously controlled experimental restoration studies and long-term monitoring of restoration effectiveness. playa at each monthly time step. The year 1997 (a through d) was wetter than average, whereas 2015 (e through h) was drier than average based on Standardized Evapotranspiration Precipitation Index (SPEI). Percent wetness was subsequently converted to a binary value indicating water / no water for each playa in each time step.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. | 2020-07-29T14:57:01.668Z | 2020-07-28T00:00:00.000 | {
"year": 2020,
"sha1": "582cf0c401361cd7c2aa5b4de19a90217150d57e",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s13157-020-01334-0.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "d7e512e11acad1ea62256fedb7d330747b30cac8",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
} |
230701244 | pes2o/s2orc | v3-fos-license | DEVELOPMENT OF WAQF LAND FOR ECONOMIC DEVELOPMENT: IS A HOTEL A VIABLE PROJECT?
This article examines waqf (pious endowment) hotel projects developed by Yayasan Waqaf Malaysia (YWM) for the economic development of Malaysian communities. Hotel projects were selected as the scope of this study because of their function as large-scale commercial projects and their impact on the development of Malaysian communities. The study involved three waqf hotel projects in Peninsular Malaysia. This study used interviews as primary sources and relevant documents as secondary sources. Data were analysed using a thematic approach. This study found that waqf hotel projects contribute to the economy through the optimal use of land resources, income generation and employment opportunities. Thus, the implementation of waqf hotel projects have the ability to contribute to the socio-economic development of society and the state.
Introduction
Waqf (pious endowment) is an Islamic form of dedicated property which is physically retained and used beneficially for charitable purposes, either generally or specifically. From a religious perspective, Muslims perform waqf to be closed to their God, Allah. Waqf plays a significant role in financing the needs of society through social participation. In addition, history records waqf as a source of effective aid to Islamic governments for providing public amenities for their people and easing government spending. Two examples are the establishment of Al-Nuri Hospital in Damascus and Al-'Adudi Hospital in Baghdad. In education, Al-Azhar University depends on waqf to pay for its maintenance and expenditures, its teachers' allowances, and to provide living allowances for its students.
In Malaysia, specific institutions have the authority to manage, develop and regulate waqf matters, such as the Department of Awqaf, Zakat and Hajj (JAWHAR), the Yayasan Waqaf Malaysia (YWM) and the State Islamic Religious Councils (SIRCs). There are fourteen states in Malaysia, and each state has its own SIRC. The involvement and cooperation of the Federal and state governments are important for the optimal utilisation of waqf properties. At the state level, the SIRCs own valuable assets such as land; however, they have limited funding to develop these lands in their respective states. Thus, in order to promote the development of waqf properties in Malaysia, the Federal Government has allocated RM256.5 million, through JAWHAR, to develop these waqf assets at strategic locations across the country, particularly Waqf Hotel Projects.
Waqf hotels are erected on waqf land at strategic locations in either town centres or tourist areas. YWM acts as both project manager and executor of these waqf hotels. The SIRCs provide land waqf for the construction of hotels, and act as trustees of the waqf in their respective states. Four hotels were built on waqf land between 2010 and 2013: Klana Beach Resort (Port Dickson), Princess Beach Hotel (Tanjung Kling), Hotel Seri Warisan (Taiping) and Grand Princess (Kuala Terengganu). These hotels were developed on waqf lots which are strategically located to optimize their potential. They will generate income for the beneficiaries and boost the socio-economic wellbeing of the communities around the hotels.
This study investigates the role of charitable projects in developing the economy of local communities in Malaysia. Historically, waqf has played a vital role in providing waqf assets and financing for the community, such as backing hospitals, educational institutions and other infrastructure. However, in Malaysia, waqf land used to build hotel projects has also received financial assistance from the Federal Government. Thus, appropriate assistance is optimally utilized to contribute to the economic development of local communities.
The Role of Waqf in Financing Community's Need
Our research has found that waqf plays a vital role as a source of financing that can meet the needs of society. The function of waqf in the economy is seen in the use of resources and assets, such as land or buildings. Resources that can be used or exploited for the benefit of the community should be given the best available economic opportunities and thus outweigh assets which are left undeveloped, unused or wasted. According to Kahf (1999), waqf assets are used either for constructing mosques, schools, hospitals and orphanages, or as investments which produce goods and services for the market. The income from the investments is given to the waqf beneficiaries.
Moreover, waqf in the economy generates revenue, especially waqf that is productive. This productive waqf income is normally used to finance its beneficiaries or other waqf, thus ensuring sustainable returns and benefits. A cash waqf, for example, may be used in the long term to purchase fixed assets such as houses or land. Waqf assets are then rented out to generate income and finance the expenses of the beneficiaries or other waqf. Income from public bath fees and factory rentals may also generate waqf income (Toraman & Tuncsiper, 2007). According to Mandaville (1979), in 1423 in Edirne in Turkey, a man donated akҫe 10,000 cash to shops along Ağaҫpazari. Income derived from the rental revenue and loan capital of this waqf was then used to pay the wages of three people to recite the Quran in Kilise Mosque. In 1442, a government official, Paşa, endowed four shops, a public bath, and a total of 30,000 akҫe to finance the construction and maintenance of a mosque, soup kitchen and school in Gallipoli. According Nour (2012), Sultan Al-Mansour Qalawoon in Egypt endowed shops, apartment buildings for middle and low income tenants, warehouses, residential buildings and agricultural land to finance a complex of hospitals, schools, and a public shower. Al-Ashraf Qunsuh Al-Ghuri endowed his assets to finance his waqf complex known as Al-Ghuria. Abdul-Rahman Katkhuda endowed residential buildings around the Al-Azhar Mosque, fifteen shops, a coffee shop, a tailor's workshop, a residential place with gardens in the Boulaq which was leased to ambassadors and high officials who visited Egypt (usually from Istanbul), and agricultural land. The income generated from these waqf assets was used to develop the Al-Azhar and maintain the Qalawoon Hospital and other mosques. Kahf (1999) mentioned that hefty amounts of waqf income derived from an orchard and building were used to pay the salaries of the library staff, supervisors, and screenwriters.
The role of waqf in the economy is also to create jobs. Cizacka, Toraman and Tuncsiper cited waqf as significantly contributing to the economy by creating jobs. Ozturk (1995b) found that the institution of waqf under the Ottoman government provided employment and generated income for a large number of people, including nearly 16% of the Ottoman economy in the seventeenth century; 27% in the eighteenth century; and 16% in the nineteenth century. Cizacka cited Bilici that jobs created through waqf in Turkey were 12.68% in 1931 and 0.76% in 1990. Although there has been a decline in the proportion of funded by the waqf sector, the new percentages do not include the 30,000 businesses and small-scale private providers operating in waqf premises, as well as the 10,000 people who work in newly developed waqf institutions. Waqf not only provide employment in the economic sector but also benefit other sectors such as education and healthcare. In the health sector, the development of a waqf hospital creates jobs such as doctors, surgeons, ophthalmologists, pharmacists, nurses, cooks, cleaners and guards. For example, Al-Nuri Hospital in Damascus has 19 staff members, consisting of doctors, surgeons, ophthalmologists, pharmacists, nurses, cooks and assistants, janitors and a controller funded by waqf (Abattouy & Al-Hassani, 2014). Similarly, Hospital Dawud also has a trainee doctor, resident doctor and 24 consultants in various professional fields (Abouleish, 1979). In the educational sector, the development of educational institutions funded by waqf sources employs faculty such as lecturers, teachers, readers and librarians. Shatzmiller cites Al-Azhar University having 6,154 teaching staff in 1986 with their salaries paid through the investment of waqf assets.
Waqf has a significant impact on the economy by promoting the use of resources and assets, generating income and creating jobs. Projects or other activities can be potentially developed through income generation from waqf assets. With governments obliged to develop the economy through human development and provide infrastructure necessary for such development, we posit that waqf can play a role similar to what it has in the past. The role of waqf in the economy would also allow governments to reduce spending and involvement in the economy, avoid financial deficits, and lower interest rates. This would restore the distribution of income and wealth in society, eradicate poverty and encourage economic activity (Budiman & Kusuma, 2011).
Nik Hassan (1999), Ab Rahman (2009) and Suhaimi et al. (2014) found that waqf can stimulate economic activity in the local community. For example, in Malaysia, waqf provide business premises and commercial buildings through land or through the purchase of assets from the donated waqf. Through the provision of business facilities, the community has the opportunity of a more affordable rental rate, and Muslim businesses can expand and reduce their dependence on other financial resources. Even the SIRCS acquire income from such economic activities, as with the Islamic councils of Johor, Penang, Selangor and Wilayah Persekutuan. The objective of this study is to investigate the role of waqf hotels in developing the economy of a country, especially in Malaysia.
Research Methodology
The data for this research were obtained through interviews and observation. A total of eight respondents were interviewed. Three were waqf property managers, two were employees of the waqf hotel projects, and the remainder were trustees. The interviews were conducted face-to-face with the informants. The interview questions were semi-structured, allowing some flexibility, balance and better quality data. The authors supplied the questions to the informants before the structured interviews were conducted to prepare the informants. During the interviews, open-ended questions were posed in order to obtain the required information. Responses from informants were recorded using MP3 voice recording (Media Player 3), so that all data and information provided by informants could be stored for use by the author. All data were analysed using the content analysis method.
The researchers also observed the hotels, and especially management, facilities and hospitality. Because these hotels are located on waqf land, they are more careful to maintain the religious image of the hotel. They are therefore shariah-compliant, even though they are not officially certified as such. These are three and four-star hotels and provide facilities complementary with other such hotels. Their specialty is providing halal food and drinks, prayer mats and kiblah direction. The hotel staff also maintain a decent dress code. The hospitality provided throughout a stay there is the same as in other hotels.
Functions of the Development of Economic Community Waqf Hotel Project in Malaysia
From an economic standpoint, the implementation of commercial assets such as land ownership not only takes advantage of donated land that was previously not developed, whether large or small projects capable of generating revenue. Various parties also benefit from development of the donated land, such as the land trustee, the SIRC, companies or traders doing business on waqf land, or the community. The project found that similar hotels contribute to the economy in the following ways.
1. Use of resources; waqf land A waqf hotel project attempts to use resources -in this case donated land -optimally when building. This is because prior to the implementation of this project, most SIRCs, who are the sole trustees for all waqf assets in the respective states, have encountered problems in collecting sufficiently large funds for the joint development of large-scale waqf property, let alone positioning such donated land located in areas of high value such as in the middle of a city or in a tourist area. More funds are needed as the waqf land project is developed and should be comparable with other developments at the donated land site concerned. Thus, capital injection from JAWHAR and YWM enabled the project to seek and develop the potential of waqf properties in strategic locations. The development of waqf property is important, because the value of real estate in Malaysia is increasing; therefore, if donated land is left undeveloped and idle, then it not only does harm in not utilizing the resources of the land, but is even harmful because of the cost of land maintenance. If donated land is abandoned, it is feared that it will be encroached upon by the unscrupulous. Construction of a hotel on waqf land ensures that previously developed and abandoned waqf land can be optimally used to generate revenue. Table 1 shows the use of donated land before and after the hotel waqf project is executed. The development of real estate assets such as buildings will increase the future value of a property, as opposed to abandonment of the waqf land. Furthermore, the strategic location of the land in the city centre or in tourist areas, for example, is also a factor that will increase the land value. Investments in non-financial assets such as these are longer term investments, in contrast to financial assets such as money, which are subject to inflation (Hotel Manager B, 2014).
Generating revenue
Through their ongoing business, waqf hotel projects have generated high revenues for hotel operators and also for the waqf trustees, the SIRCs. Hotel operators generate income from business carried out in the hotel, while waqf trustees gain through the lease or rental of waqf land through the building of the hotel.
One hotel operator generated an income from rental revenue of RM3.96 million in 2012 and RM3.84 million in 2013. According to Manager A, the management is targeting revenue from the rental of rooms at Hotel A of between RM3 to RM 4 million a year; for Hotels B and C, revenue could not be obtained from the companies. However, based on the rental rate per hotel room, the income from average room rentals is shown in Table 2. Estimated income excludes that earned through other rentals such as banquet halls, seminar rooms, meeting rooms and other amenities which are rented out for various events and seminars. Furthermore, according to Hotel Manager A, there was an increase in room reservations and hotel occupancy due to a high demand area within the vicinity, especially during weekends and school holidays. The area is a popular tourist spot. In addition to generating income and creating jobs, Hotel A also benefitted the community when, in 2013, it contributed zakat of RM 10,000 (Hotel Manager A, 2014).
The SIRC waqf trustees have strategically secured the waqf income from hotel operators through longterm lease agreements on the development and operation of hotels on waqf land. Based on the agreement between both parties, after five years of operation there is a certain amount that must be delivered by Hotel A to JAWHAR. There are currently no fees paid by Hotel A to JAWHAR during its initial five years of operation. This is due to the fact that this hotel is managed by a subsidiary of the SIRC, in respect of which a trustee of waqf land is involved. For Hotel B, in contrast, the SIRC leased the donated land to the waqf foundation for Waqf Malaysia (YWM) for RM 20,000 per month. In the case of Hotel C, the SIRC, through the lease of donated land for its construction, has generated revenue of RM 100,000 per year. Under this agreement, waqf land was leased for 20 years by the SIRC to YWM.
Job creation
Waqf hotel projects can create many jobs. In order to achieve a high-star level of service and facilities, a hotel should follow the standards prepared by the Ministry of Tourism Malaysia, including utilising human resources to provide services to customers throughout the property. The higher the star level of performance, the more human resources are needed to provide service. Jobs offered in the hotel industry include hotel managers, clerks, waiters, chefs or cooks, technicians and cleaners. To ensure that waqf property development projects benefit the local community, the latter are given priority in filling job vacancies.
Hotel A provides a total of 60 to 74 types of work, with minimum employee income of RM 900. The hotel management also prioritises local labourers who are Muslims. They are welcome to apply to work there, but at the moment, no job applications have been received from these waqf beneficiaries. The hotel management had also recruited two to three workers with disabilities. However, due to long working hours and the challenges of working with customers, they did not remain in the hotel's employment. Hotel B has approximately 50 to 60 workers with a minimum wage of RM 900. In 2013, waqf Hotel C created 100 to 150 jobs. Employment is expected to increase because the waqf hotel is still in its early stages of operation. In selecting employees, priority is also given to recipients in the local area, according to YWM guidelines.
Other functions of waqf hotel projects
The study also found that earnings from rental or lease of waqf land acquired by the waqf trustee, the SIRCs, are also used to develop other waqf. This is because most of the revenue derived from the rental or lease of waqf land is profitable. This approach also helps waqf trustees, especially SIRCs, to strengthen the institution of waqf. Revenues obtained from the leasing of waqf land are used to develop other donated properties for the state. The earnings of the lease of land ownership by Hotel C are used to fund maintenance of the mosque, which is the beneficiary of the waqf land. Moreover, these earnings are also allocated for the development of other waqf and for emergency purposes.
Conclusion
In general, these three similar hotels projects surveyed have contributed to the economy through the use of waqf land resources, income generation and job creation. Table 3 shows the functions of waqf as a complementary financing project in the economy. Waqf hotel projects have indirectly boosted the government's role in the economy. Land resources have been developed and utilized to build the hotels at strategic locations and at a high value. In addition to generating revenue for the hotel managements and beneficiaries of the waqf, waqf hotel projects also increase employment by opening up job opportunities in hospitality. Moreover, local communities are given priority when filling vacancies in such hotels. This provides good employment opportunities for the locals without having to migrate from their home communities. The waqf trustee can also generate income from these waqf hotel projects which will then be utilized to develop other waqf assets. The economic impact of these waqf hotel projects reveals the success of the Federal Government in assisting and developing waqf properties in Malaysia. Such aid is analogous to seeds being planted, growing into trees and bearing fruit. The seed of these fruit are then used to plant other trees. Such assistance helps waqf trustees, the SIRCs, to generate lucrative income with the aim of developing other waqf assets in each state for the benefit of society. | 2020-10-28T19:13:06.113Z | 2020-06-01T00:00:00.000 | {
"year": 2020,
"sha1": "d38e9b0b327bacbd2bb50004f2b12c2a8f9c18ed",
"oa_license": "CCBYSA",
"oa_url": "https://mjsl.usim.edu.my/index.php/jurnalmjsl/article/download/190/142",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "d78999fb4bce74b5499668b792cb5f7b3db04100",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
7222630 | pes2o/s2orc | v3-fos-license | Reduced-Intensity Allogeneic Stem Cell Transplantation for Co-Emergence of Chemotherapy-Refractory Follicular Lymphoma and Therapy-Related Myelodysplastic Syndrome
A 54-year-old male was diagnosed with follicular lymphoma in September 2003. Despite multiple chemotherapies, including autologous hematopoietic stem cell transplantation (HSCT) with high-dose chemotherapy, the disease eventually relapsed. Additionally, bone marrow analysis revealed the co-emergence of therapy-related myelodysplastic syndrome (t-MDS) in February 2012. In March 2012, we performed related allogeneic HSCT for the treatment of both malignancies. This strategy was successful and the patient has remained free from both malignancies for 23 months. Allogeneic HSCT is a potent curative therapeutic option for both t-MDS and refractory follicular lymphoma.
Introduction
Therapy-related myelodysplastic syndromes and acute myelogenous leukemia (t-MDS/AML) constitute the most serious late-onset complications caused by chemotherapy and/or radiotherapy. Recent advances in anticancer chemotherapy using various classical and novel anticancer agents have led to a higher rate of treatment success and longer survival for patients with cancerous diseases, including hematologic malignancies, so that the number of the so-called cancer survivors has increased during the past decade. This was also accompanied by an increase in the number of patients suffering from t-MDS/AML [1]. Because of chemorefractoriness due to frequent poor-risk cytogenetics and therapy-related organ damage caused by previous use of cytotoxic therapies, the treatment outcome for t-MDS/AML has remained dismal. The median survival for t-MDS/AML has been reported to be 6-8 months when treated by conventional chemotherapy, which is significantly shorter than that for de novo MDS and AML [2]. While allogeneic hematopoietic stem cell transplantation (HSCT) remains the only curative therapeutic strategy for t-MDA/AML, this strategy cannot be used for all patients due to the high frequency of therapy-related mortality [3,4]. In particular, for patients exhibiting co-emergence of relapsed disease and t-MDS/AML, the life expectancy has been unfavorable due to the lack of a promising therapeutic modality [4].
The therapeutic outcome of non-Hodgkin lymphomas (NHLs) has been greatly improved over the past two decades as a result of the development of immunochemotherapy. While these advances in therapeutic strategies have greatly enhanced the cure rate for aggressive NHLs, such as diffuse large B-cell lymphoma, it remains difficult to achieve a cure for indolent lymphomas, such as follicular lymphoma (FL), even though survival has been prolonged. The repeated use of various kinds of immunochemotherapies over a long period has been generally required for patients with FL. The increase in anticancer agents and the longer survival period are associated with a higher occurrence rate of t-MDS/AML in FL [5]. Therefore, current and future topics regarding the treatment of FL include the therapeutic strategy for refractory/relapsed disease complicated by t-MDS/AML.
In this report, we describe the case of a patient with FL who relapsed after a series of treatments with a variety of chemotherapeutic agents. In addition, his condition was eventually complicated by the co-emergence of t-MDS.
Case Report
A 54-year-old male was admitted to hospital in September 2003 with systemic lymphadenopathy, fatigue and night sweating. Computed tomography scans confirmed cervical, axillary, mediastinal, para-aortic, mesenteric and inguinal lymphadenopathies. Laboratory tests showed an elevated soluble interleukin-2 receptor (14,800 U/ml) and elevated WBC (113.3 × 10 9 /l), comprised of 89% of abnormal lymphoid cells. The bone marrow (BM) was also massively invaded by abnormal lymphoid cells. Biopsy of the cervical lymph node disclosed the diagnosis of FL grade 3A. In immunohistochemical examinations, lymphoma cells were positive for CD10, Bcl-2, and CD20, and t(14;18)(q32;q21) translocation was detected by fluorescence in situ hybridization. Eight cycles of R-CHOP consisting of rituximab, cyclophosphamide, adriamycin, vincristine and prednisolone and subsequent maintenance therapy with rituximab for 8 months induced complete remission (CR), but the disease relapsed 8 months after the cessation of rituximab. Salvage chemotherapy consisting of rituximab and fludarabine induced a partial response, but the disease relapsed again after 14 months. For the second salvage therapy, the patient was enrolled in a clinical trial for Shimura et al.: Reduced-Intensity Allogeneic Stem Cell Transplantation for Co-Emergence of Chemotherapy-Refractory Follicular Lymphoma and Therapy-Related Myelodysplastic Syndrome treatment with everolimus, a new inhibitor of mammalian target of rapamycin. The drug induced CR, which sustained for 23 months, but lymphoma eventually relapsed ( fig. 1a). A second biopsy of the cervical lymph node confirmed the diagnosis of relapsed FL without the evidence of transformation into a more aggressive lymphoma. Then, the patient underwent intensified chemotherapy using cyclophosphamide, high-dose Ara-C, dexamethasone, and etoposide (CHASE), radioimmunotherapy with 90 Y ibritumomab tiuxetan and myeloablative chemotherapy with melphalan, cyclophosphamide, etoposide, and dexamethasone (LEED) supported by autologous HSCT. The patient achieved CR, which was confirmed by 18 F-FDG-PET (PET) examination and BM analysis (no invasion and normal karyotype). However, 7 months after the transplantation, PET examination revealed generalized lymphadenopathy and uptake in the bilateral femoral nerve (suspected to be neurolymphomatosis). BM infiltration with abnormal lymphoid cells was also identified.
At this time, 81 months from the initial diagnosis of FL, we planned allogeneic HSCT using reduced-intensity conditioning (RIC). To reduce the tumor burden prior to RIC-HSCT, we administered three cycles of salvage therapy, comprised of rituximab and bendamustine, after which only minor cervical lymphadenopathy was identifiable by PET examination. However, myelosuppression accompanied by grade 4 neutropenia lasted for several weeks after the last dose of bendamustine, and BM analysis revealed hypocellularity with an increase in myeloblasts (5.2% of all nucleated cells) and multilineage dysplastic changes, which was documented in this case for the first time ( fig. 2a). Chromosomal analysis revealed the presence of complex karyotypic abnormalities, including chromosome 7 deletion ( fig. 2b). From these results, the diagnosis of t-MDS, refractory anemia with excess blasts-1, was made [6]. Regarding the MDS International Prognostic Scoring System, the patient was classified as intermediate-2 risk category [7]. Furthermore, the patient's condition was complicated by cytomegalovirus retinitis and required antiviral therapy for 4 weeks.
For simultaneous treatment of both chemotherapy-resistant FL and t-MDS, we performed RIC-HSCT with peripheral hematopoietic stem cells from his HLA-matched sibling (sister) 4 weeks after the diagnosis of t-MDS. At this time, the FL remained in partial remission. The conditioning regimen consisted of fludarabine (25 mg/m 2 × 5 days), melphalan (80 mg/m 2 × 1 day) and 4 Gy of total body irradiation (TBI). Cyclosporine A and short-term methotrexate were utilized as prophylaxis against graft-versus-host disease (GVHD). Neutrophil engraftment was confirmed on day 14 and platelet engraftment on day 22. There was no acute GVHD, and infectious complications, such as febrile neutropenia, Clostridium difficile-associated colitis, and reactivated cytomegalovirus retinitis, were treated successfully with antibiotics and antiviral agents. BM analysis on day 30 showed complete donor chimerism with a normal karyotype and without any myelodysplastic changes. The patient was discharged on day 56 and his condition was later complicated by chronic GVHD of the hepatic type that was alleviated successfully by re-dosing with cyclosporine A. PET examination on day 80 also showed no abnormal uptake ( fig. 1b). Since then, CR of both FL and t-MDS has been maintained for 672 days at the timing of writing.
Discussion
According to the WHO classification, t-MDS is classified as one of the therapy-related myeloid neoplasms (t-MN) that can occur subsequent to exposure to cytotoxic agents or irradiation [6]. The latency period from first exposure to the development of t-MN varies by the dose intensity and the type of cytotoxic agents. For example, alkylating agents or
Shimura et al.: Reduced-Intensity Allogeneic Stem Cell Transplantation for Co-Emergence of Chemotherapy-Refractory Follicular Lymphoma and Therapy-Related
Myelodysplastic Syndrome radiation therapies typically cause t-MDS 5-7 years after the exposure, and often manifest as complex chromosomal abnormalities, including monosomy 5 or 7. Topoisomerase II inhibitors are other leukemogenic agents that are potent inducers of t-AML, with 11q23 or 21q22 abnormalities occurring months to 3 years after the use. However, because most patients receive multiple drugs within the course of several years, it is difficult to determine the drug responsible for leukemogenesis in individuals [1,6].
Fludarabine is a purine analog (antimetabolite) that is highly effective in the treatment of indolent malignant lymphoma or chronic lymphocytic leukemia. Fludarabine impairs DNA repair and induces cytopenia (lymphopenia), which can account for the increase of the incidence of secondary malignancy. Indeed, treatment with fludarabine, especially in combination with another cytotoxic drug, has been reported to increase the risk of t-MN [8]. High-dose chemotherapy supported by autologous HSCT may also contribute to the increased occurrence of t-MN [9]. In our case, various alkylating agents, topoisomerase II inhibitors and fludarabine were administered and high-dose chemotherapy was also administered within a short period. Thus, it is impossible to determine the drug or regimen responsible for the development of t-MDS.
Bendamustine is one of the promising cytotoxic agents among the alkylating agents that has been widely used for the treatment of indolent lymphoma. The drug has both alkylating and antimetabolite properties, and it has demonstrated significant efficacy for patients with indolent lymphoma [10]. Considering its mechanisms of action, bendamustine could be a more powerful inducer for the emergence of t-MN. However, there is no evidence for any relationship between administration of this drug and the development of leukemogenesis. We administered bendamustine 8 months after autologous HSCT, and the possible association between bendamustine and t-MDS could not be refuted in our case.
The therapeutic options for t-MDS/AML have been limited. Recently, the hypomethylating agents 5-azacitidine and decitabine have demonstrated equivalent efficacy for t-MDS and de novo MDS in terms of hematologic responses and have improved the overall survival for t-MDS. However, the overall survival for t-MDS still remains shorter than that for de novo MDS because of the poor preexisting risk factor, i.e., poor performance status, complex karyotypic abnormalities, or chemoresistance [11]. Thus, allogeneic RIC-HSCT is a potential curative treatment option. t-MDS patients who have received more than two chemotherapeutic regimens or have undergone allograft more than 6 months after the diagnosis showed a significantly higher nonrelapse mortality [3,4]. Thus, an earlier decision for the choice of therapies with allogeneic HSCT is necessary, once the diagnosis of t-MDS is made.
FL is the most common indolent NHL of the germinal center B cells. Progress in immunochemotherapy using the anti-CD20 antibody rituximab combined with conventional and/or new cytotoxic agents, including bendamustine or fludarabine, has produced significant improvements in the response rate and survival [12][13][14]. However, most patients with FL eventually relapse and require subsequent therapies. The use of autologous or allogeneic HSCT for FL has not proven to have any significant survival advantage over chemotherapy alone [12]. However, for selected patients, especially those who relapse shortly after immunochemotherapy or those with histologic transformation to a more aggressive lymphoma, high-dose chemotherapy followed by autologous HSCT has prolonged the progression-free survival in cases of chemosensitive relapse [15]. Allogeneic HSCT holds promise for the cure of advanced stage FL, but has been associated with relatively high treatment-related mortality. Therefore, the indication of FL for allogeneic HSCT is further restricted and is generally considered at relapse after autologous HSCT. As a preparative regimen for allogeneic transplantation, RIC has been increasing because of its feasibility for elderly or heavily pretreated patients. Through retrospective analysis, prolonged overall survival and lower treatment-related mortality have been reported with RIC for allogeneic HSCT. Thomson et al. [16] reported the outcome of related or unrelated allogeneic HSCT for FL with a conditioning regimen of fludarabine, melphalan, and alemtuzumab. With a median follow-up of 43 months, nonrelapse mortality and progression-free survival at 4 years were 15 and 76%, respectively.
Combing these, in our case with concurrent t-MDS and refractory FL, allogeneic HSCT was the only curative therapeutic strategy, despite the fact that the clinical outcome of allogeneic transplantation for patients with concurrent MDS and lymphoid malignancy remains quite unfavorable [17]. In our case, the less toxic RIC-HSCT from an HLA-matched sibling donor, the effective disease control of FL by bendamustine just before HSCT, and the short time between the diagnosis of t-MDS and allo-HSCT might all contribute to the success of the treatment.
Since there is no established common conditioning regimen that is effective for both malignant lymphoma and MDS, we used the RIC regimen, consisting of fludarabine, melphalan, and low-dose TBI, in consideration of the superior clinical results from Thomson et al. [16] and Yuji et al. [18]. In the previous reports and our experience, this regimen is most likely to be feasible and safe, even for the elderly or those who have previously received multiple chemotherapies, including high-dose chemotherapy, when efficacy sufficient to eradicate residual disease is warranted. The addition of TBI in the conditioning regimen might contribute to lowering the relapse rate [19].
In summary, t-MN is impairing instead of improving the clinical outcome with the advance of therapies for hematological malignancies. Therefore, the number of patients simultaneously suffering from t-MN and lymphoid malignancy will inevitably increase. Intervention with allogeneic HSCT should be considered as early as possible in such situations. | 2016-05-12T22:15:10.714Z | 2014-03-13T00:00:00.000 | {
"year": 2014,
"sha1": "aa2cd3dcbe89ae81e3f0a00e664a9e2282f79069",
"oa_license": "CCBYNC",
"oa_url": "https://www.karger.com/Article/Pdf/360905",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "aa2cd3dcbe89ae81e3f0a00e664a9e2282f79069",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
14494773 | pes2o/s2orc | v3-fos-license | DNA methylation profile in diffuse type gastric cancer : evidence for hypermethylation of the BRCA 1 promoter region in early-onset gastric carcinogenesis
Diffuse type gastric carcinoma is the most aggressive type of gastric cancer. This type of tumor is not preceded by precancerous changes and is associated with early-onset and hereditary syndromes. To test the hypothesis that DNA methylation profile would be useful for molecular classification of the diffuse type gastric carcinoma, DNA methylation patterns of the CpG Island of 17 genes were studied in 104 cases and 47 normal adjacent gastric mucosa by Methylation-specific PCR, Immunohistochemistry and Hierarchical clustering analysis. The most frequent methylated genes were FHIT, E-cadherin, BRCA1 and APC (>50%), followed by p14, p16, p15, p73, MGMT and SEMA3B (20-49%). Hierarchical clustering analysis reveals four groups with different clinical features. The first was characterized by hypermethylation of BRCA1 and younger age (<45 years old), and the second by hypermethylation of p14 and p16 genes, male predominance and Epstein-Barr virus infection. The third group was characterized by hypermethylation of FHIT and antrum located tumors and the fourth was not associated with any clinical variables. In normal adjacent mucosa only the p73 gene was significantly less methylated in comparison to tumor mucosa. DNA methylation identified subgroups of diffuse type gastric cancer. Hypermethylation of BRCA1 associated with young age suggests a role in early-onset gastric carcinoma. Key terms: gastric cancer, diffuse type, early-onset, BRCA-1 Corresponding Author: Dr. Alejandro Corvalan, Laboratorio Patología Molecular y Epidemiología, Centro Investigaciones Médicas, Pontificia Universidad Católica de Chile, Marcoleta 391 Santiago 8330074 Chile, Phone: 56(2) 3548289, E-mail:
INTRODUCTION
Gastric cancer is the fourth most common cancer and the second leading cause of cancerrelated death worldwide (Pisani et al., 2002).Although its incidence and mortality have fallen dramatically, gastric cancer remains a worldwide public health problem (Pisani et al., 2002;Rastogi et al., 2004).According to Lauren's classification, there are two major histological types of gastric cancer, intestinal and diffuse (Lauren 1965).The diffuse type is the most aggressive form of gastric cancer and the mortality rate is increasing in spite of the decline of the intestinal type (Henson et al., 2004;Crew and Neugut 2006).In addition, the diffuse type is not preceded by sequential stages of precancerous changes, and tends to arise "de novo" (Vauhkonen et al., 2006).Furthermore, early-onset gastric cancer and hereditary diffuse gastric cancer (HDGC) are both associated with diffuse type histology (Dunbier and Guilford 2001;Milne et al., 2007).
Particular alterations at the genetic and epigenetic levels in oncogenes and tumorsuppressor genes have been associated with the multistage process of gastric cancer (Tahara 2004;Yasui et al., 2005).In the diffuse type, the best characterized genetic alteration is the loss of heterozygosity (LOH) of chromosome 17p and mutations of p53 and E-cadherin genes (Tamura 2006).In addition, amplification of K-sam and c-met, LOH at 1p and reduced p27 and nm23 expression have been associated with advanced stage disease and low survival rates (Yasui et al., 2005;Vauhkonen et al., 2006).In spite of these findings, no consistent gene alterations have been detected in diffuse type gastric cancer.
The identification and characterization of genes selectively hypermethylated in cancer may improve our understanding of gastric carcinoma (Esteller 2003).Several reports have shown frequent hypermethylation of tumor suppressor genes in the intestinal type of gastric cancer (Tamura 2006;Vauhkonen et al., 2006).However, besides hypermetylation of the promoter region of E-cadherin gene (Graziano et al., 2004), and more recently, PGP9.5 (Yamashita et al., 2006), no consistent information of the role of epigenetics in the diffuse type is currently available.In this study we used a candidate gene approach of 17 genes, covering all cellular pathways, to test the hypothesis that a hypermethylation profile would be useful for molecular classification of diffuse type gastric cancer.We also assessed the role of specific genes as molecular markers for the "de novo" precancerous changes in normal adjacent mucosa of paired tumor samples.
Clinical Samples
We studied 104 formalin-fixed paraffinembedded archival specimens of diffuse type gastric cancer.All cases were selected using the histological criteria according to the Lauren's classification (Lauren 1965).In 47 of these cases, normal gastric mucosa adjacent to the tumor was also available.Clinical characteristics of these cases are shown in Table 1.In this series, 66 (63.5%) cases were males with an age average of 58 years old and 17 (16.3%)patients were under 45 years of age.Forty-three (41.3%) tumors were located in cardia and 27 (26%) in the antrum.In 3 cases (2.9%) the location was not consigned.Fourteen (13.5%) cases were at early stage.Seventy-one (68.3%) of these cases were lymph node positive and 37 (35.6%) had signet-ring cell features.In this series, 30 (28.8%) cases were positive for EBV infection and had been reported previously (Corvalan et al., 2001).The date of the last follow-up and status (alive or dead) was available in 100 cases.The Institutional Review Boards of the Pontificia Universidad Católica de Chile and Hospital Clínico San Borja Arriarán, Santiago Chile approved this study.
DNA extraction
Five 15 μm paraffin sections of representative areas of diffuse type gastric carcinoma (>70% tumor cells) were cut and placed into a 0.5 mL tube for DNA extraction.DNA extraction was performed in a 100 μL extraction solution (1 M Tris pH 8.0, 50 mM EDTA, 0.5% and Tween 20) with 1 mg/mL Proteinase K (Sigma) for 12 hrs at 55 o C. Proteinase K was inactivated by boiling at 100 o C for 10 minutes and DNA was purified by phenolchloroform extraction and ethanol precipitation according to standard protocols.DNA concentration was determined by spectroscopy using 1 OD260 for 50 μg/ml.
DNA Methylation Assays
DNA was treated with sodium bisulphite as described previously (Riquelme et al., 2007).Briefly, 1 μg of genomic DNA was denatured by incubation with 0.2 M NaOH for 10 min at 37°C.Aliquots of 10mM hydroquinone (30 ul; Sigma Chemical Co., St. Louis, MO) and 3 M sodium bisulphite (pH 5.0; 520 ul; Sigma Chemical Co.) were added, and the solution was incubated at 50 °C for 16 h.Treated DNA was purified by use of the Wizard DNA Purification System (Promega Corp., Madison, WI), desulfonated with 0.3 M NaOH, precipitated with ethanol, and resuspended in water.Modified DNA was stored at -80° C until used.The methylation status of 17 genes (APC, BRCA1, DAPK, ER, E-cadherin, FHIT, GSTP, hMLH1, MGMT, p14, p15, p16, p73, RARb, SEMA3B, SOCS, TIMP3) was determined by methylation-specific polymerase chain reaction (MS-PCR) (Herman and Baylin 2003) and details on primer sequences and PCR conditions are available upon request.These genes were chosen because they are known to be tumor suppressor genes, methylation at these CpG sites is associated with gene silencing, they cover essential alterations in cell physiology that collectively dictate malignant growth and they had previously been described as undergoing hypermethylation in other tumor types (Hanahan and Weinberg 2000;Virmani et al., 2000;Chan et al., 2002;Li et al., 2002;Oka et al., 2002;Sato et al., 2002;Kuroki et al., 2003;Miyamoto et al., 2003;Wild et al., 2003;Sarbia et al., 2004;Takahashi et al., 2004;Xu et al., 2004;Kim et al., 2005;Kim et al., 2005;Liu et al., 2005;Takahashi et al., 2005;Kawaguchi et al., 2006;Mori et al., 2006;Tamura 2006;Riquelme et al., 2007).Gene names, gene location and function for each gene selected in this study is summarized in Table 2.Only cases with positive unmethylated bands were considered informative for methylation status in this study.All reactions were done in triplicate.In vitro methylated Sss1 methyltransferase (New England Biolabs) and bisulfite-modified DNA from the MKN-45 cell line were used as a positive control.Samples without DNA template (water only) were included as negative control for each set of PCR reaction.
Protein expression assays
To establish the association between CpG island hypermethylation and gene silencing we determined protein expression by immunohistochemistry assay on tissue microarray (TMA) slides.Tissue microarrays were performed using a Manual Tissue Arrayer II instrument (Beecher Instruments, Silver Spring, Maryland, USA).Archival tumor tissue blocks from 104 tumors were selected, cut and stained with hematoxylin and eosin for the best tumor area identification.After whole-section glass slide evaluation, tumor area was selected for placement into the TMA by a circling on the glass slide and identified in the corresponding paraffin block.Six hundred μm stylets in inner diameter were used to take three cylindrical core biopsies from each tumor tissue block (donor block), with subsequent arraying into a new recipient paraffin block.In this way, all 104 cases were held in the 3 recipient blocks.An adequate case was defined as a tumor occupying more than 10% of the core area.Immunohistochemistry was performed on 4-μm-thick section TMA blocks.Sections were dewaxed in xylene, rehydrated through graded alcohol, and placed in an endogenous peroxide block for 15 minutes.Antigen retrieval was performed in a citrate buffer (10% citrate buffer stock in distilled water, pH 6.0) and microwaved for 10 minutes.Non-reactive staining was blocked by 1% horse serum in Tris-buffered saline, pH6.0 for 3 minutes.After primary incubation, antibody binding was detected using two-stage visualization systems based on an enzyme-conjugated polymer backbone carrying secondary antibody staining from 10% or more tumor cells was considered positive for expression (Fig. 2).
Data Analysis
In order to identify clinically relevant groups based on to the DNA methylation pattern, we performed hierarchical clustering analysis in a similar fashion to cDNA expression microarrays in breast tumors and lymphomas (Alizadeh et al., 2000;van 't Veer et al., 2002) or more recently, DNA methylation signature in neuroblastoma (10 genes) or hepatocellular carcinoma (18 genes) (Alaminos et al., 2004;Nishida et al., 2007).TIGR MultiExperiment Viewer was applied to the DNA methylation dataset using unsupervised hierarchical clustering analysis with Pearson correlation and complete linkage to cluster the tumors.In addition, we analyzed this data using the Methylation index, defined as the number of methylated genes, divided by the total number of genes analyzed, as a way to compare the methylation status of each cluster.Clinical variables and follow-up data of cases from each particular cluster were compared using c2 tests.Survival analysis was performed using the Kaplan-Meier method and differences among groups were compared with the Log-rank testing and Cox regression models (Stata 8.0, Stata Corporation; College Station, TX).For all tests, probability values of p<0.05 were regarded as statistically significant.
Correlations between DNA methylation and loss of protein expression
To establish the association between DNA methylation and gene silencing, we determined protein expression of three genes that were methylated in a relatively large number of cases (E-cadherin, p16 and p73).This analysis was performed in all 104 tested cases.For all three genes, the presence of CpG island methylation was associated with loss of protein expression.Conversely, in those cases harbouring an unmethylated CpG island, each gene was expressed.Representative examples are shown in Fig. 2.
DNA methylation patterns and clinical variables
To explore the relationship among DNA methylation patterns and clinical variables, we performed unsupervised hierarchical clustering analysis.As shown in Fig. 3, clustering analysis revealed two different clusters, both further subdivided into two major branches.The most upper branch was characterized by hypermethylation of BRCA1 gene (p=0.0006).The second branch was characterized by hypermethylation of p14 and p16 (p<0.0001).The upper branch of the second major cluster was characterized by hypermethylation of FHIT (p<0.0001) and the lower branch was not associated with hypermethylation of any specific gene.Accordingly, methylation index analysis revealed that this cluster was significantly less methylated in comparison to all other clusters (p<0.0001)(datanot shown).Subsequently, clinical variables of cases from each cluster were compared and are shown in Table 1.The most upper branch (BRCA1 cluster) was associated with patients of younger age (age <45 y.o.) (p=0.009).The second branch (p14/p16 cluster) was associated with male predominance (p=0.002) and Epstein-Barr virus infection (p=0.008).The upper branch of the second major cluster (FHIT cluster) was associated with antrum as the predominant location (p=0.02).Finally, the lower branch of the second major group (low methylation index cluster) was not related to any specific clinical variables.
Survival Analysis
The impact in prognosis of clusters and individual genes was examined by survival analysis using Kaplan-Meier method, Logrank testing, and Cox regression models.The average follow-up period was 64 months (Standard deviation = 48, range 1-146).Univariate analysis, using all four branches or individual genes, demonstrated that no clusters or single gene was significantly correlated with worse prognosis.Multivariate Cox regression analysis showed that only the lymph node metastasis was prognostic determinant (HR=3.06,95% CI=1.07-8.7;p=0.03)(data not shown).
DNA methylation patterns in normal adjacent mucosa from diffuse type gastric cancer
The frequency of DNA methylation patterns of the CpG Island of 8 genes was studied in a subgroup of 47 paired tumor and normal adjacent mucosa.The most representative genes of each cluster (BRCA1, p14/p16 and FHIT) and 4 non-cluster related genes (APC, MGMT, p15 and p73 genes) were included in this analysis.Case-informative incidence ranged from 8 to 39 samples.As shown in Fig 4A, only the p73 gene was significant less methylated in normal adjacent mucosa in comparison to tumor mucosa (p=0.006).Representative examples are shown in Fig. 4B.
DISCUSSION
In order to define the DNA methylation signature in diffuse type gastric carcinoma, the most aggressive and increasing form of gastric cancer, we used the candidate gene approach by searching the hypermethylation profile of CpG Island of 17 genes.To identify associations between genes and clinical variables, we performed hierarchical clustering analysis in a similar fashion to cDNA expression microarrays in breast tumors and lymphomas (Alizadeh et al., 2000;van 't Veer et al., 2002) or DNA methylation signature in Neuroblastoma (Alaminos et al., 2004) or Hepatocellular carcinoma (Nishida et al., 2007).Here, we found that DNA methylation is a frequent event in diffuse type gastric cancer and clustering analysis reveals different branches associated with hypermethylation of specific genes.Interestingly, these branches were associated with distinct clinical variables.For example, BRCA1 was frequently more methylated in a group of tumors associated with young age (<45 years old).Tumors at this age or less have been considered early-onset gastric cancer (Milne et al., 2007) and although they represent less than 10% of gastric carcinoma, they have unique clinicopathological features including diffuse type histology (Milne et al., 2007).Although early onset also has unique molecular features (lack of microsatellite instability, infrequent LOH, low COX2 expression, infrequent loss of TFF1 expression, no loss of RUNX3, gains at chromosomes 17q, 19q and 20q) (Milne et al., 2007), DNA methylation has not been extensively explored in this type of gastric cancer.Only Kim et al (Kim et al., 2005) assayed the hypermethylation status in genes associated with the APC-beta-catenin axis and the mismatch repair system (hMLH1, TIMP3, THBS1, DAPK, GSTP1, APC, and MINT2) and found that hypermethylation is a frequent phenomenon in early-onset gastric carcinoma.However, no specific genes were hypermethylated.Thus, to our knowledge this is the first report that has identified hypermethylation of CpG island of the BRCA1 gene in association with early-onset gastric carcinoma.Interestingly, Varis et al (Varis et al., 2003) identified increases of DNA copy number at chromosome 17q, the location of the BRCA1 gene, in 52% of 22 cases of earlyonset gastric cancer and Semba et al (Semba et al., 1998) described LOH on chromosome 17q12-21 with several neighbouring markers in this region, while no mutation was found in the BRCA1 gene.In addition, although associated with hereditary and not early-onset gastric cancer, studies exploring additional tumors on relatives of BRCA1 carriers identified gastric cancer as one of the most common sites for malignancies (Gallardo et al., 2006).Taken together, these findings suggest that alterations of the BRCA1 gene should be included as one of the molecular features of early-onset gastric carcinoma.
An association of hypermethylation of p14 and p16 and the presence of Epstein-Barr virus infection were characteristic of the lower cluster of the upper branch in the clustering analysis.This association has been described previously (Koriyama et al., 2004).However, our male gender association is contrary to previous studies for p16 methylation (Vauhkonen et al., 2006).Hypermethylation of FHIT has been associated with antrum as a predominant location.Interestingly, recent data showed that FHIT knock-out mice develop tumors in the forestomach and small intestines (Fujishita et al., 2004).These findings suggest that FHIT plays an important role in the integrity of gastrointestinal mucosal structures.Chang et al (Chang et al., 2002) described frequent LOH at the FHIT locus and loss of Fhit protein expression in a series of 7 signet-ring cell gastric cancer.Although these authors have proposed that alteration of the FHIT gene might be the hallmark of signet-ring cell gastric cancer, we did not confirm this.
The finding that seven out of 8 tested genes (with the only exception being the p73 gene) were hypermethylated in a similar frequency in normal adjacent mucosa in comparison to tumor mucosa, suggests the presence of an epigenetic field for cancerization (Ushijima 2007).Epigenetic field for cancerization has been demonstrated for Barrett's esophagus, liver, lung and urothelial cancers (Eads et al., 2000;Wistuba et al., 2002).Since changes in DNA methylation status are specific for each tumor, it is likely that specific genes are methylated according to unique carcinogenic factors (Ushijima 2007).The p73 gene was the only one not hypermethylated in normal adjacent mucosa in comparison to tumor mucosa.Diffuse type gastric cancer does not have sequential stages of precancerous changes, as does intestinal-type gastric cancer, and consequently is considered to arise "de novo" (Vauhkonen et al., 2006).Thus, this finding suggests that hypermethylation of p73 might play an important role in the early stages of diffuse type gastric carcinoma.Extremely low levels of p73 expression has been observed in gastric cancer cell lines, although reports have shown that mutations of p73 are rare in primary human cancers (Pilozzi et al., 2003).These findings suggest that p73 could be a target of epigenetic regulation in gastric carcinogenesis.
In summary, we found that in DNA, methylation is a frequent event in diffuse type gastric cancer.Clustering analysis reveals specific association between genes and clinical variables, in particular BRCA1 to early-onset gastric carcinoma.The finding that the p73 gene was significantly less methylated in normal adjacent mucosa suggests that it may play a role in the early stages of diffuse type gastric carcinoma.
Fig. 1 :
Fig. 1: Methylation-specific polymerase chain reaction (MS-PCR) analysis of 104 diffuse-type gastric carcinomas.(A) Histogram representing the percentage of tumors showing methylation for the 17 genes as indicated.B) Representative results.M PCR product with primers specific for methylated DNA, U PCR product with primers for unmethylated DNA; CM positive control for methylated DNA; CU positive control for unmethylated DNA.
Fig. 2
Fig. 2 Lack of protein expression at low (A, C and E) and high (B, D and F) magnification of Ecadherin (A-B), p16 (C-D) and p73 (E-F) on tissue microarray of diffuse-type gastric cancer.Positive controls for each antibody are shown in the corresponding inserts.
Fig. 3 :
Fig. 3: Unsupervised Hierarchical Clustering analysis of 104 diffuse-type gastric cancer showing two different clusters, both further subdivided into two major groups.Each row represents a tumor and each column a single gene.Red indicates positive, black negative and gray not determined.
Fig. 4 :
Fig. 4: Methylation-specific polymerase chain reaction (MS-PCR) analysis of 47 paired tumor and non-tumor adjacent mucosa of diffuse-type gastric cancer.(A) Histogram representing the percentage of cases showing methylation for the most representative genes as indicated.Black indicates tumor mucosa and white indicates normal adjacent mucosa.B) Representative results.M PCR product with primers specific for methylated DNA, U PCR product with primers for unmethylated DNA; CM positive control for methylated DNA; CU positive control for unmethylated DNA.
TABLE 1
Clinical associations of clusters of gene methylation in diffuse type gastric cancer a p14/p16 cluster vs all others.b BRCA1 cluster vs all others.c FHIT cluster vs all others.d p14/p16 cluster vs all others.
TABLE 2
Summary data of genes tested for aberrant promoter hypermethylation in Diffuse-type gastric cancer | 2016-10-14T01:18:46.145Z | 2008-01-01T00:00:00.000 | {
"year": 2008,
"sha1": "4cd38cea1419d1e141f8fd3d620ee7a8d44339c8",
"oa_license": "CCBY",
"oa_url": "http://www.scielo.cl/pdf/bres/v41n3/art07.pdf",
"oa_status": "GOLD",
"pdf_src": "Crawler",
"pdf_hash": "4cd38cea1419d1e141f8fd3d620ee7a8d44339c8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
17633610 | pes2o/s2orc | v3-fos-license | Vascular Health and Risk Management Dovepress the Cost of Inpatient Death Associated with Acute Coronary Syndrome
you hereby accept the Terms. Non-commercial uses of the work are permitted without any further permission from Dove Medical Press Limited, provided the work is properly attributed. For permission for commercial use of this work, please see paragraphs 4.2 and 5 of our Terms Background: No studies have addressed the cost of inpatient mortality during an acute coronary syndrome (ACS) admission. Objective: Compare ACS-related length of stay (LOS), total admission cost, and total admission cost by day of discharge/death for patients who died during an inpatient admission with a matched cohort discharged alive following an ACS-related inpatient stay. Methods: Medical and pharmacy claims (2009–2012) were used to identify admissions with a primary diagnosis of ACS from patients with at least 6 months of continuous enrollment prior to an ACS admission. Patients who died during their ACS admission (deceased cohort) were matched (one-to-one) to those who survived (survived cohort) on age, sex, year of admission, Chronic Condition Index score, and prior revascularization. Mean LOS, total admission cost, and total admission cost by the day of discharge/death for the deceased cohort were compared with the survived cohort. A generalized linear model with log transformation was used to estimate the differences in the total expected incremental cost of an ACS admission and by the day of discharge/death between cohorts. A negative binomial model was used to estimate differences in the LOS between the two cohorts. Costs were inflated to 2013 dollars. Results: A total of 1,320 ACS claims from patients who died (n=1,320) were identified and matched to 1,319 claims from the survived patients (n=1,319). The majority were men (68%) and mean age was 56.7±6.4 years. The LOS per claim for the deceased cohort was 47% higher (adjusted incidence rate ratio: 1.47, 95% confidence interval: 1.37–1.57) compared with claims from the survived cohort. Compared with the survived cohort, the adjusted mean incremental total cost of ACS admission claims from the deceased cohort was US$43,107±US$3,927 (95% confidence interval: US$35,411–US$50,803) higher. Conclusion: Despite decreasing ACS hospitalizations, the economic burden of inpatient death remains high.
Introduction
Acute coronary syndrome (ACS) is an umbrella term that encompasses patients with coronary heart disease (CHD) who present with either unstable angina (UA) or an acute myocardial infarction (MI) consisting of ST-segment elevation myocardial infarction (STEMI) or non-ST-segment elevation myocardial infarction (NSTEMI). 1,2 ACS begins with the rupture of an unstable plaque within the coronary artery, with subsequent development of associated intravascular thrombus and potential for ischemic myocardial injury, resulting in significant morbidity and mortality. Based on data from 2009, the American Heart Association reported approximately 1.2 million hospital discharges with a diagnosis of ACS. 3 Moreover, in 2015, it is estimated that 635,000 Americans will
14
Page ii et al experience a new coronary attack (defined as the first hospitalized MI or CHD death) and approximately 300,000 will have a recurrent event. 4,5 In terms of death, CHD was associated with one in every seven Americans in 2011, accounting for approximately 375,295 deaths, with approximately 27% of these deaths occurring in the hospital setting. 4,5 The cost of hospitalization for ACS is expensive and continues to rise. In terms of direct medical expenditures, ACS costs Americans more than US$150 billion annually, with approximately 60%-75% of these costs related to hospital admission and readmission. [6][7][8] Several studies have been published evaluating the direct and indirect costs for patients who have been admitted for ACS. [6][7][8][9][10] However, the majority of these studies have only gauged the cost of the index hospitalization or costs at 30 days or 1 year following the index admission (eg, patients who have survived). None of these studies have directly calculated the cost of inpatient mortality for an ACS admission. The cost of inpatient death is an important consideration, as many of the large therapeutic ACS studies used to develop evidenced-based guidelines and health-system quality core measures take into account the outcome of cardiovascular mortality, which includes in-hospital death. [11][12][13][14][15][16][17] While hospitalization and in-hospital mortality for ACS continue to decline due to implementation of medical and pharmacological interventions, the economic burden of inpatient mortality could remain high. [1][2] To fill this gap in the literature regarding the cost of in-patient death due to ACS, we compared the length of stay (LOS), total admission cost, and total admission cost by day for patients who died during an ACS-related inpatient admission with a matched cohort of those who were discharged alive following an ACS-related inpatient stay.
Methods
This study was a retrospective between-group comparison of inpatient admissions with a primary diagnosis for ACS in which patients were discharged alive (survived) or died during the hospitalization. As all patient data were de-identified, this study was reviewed and determined to be institutionally exempt.
Data source
The data for the study were obtained from the Truven Health MarketScan dataset. This data set includes medical, pharmacy, and enrollment claims from 100 employers nationwide, representing 40 million commercially insured patient lives.
Cohort identification
The study period was from January 1, 2009 to December 31, 2012. The unit of analysis for the study was inpatient admission claims, rather than patients. Inpatient hospitalizations with ACS in the primary diagnosis field (henceforth referred to as the index admission date) using International Classification of Diseases -9th Revision, Clinical Modification (ICD 9) codes for STEMI, NSTEMI, and UA (Table S1) during the study period were initially identified. From this group, the sample was narrowed further to admissions for patients who had a minimum of 6 months of continuous enrollment prior to their index ACS admission. The study cohorts consisted of two groups of admissions: those for which the patients died during the ACS admission and those for which the patients survived and were discharged alive following the hospitalization. Deaths were classified based on the Truven Health MarketScan discharge status. Admissions with a discharge status of "death" (the deceased cohort) were matched to remaining admissions (the survived cohort) on age categories (under 40, 40-44, 45-49, 50-54, 55-59, 60-64, or $65 years), sex, year of admission (2009, 2010, 2011, and 2012), CCI score (0, 1, 2, 3, 4, 5, or $6), and the presence of any revascularization -percutaneous coronary intervention (PCI) or coronary artery bypass graft (CABG) -in the 6 month period prior to the index admission date to ensure statistically comparable samples of the deceased and survived cohorts. A one-to-one matching was performed on the mentioned variables. Only one claim from the deceased cohort was not matched to an equivalent claim from the survived cohort; thus, the final sample consisted of two cohorts: 1,320 admissions for the deceased cohort, and 1,319 matched admissions for the survived cohort. This resulted in 1,320 patients in the deceased cohort and 1,319 patients in the survived cohort.
Measures length of stay
The LOS for an ACS hospitalization for the survived and deceased cohorts was determined.
cost of hospitalization
The total cost associated with an ACS hospitalization was determined, which included all costs incurred during the admission by claim. The total cost was also determined based on diagnosis of STEMI and NSTEMI.
cost of hospitalization by day of discharge/death
The total cost of an ACS hospitalization for the survived and deceased cohorts, which included all costs incurred by claim during the admission, was stratified by the day of discharge/death. Stratification was done from 2 to 6 or more hospital days.
15
The cost of acs inpatient death analysis Both descriptive and adjusted analyses were conducted between deceased and survived cohorts.
The unadjusted analyses compared means for all specified outcome variables. Depending on the type of outcome variable, different statistical models were employed for the adjusted regression analyses to estimate the incremental effect of ACS-related inpatient mortality. A generalized linear model with log transformation was used to estimate the additional cost (the incremental effect of ACS-related inpatient mortality) for the deceased compared with the survived cohort. The same model was also used to estimate the mean incremental cost of ACS-related hospitalization for each day of hospitalization. A negative binominal regression model was used to estimate the differences in the expected ACS LOS between the two cohorts. The explanatory variables for all models included at baseline (in the 6 month period prior to the index admission date): sex; CCI score; age categories (45-49 years, 50-54 years, 55-59 years, and 60-64 years); region of the United States; type of insurance coverage (health maintenance organization, point of service, preferred provider organization), presence of revascularization procedures (PCI or CABG); and industry employment (manufacturing, transportation, services; see Table S2 for revascularization codes). All costs were inflated to 2013 dollar values.
study population
From January 1, 2009 to December 31, 2012, a total of 99,924 claims (n=97,746) for an ACS admission were identified, of which 1,320 claims for deceased patients (n=1,320) were matched to claims for patients who survived their ACS admission (n=1,319; Figure 1). Table 1 shows that the baseline characteristics of age groups, sex, year of admission, CCI score, and prior revascularization were well-matched between deceased and survived cohorts. The majority of admissions were for male patients (68%) with a mean age of 57 years residing within the southern United States (41%-45%). The type of medical coverage varied between cohorts, with the majority of admissions for patients having point of service plans (60%-63%). Between 58% and 60% of admissions were for patients who worked within the oil and gas, mining, retail trade, finance, insurance, real estate, construction, wholesale, agriculture, forestry, or fishing industry.
Six months prior to hospitalization, the number of admissions for patients who had experienced a previous ACS event varied between groups. Of admissions in the deceased cohort (n=112), 8% had a previous diagnosis of STEMI, whereas 14% of admissions in the survived cohort (n=180) were for patients who had a history of an NSTEMI (Table 1). Revascularization procedures and prior all-cause hospitalization during this time period were low for both groups: 6%-7% (P=0.996) had a PCI, 1% CABG (P=0.853), and 24%-26% a prior hospitalization (P=0.210). Compared with the survived cohort, a greater number of patients in the deceased cohort had a history of cardiac arrhythmias (8% vs 14%, P,0.001, respectively) and heart failure (HF) (9% vs 12%, P=0.012, respectively).
During hospitalization, the majority of admissions for patients in the deceased cohort carried a diagnosis of Although no difference existed in the number of CABG procedures during the admission between groups, 57% in the survived cohort had a PCI compared with 44% in the deceased cohort (P,0.001). However, when stratifying based on the diagnosis of STEMI and NSTEMI, the majority of patients with STEMI received PCI in both the deceased (57%) and survived cohorts (84%) compared to patients with NSTEMI (22% vs 46%, P,0.001, respectively; Figure 2). Patients in the NSTEMI cohort had a slightly higher percentage of CABG (13%) compared to patients with STEMI (7%-9%). The incidence of both cardiac arrhythmias (69% vs 22%, P,0.001) and HF (36% vs 15%, P,0.001) was significantly higher during hospitalization for both the deceased and survived cohorts, respectively. In the deceased cohort, stroke increased from 3% prior to admission to 13% during hospitalization.
Unadjusted admission costs and total length of stay
The unadjusted total mean admission cost was US$82,965±US$138,104 for the deceased cohort, com-
17
The cost of acs inpatient death pared with US$40,568±US$53,415 for the survived cohort (P,0.001; Figure 3A). The mean LOS was also significantly longer for those who died (7.5±9.9 days) compared with those who were discharged alive (5.2±4.9 days; P,0.001; Figure 3B). When evaluating in-hospital death or discharge by day, the unadjusted mean cost for those who died climbed from US$32,285±US$29,732 on day 1 to US$60,817±US$75,393 by day 4 and remained relatively constant through day 6 (US$65,613±US$49,397; Figure 4). A similar trend was seen in those who survived. The mean unadjusted cost at day 1 was US$14,779±US$11,690, which climbed to US$31,914±US$20,116 by day 5 and then increased to US$50,554±US$40,885 on day 6. However, on all days of discharge/death, the unadjusted mean cost of hospitalization was consistently higher on each day for the deceased cohort compared with the survived cohort and was statistically significant from days 1 to 5 (P,0.001; Figure 4).
When stratifying based on the diagnosis of STEMI and NSTEMI, mean costs of admission were higher for patients with STEMI compared to NSTEMI particularly for with the deceased cohort ( Figure 5). Compared to the survived cohort (n=434) with a diagnosis of STEMI, those in the deceased cohort (n=841) had a 1.8-fold higher mean admission cost (US$48,229±US$74,110 vs US$87,392±US$148,838, P,0.001). Similarly, findings were seen in those with NSTEMI, in which compared to those who survived (n=812), those who died (n=475) during their admission had a 1.9-fold higher mean admission cost (US$38,623±US$39,665 vs US$75,633±US$117,020, P,0.001). (Table 2).
Discussion
To our knowledge, our study is the first to evaluate the economic ramifications of inpatient death associated with ACS within a commercially insured population. Although many economic analyses have evaluated the direct and indirect costs of hospitalization associated with ACS, particularly in a Medicare population, these studies have not determined an estimate of the claim cost associated with inpatient death S
18
Page ii et al when ACS was a primary diagnosis. [6][7][8][9][10] We found that those with a primary diagnosis of ACS who died were more likely to have a STEMI and carried a high comorbidity burden, especially HF and cardiac arrhythmias, when compared with those who survived their ACS hospitalizations. Additionally, expected LOS for admissions during which patients died was 47% higher than those admissions during which patients survived. Compared with admissions during which the patient survived, the adjusted mean incremental total cost of an ACS admission was US$43,107±US$3,927 higher in claims for deceased patients. The adjusted mean incremental total cost of an ACS admission continued to climb for each additional hospital day, increasing approximately twofold by day 6. When taking into account ACS diagnosis, the mean cost of admission for those who died during their admission was approximately twofold higher compared to those who survived for both STEMI and NSTEMI.
In terms of characteristics of patients who died, our data are consistent with the literature, as approximately onethird of patients with STEMI die within 24 hours of onset of ischemia compared with only 15% of patients with UA/ NSTEMI who either die or experience a reinfarction within 30 days of hospitalization. 18 Cardiac arrhythmias, especially atrial fibrillation (AF), and HF are both common yet deadly comorbidities associated with ACS. In an analysis of the Global Registry of Acute Coronary Events registry, Steg et al found that the presence of HF in patients with ACS increased hospitalization by 2 days compared with those without HF (9 vs 7 days, P,0.0001, for STEMI; 8 vs 6 days, P,0.0001, for NSTEMI; and 5 days for both, P=0.317, for UA, respectively). 19 Furthermore, HF on admission was associated with a fourfold increase in crude hospital mortality rates (12.0% vs 2.9%; odds ratio: 4.6; 95% CI: 3.85-5.40). This increase in mortality was seen regardless of an ACS subset. For AF, Jabre et al 20 found in a meta-analysis of 43 studies involving 278,854 patients with MI that AF was associated with at least a 40% increase in mortality compared with that in control patients in normal sinus rhythm. 19 This finding persisted regardless of the timing of AF development. 20 Additionally, we found that patients in the deceased cohort had a significantly lower rate of PCI when compared to those who survived (44% vs 57%, P,0.001). However, this finding might be expected as patients within the deceased cohort had a statistically higher rate of comorbid conditions such as cardiac dysrhythmias (P,0.001), HF (P,0.001), and chronic kidney disease (P=0.023), which could preclude them from being an eligible candidate for revascularization in lieu of conservative medical management. 1,2 When stratified based on the diagnosis of STEMI and NSTEMI, our data are consistent with the 2011 Acute Coronary Treatment and Intervention Outcomes Network -Get with the Guidelines registry, which consisted of 119,967 patients 18 years or older who had been admitted with a diagnosis of STEMI or NSTEMI. 21 Based on patients eligible for revascularization,
19
The cost of acs inpatient death 87.9% of patients with STEMI received PCI during admission compared to 49% with NSTEMI in the registry. Within our analysis, 84% of patients with STEMI and 46% of patients with NSTEMI in the survived cohort received PCI during their ACS admission. Finally, the mean LOS was significantly longer for those who died (7.5±9.9 days) compared with those who were discharged alive (5.2±4.9, P,0.001). In terms of LOS, our survived cohort findings are consistent with national data. In a recent analysis of Medicare data for all fee-for-service patients 65 years or older with a diagnosis of ACS, Krumholz et al estimated that the length of hospitalization for STEMI/NSTEMI to have decreased from 6.5 days in 1999 to 5.3 days in 2011. 22 Although several analyses have suggested a trend toward a reduction in ACS hospitalization and in-hospital mortality, having an estimate of the direct cost of inpatient death is an important consideration at many levels. [22][23][24] First, several of the clinical trials and analyses evaluating lifesaving pharmacotherapies and medical interventions in ACS have used inpatient mortality as a primary end point. Therefore, having a projected cost of in-patient mortality in patients with ACS, provides an estimate to gauge such benefits. For example, in the Gruppo Italiano per lo Studio della Steptochinasi nell'Infarto Micardico trial, streptokinase reduced in-hospital mortality by 18% when compared with standard of care (P=0.0002) and by 51% when administered within less than 1 hour of chest pain (P=0.0001). 25 Furthermore, many of the clinical trials of pharmacotherapeutic agents that are used acutely within the hospital setting and continued at discharge have used cardiovascular mortality as a composite of their primary outcome end points. In the Platelet Inhibition and Patient Outcomes trial, ticagrelor compared with placebo was associated with a 16% relative risk reduction in the composite end point of death from vascular causes, MI, or stroke (P,0.001). 14 Finally, in a meta-analysis of 23 trials consisting of 7,739 patients with STEMI that compared primary PCI with thrombolytic therapy, primary PCI resulted in a reduced short-term (4-6 weeks) overall mortality (P=0.0002) and long-term (6-18 months) overall mortality (P=0.0019) compared with fibrinolytic therapy. 16,17 Second, from a policy perspective, the Centers for Medicare and Medicaid Services uses 30-day mortality for MI as a quality core measure to compare hospitals. 26 For example, the 30-day Centers for Medicare and Medicaid Services mortality measure currently includes deaths regardless of whether the patient dies while still in the hospital or after discharge. 27 With this in mind, having an estimate of the cost of in-hospital deaths that are due to ACS provides further evidence that the economic burden of ACS inpatient death still remains high. These data also suggest that additional strategies to potentially reduce these costs should be explored such as better management of high-risk patients prior to admission through disease and care management models, as well as, potential implementation of evidence-based therapies and addressing comorbid conditions during admission. 10,28 Nonetheless, our analysis does have the following limitations. First, although ACS was the primary admission and discharge diagnosis for both cohorts, other comorbidities could have contributed to the death, ie, multiorgan failure or sudden cardiac death, which may not have been captured as a primary diagnosis for the discharge status. Additionally, as we are using claims data, we could not ascertain the exact timing of a patient's ACS symptoms, time to possible reperfusion, and patient-level risk factors such as blood pressure, weight, electrocardiogram changes, elevations in biomarkers, which could influence a patient's prognosis. 1-2 However, we were able to match between groups based on age, comorbidity burden through the chronic condition index and previous history of coronary artery disease through the presence of any previous revascularization prior to admission. Second, we did not match the deceased and survived cohorts for ACS subtype, as the uneven number of patients with STEMI, NSTEMI, and UA did not allow for an even matching across both cohorts. Third, in the adjusted analyses of the outcomes of interest, we did not control for specific comorbidities such as HF or cardiac arrhythmias that occurred either in the baseline period or during the ACS admission that could contribute to death. Rather, we took into account the total comorbidity burden through the CCI score. Also, within the survived cohort, we did not follow these patients longitudinally after discharge and the possibility exists that these patients could have been readmitted. Fourth, we also did not control all-cause prior hospitalization in the 6 month baseline period (prior to the index admission date), which has been associated with increased mortality for ACS patients. 29 However, our study population did not show any differences for this measure between the two cohorts (see Table 1). Finally, while total mean cost of hospitalization was evaluated, we were not able to identify specific contributors to these additional costs. The nature of the claim data only allowed us to determine the bundled cost for a total hospital admission. Additionally, the database utilized was not an inpatient database.
Conclusion
In-hospital death associated with ACS is extremely costly. Our study described critical and previously unknown
20
Page ii et al characteristics of an ACS hospital admission for patients who survived compared with those who died during the admission. Our findings demonstrate the economic consequences of inhospital mortality for ACS patients. Additional studies are needed in this population to determine if better management during an ACS admission is needed or if other approaches such as care management programs prior to admission can potentially reduce in-patient mortality-associated hospitalizations for ACS which may in turn impact costs.
Disclosure
Dr Hoetzer and Mr Bhandary are employed by AstraZeneca. All other authors report no conflicts of interest in this work.
Publish your work in this journal
Submit your manuscript here: http://www.dovepress.com/vascular-health-and-risk-management-journal Vascular Health and Risk Management is an international, peerreviewed journal of therapeutics and risk management, focusing on concise rapid reporting of clinical studies on the processes involved in the maintenance of vascular health; the monitoring, prevention and treatment of vascular disease and its sequelae; and the involvement of metabolic disorders, particularly diabetes. This journal is indexed on PubMed Central and MedLine. The manuscript management system is completely online and includes a very quick and fair peer-review system, which is all easy to use. Visit http://www.dovepress.com/ testimonials.php to read real quotes from published authors. Abbreviations: ACS, acute coronary syndromes; ICD-9, International Classification of Diseases -9th revision; NOS, not otherwise specified; STEMI, ST-segment elevation myocardial infarction; nsTeMi, non-sT-segment elevation myocardial infarction. | 2018-05-08T18:22:24.781Z | 0001-01-01T00:00:00.000 | {
"year": 2016,
"sha1": "8c38f9e48e3ec0cfb2a5968c0868ee4875d0220d",
"oa_license": "CCBYNC",
"oa_url": "https://www.dovepress.com/getfile.php?fileID=28869",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8c38f9e48e3ec0cfb2a5968c0868ee4875d0220d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
258981146 | pes2o/s2orc | v3-fos-license | Green Supply Chain Decisions with Corporate Social Responsibility in Mind under Government Subsidies
: Taking a green supply chain consisting of a green product manufacturer and a dominant retailer as the research object, three different government subsidy scenarios are proposed in the case of retailer social responsibility and manufacturer social responsibility, namely, manufacturer subsidy, retailer subsidy and consumer subsidy. and then discuss the influence of government subsidies on supply chain decision making and corporate social responsibility under different circumstances.
Introduction
At the same time of the rapid development of the world economy, the problems of environmental pollution and resource scarcity have become increasingly prominent. Therefore, the green and sustainable development has aroused more and more people's attention. In this context, more and more enterprises are realizing that if they want to enhance their market competitiveness, they must carry out green technology innovation, while taking into account their consumer, environmental and social responsibilities. The government also vigorously supports the development of green industry and has introduced many subsidy strategies. Therefore, it is of great significance to study the subsidy strategy of green supply chain. In recent years, the issue of green supply chain has aroused a lot of scholars' extensive research, and a lot of achievements have been made in the issues related to corporate social responsibility. Yue H et al. [1]studied the social responsibility behavior of lithium-ion battery enterprises and verified that the total profit of closed-loop supply chain would increase in the presence of corporate social responsibility. LI et al. [2]respectively constructed a two-stage closed-loop supply chain model under centralized and decentralized decision-making in which manufacturers and retailers bear social responsibilities separately. It is found that social welfare is greater when manufacturers and retailers jointly fulfill social responsibilities. In terms of government subsidy strategy, many scholars have also done in-depth research. HAN et al. [3]made a comparative analysis of the influence of retailers' fairness concerns on supply chain decision-making under two conditions: no government subsidies and government subsidies. It is found that government subsidies are beneficial to increase supply chain profits and have a coordinating effect on the influence of retailers' equity concerns on product pricing and greenness decisions. Jian X A et al. [4]investigated the pricing of governmentsubsidized green products and the optimal strategy of green investment. The study showed that the government's subsidy behavior to manufacturers could motivate manufacturers to invest in a higher level of green technology. To sum up, only a few scholars have studied both corporate social responsibility and government subsidies. In addition, most of these studies consider situations where manufacturers dominate the supply chain. Based on this, this paper constructs a single channel green supply chain consisting of a green product manufacturer and a dominant retailer on the basis of the existing literature. Considering the situation of retailers fulfilling their social responsibilities, this paper explores the influence of government subsidies on the level of retailers' social responsibilities and supply chain decisions.
Problem description and model hypothesis 2.1 Problem description
This paper is based on the green supply chain consisting of a single manufacturer and retailer, manufacturer and retailer Stackelberg game, retailer is the leader, manufacturer is the follower. At the same time, considering the impact of retailers and manufacturers to fulfill CSR, and considering three kinds of government subsidies, namely subsidies to manufacturers, subsidies to retailers and subsidies to consumers, to analyze the effects of different subsidy situations. Table 1 shows the meanings of related symbols.
Model hypothesis
Based on the above conditions, the assumptions of this paper are as follows: Hypothesis1.Manufacturers and retailers are "economic men" and risk neutral.
Hypothesis2.In order to ensure the profitability of supply chain members, the parameters of the supply chain system shall meet the following requirements: c w p .
Hypothesis3.The R&D investment of green products is 2 / 2 ug [5]and the R&D investment is a one-time investment that is fully borne by the manufacturer. Hypothesis4.The market demand of green products is affected by the price, the greenness of products and the amount of retailers' social responsibility. The demand
Situation of Government subsidies to manufacturers
To represent the subsidy coefficient of unit product based on greenness, the government subsidy expenditure of unit product is sg [7].The decision sequence of supply chain is retail price p and social responsibility performance h . The manufacturer then determines the wholesale price w, greenness of the product g and the amount of social responsibility n to be fulfilled according to the retailer. The results of Conclusion 1 show that when the government's greenness subsidy coefficient s increases, the greenness of products will increase. Due to consumers' preference for green products, the improvement of product greenness will increase the market demand, and the manufacturer will also increase its wholesale price. At the same time, the sales volume and retail price of retailers will increase correspondingly, and at the same time gain more profits. In this way, retailers will also take the initiative to improve their social responsibility performance.
Situation of government subsidies to retailers
We use to represent the unit subsidy coefficient of government subsidies for retailers to fulfill their social responsibilities. At this time, the profit functions of manufacturers and retailers are respectively: The following equilibrium solution is obtained: The proof procedure is consistent with conclusion 1, which is omitted here. The results show that when the government directly subsidizes retailers, it will more directly reduce the cost burden of retailers to fulfill their social responsibilities, so that retailers are more motivated to shoulder the corresponding social responsibilities.
The situation of government subsidies to consumers
In order to stimulate green consumption, the government will also provide subsidies to consumers as appropriate. Suppose that the amount of government subsidy for unit green product price is k , that is, the actual purchase price of the consumer is p k [8],
Conclusion
The research results show that: first, compared with the situation where the government does not provide subsidies, no matter what kind of subsidies the government adopts, the greenness of products, the amount of retailers' social responsibility fulfillment, the amount of manufacturers' social responsibility fulfillment, the profits of manufacturers and retailers can increase. Second, from the perspective of manufacturers and retailers, when the government unit subsidy expenditure is the same, the government subsidy to manufacturers will make manufacturers and retailers gain higher earnings. Therefore, both manufacturers and retailers tend to prefer the government to provide manufacturers with unit greenness subsidy. Third, for the government, when the subsidy expenditure of government units is the same, government subsidies to manufacturers will stimulate manufacturers to increase investment in green technology, thus improving the greenness of products. Government subsidies to retailers will more effectively encourage retailers to improve the level of social responsibility. Therefore, the government can choose different subsidy methods according to different situation. | 2023-05-31T15:11:50.177Z | 2023-01-01T00:00:00.000 | {
"year": 2023,
"sha1": "1bd87ba229409ebc89eeff617b7939e2dbd977ec",
"oa_license": "CCBY",
"oa_url": "https://www.shs-conferences.org/articles/shsconf/pdf/2023/18/shsconf_fems2023_01008.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "2da561e74107b1ba38838bbbce7d26375aa3e737",
"s2fieldsofstudy": [
"Business"
],
"extfieldsofstudy": []
} |
263217904 | pes2o/s2orc | v3-fos-license | Anti-hypertensive effect of a novel angiotensin II receptor neprilysin inhibitor (ARNi) -S086 in DSS rat model
Introduction Angiotensin receptor-neprilysin inhibitor (ARNi), comprised of an angiotensin receptor blocker (ARB) and a neprilysin inhibitor (NEPi), has established itself as a safe and effective intervention for hypertension. S086 is a novel ARNi cocrystal developed by Salubris for the treatment of heart failure and hypertension. Methods Dahl Salt Sensitive (DSS) hypertensive rat model and telemetry system were employed in this study to investigate the anti-hypertensive efficacy of S086 and compare it with the first ARNi-LCZ696. Results and discussion The study showed that oral administration of S086 dose-dependently lowered blood pressure (P < 0.001). The middle dosage of S086 (23 mg/kg) exhibited efficacy comparable to LCZ696 (68 mg/kg), while also demonstrating superiority at specific time points (P < 0.05). Notably, water consumption slightly decreased post-treatment compared to the vehicle group. Furthermore, there were significant increases in natriuresis and diuresis observed on the first day of treatment with 23 mg/kg and 68 mg/kg S086 (P < 0.001). However, over the course of treatment, the effects in all treatment groups gradually diminished. This study demonstrates the anti-hypertensive efficacy of S086 in DSS hypertensive rat model, offering promising avenues for the clinical development of S086 as a hypertension treatment.
Introduction: Angiotensin receptor-neprilysin inhibitor (ARNi), comprised of an angiotensin receptor blocker (ARB) and a neprilysin inhibitor (NEPi), has established itself as a safe and effective intervention for hypertension.S086 is a novel ARNi cocrystal developed by Salubris for the treatment of heart failure and hypertension.Methods: Dahl Salt Sensitive (DSS) hypertensive rat model and telemetry system were employed in this study to investigate the anti-hypertensive efficacy of S086 and compare it with the first ARNi-LCZ696.
Results and discussion: The study showed that oral administration of S086 dose-dependently lowered blood pressure (P < 0.001).The middle dosage of S086 (23 mg/kg) exhibited efficacy comparable to LCZ696 (68 mg/kg), while also demonstrating superiority at specific time points (P < 0.05).Notably, water consumption slightly decreased post-treatment compared to the vehicle group.Furthermore, there were significant increases in natriuresis and diuresis observed on the first day of treatment with 23 mg/kg and 68 mg/kg S086 (P < 0.001).However, over the course of treatment, the effects in all treatment groups gradually diminished.This study demonstrates the anti-hypertensive efficacy of S086 in DSS hypertensive rat model, offering promising avenues for the clinical development of S086 as a hypertension treatment.
Introduction
Hypertension is a worldwide chronic cardiovascular disease (CVD) and the leading cause for premature death.Over the past few decades, the population of hypertension patients has surged due to the aging of society (1).According to data from the NCD (non-communicable diseases) Risk Factor Collaboration, there were more than 1.2 billion people aged 30-79 with hypertension worldwide in 2019, double the number from 1990 (2). Notably, the 2017 American College of Cardiology (ACC)/American Heart Association (AHA) Hypertension Guidelines have lowered the threshold for hypertension diagnosis from a systolic blood pressure (BP)/diastolic BP of ≥140/90 mm Hg to ≥130/80 mm Hg, indicating an increase in patients needing medical treatment in the future (3).High salt dietary consumption is one of the predominant risk factors for the essential pathogenesis of hypertension and strongly correlated with BP-independent organ damage.It has been reported that 30%-50% of hypertensive patients are associated with high salt intake (4).Studies suggest that high salt intake can lead to increased blood pressure by decreasing glomerular filtration membrane permeability, reducing filtration area and the number of glomeruli, and producing excess reactive oxygen species.As sodium load increases, endothelial function decreases, resulting in impaired vascular relaxation and elevated blood pressure (5,6).Diuretics are effective in lowering blood pressure caused by a high-salt diet, but their adverse effects, such as low potassium and high uric acid, also require attention in clinics (7).
ARNi, which is a co-crystal comprising an angiotensin receptor II blocker (ARB) and a neprilysin inhibitor (NEPi), is a novel therapy for cardiovascular disease (8).Its most common use in clinical practice is for heart failure patients, particularly those with reduced ejection fraction (HF-REF), and is recommended as the first-line therapy for HF-REF treatment by ACC (American College of Cardiology)/AHA (American Heart Association)/ HFSA (Heart Failure Society of America) and ESC (European Society of Cardiology) guidelines (9, 10).Additionally, ARNi has shown efficacy in hypertensive patients due to NEPi's mechanism of inhibiting ANP (Atrial Natriuretic Peptide) and BNP (Brain natriuretic peptide) degradation which leads to natriuresis and diuresis, potentially reducing blood pressure in salt-sensitive patients (11).LCZ696 (Entresto) is the first ARNi product launched globally and has shown profound efficacy in controlling BP in Asian patients with salt-sensitive hypertension (12).In several clinical trials, LCZ696 had a significantly superior effect on BP control compared with current first-line antihypertensive agents, Valsartan, and Olmesartan (13)(14)(15)(16).S086, a novel ARNi which is a co-crystal containing ARB (an active metabolite of Losartan), NEPi, calcium, and hydrate.EXP3174, which is a higher potency and longer t 1/2 ARB in S086 than Valsartan in LCZ696.The NEPi ingredient in S086 is the same as that in LCZ696, which is a prodrug of LBQ657-Sacubitril.S086 has been proven to have comparable efficacy to LCZ696 in rat and dog myocardial infarction models.Both S086 and LCZ696 improved the left ventricular ejection fraction and myocardial fibrosis, and do not have a significant impact on hemodynamics (17).The phase I clinical trial for S086 in healthy volunteers indicated a dose-dependent pharmacokinetic profile (C max and AUC) and pharmacodynamic effect (mean diastolic and systolic blood pressure).Throughout the duration of the trial, there were no reports of serious adverse events.Hypotension was the most commonly observed side effect, which is an expected pharmacological reaction to S086 (18).
To further advance the clinical applications of S086, we conducted a study utilizing an implantable telemetry system to observe real-time blood pressure.We investigated the antihypertensive effects of the new generation of ARNi in a rat model of Dahl Salt Sensitive (DSS) hypertension and measured natriuresis and diuresis.In this study, we found that S086 demonstrates a dose-dependent antihypertensive effect and greater potency in controlling blood pressure compared to LCZ696, EXP3174, and Sacubitril.These results are promising and open up opportunities for further clinical investigations of S086 as a potential hypertension treatment.
Animals
Dahl Salt Sensitive (DSS) male rats aged 7-9 weeks were provided by Beijing Vital River Laboratory Animal Technology Co., Ltd.The animals were free to access sterilized standard laboratory food and water.The animal room environment was controlled (target conditions: temperature 20 to 26°C, relative humidity 30%-70%, 12 h of artificial light, and 12 h of dark).Temperature and relative humidity were monitored twice daily.The study was conducted in accordance with the Institutional Animal Care and Use Committee at WuXi AppTec.
The study used 0.5% CMC-Na (Sodium Carboxymethyl Cellulose) as the vehicle for all the compounds.This vehicle was prepared by weighing and dissolving CMC-Na in deionized water at a concentration of 0.5 g CMC-Na/100 ml water.All the compounds were dissolved in this 0.5% CMC-Na solution at a concentration calculated based on their anhydrous-free acid weight.The theoretical weight of each compound was multiplied by a corresponding conversion factor (S086: 1.12; Sacubitril calcium salt: 1.10; EXP3174: 1.09; and LCZ696: 1.14) to determine the actual weight to be used in the solution.The prepared solutions were stored at 2-8°C and were used within specific timeframes.S086 and EXP3174 solutions were prepared every 3 days, while Sacubitril calcium salt and LCZ696 solutions were prepared daily.Prior to dosing, the solutions were allowed to reach room temperature for 10-15 min.The rats were each given a dosage of 5 ml/kg of body weight, which was well-mixed with their respective solutions.
Pentobarbital was purchased from Alfasan International B.V and stored at room temperature.Na + test kit was purchased from Medical System and stored at 2-8°C.8% salt forage (blue particles) and 0.3% salt forage (yellow particles) were purchased from Jiangsu Medicience and stored at 2-8°C.Meloxicam injection was purchased from Qilu Animal Health Products CO., LTD. and stored at room temperature.Gentamicin was purchased from Yichang Humanwell Pharma CO. and stored at room temperature.
Implantation of blood pressure telemetry implant
The rats were injected intraperitoneally with a pentobarbitalnormal saline solution (50 mg/kg) for anesthesia before undergoing the implantation procedure, which involved the following steps: performing an abdominal incision surgery on the rats and separating their abdominal aorta; injecting a blood pressure detection probe into the abdominal aorta and securing it to the abdominal wall; suturing the muscles and skin and administering subcutaneous injections of meloxicam (1 mg/kg) for pain relief and gentamicin (5 mg/kg) for infection prevention; placing the rats on a constant temperature blanket and feeding them separately after they regained consciousness.Additionally, subcutaneous injections of meloxicam (1 mg/kg) for pain relief and gentamicin (5 mg/kg) for infection prevention were also administered daily for three days after surgery.
Dahl salt sensitive hypertensive rat model
Telemetry devices were implanted in rats to measure heart rate and 24-h baseline blood pressure.The rats were divided into two groups, with one group receiving 0.3% salt forage as a sham and the other group receiving 8% salt forage to induce hypertension.After seven days, rats with an average 24-h systolic blood pressure of ≥160 mmHg were chosen and randomly divided into seven groups based on their systolic blood pressure.The successful standard for generating a hypertensive model was an average 24-h systolic blood pressure of ≥160 mmHg.The groups included sham group (n = 8), vehicle group (n = 7), LCZ696 group (68 mg/kg, n = 7), EXP3174 group (35 mg/kg, n = 7), Sacubitril group (33 mg/kg, n = 7), S086 low dose group (8 mg/kg, n = 7), S086 middle dose group (23 mg/kg, n = 7), and S086 high dose group (68 mg/kg, n = 7).The LCZ696, EXP3174, Sacubitril, and S086 high dose groups received equimolar doses of compounds.All groups were orally administered for 28 days, and dosing occurred between 11:00 AM and 12:00 PM daily.The rats' heart rate and blood pressure were monitored weekly (Figure 1).
Blood pressure and heart rate detection
The Dataquest ART system by Data Sciences International was utilized for real-time monitoring of blood pressure and heart rate for 24 h on days 1, 7, 14, 21, and 28 after administering, with 11:00 AM data serving as a baseline each day (refer to Figure 1).The analysis of blood pressure was performed using Ponemah Software 5.0 from Data Sciences International, and the MAP (Mean Arterial Pressure) was calculated using the following formula: MAP = (SBP + 2 × DBP)/3.
Water consumption, natriuresis and diuresis detection
Water consumption and urinary output were measured using metabolic cages.Rats were placed in the metabolic cages at 16:00 PM∼17:00 PM on the day before detection day.Urine was collected 6 h after dosing for the assessment of natriuresis and diuresis on days 1, 4 and 28.24-h water consumption was recorded and measured twice weekly.Natriuresis was quantified using Na + test kits and a fully automatic biochemical instrument from HITACHI Automatic Analyzer 3100 (19).
Body weight
Body weight was measured daily before dosing.
Statistical analysis
The manuscript complies with British Journal of Pharmacology's recommendations and requirements on experimental design and analysis (20).The declared group size is the number of independent values, and statistical analysis was done using these independent values (i.e., not treating technical replicates as independent values).In terms of this study, data were presented as means ± S.E.M. Plots were produced using Graphpad Prism 9.0.Statistical analysis was performed using the SPSS software.For multiple groups comparison, the Levene's test was used to test for equality of variances; if P > 0.05, one-way ANOVA were performed; if P ≤ 0.05, Kruskal-Wallis test were performed.For one-way ANOVA, if P ≤ 0.05, the LSD post hoc test was performed.For the Kruskal-Wallis test, if P ≤ 0.05, the LSD post hoc test was performed after the ranks were transformed into normal scores.Otherwise, no data normalization was performed.We conducted a full data analysis without excluding outliers.A Pvalue <0.05 was considered statistically significant.
S086 effectively reduced systolic blood pressure (SBP) in DSS rats
The SBP in DSS rats increased progressively over time in the vehicle high salt (8%) group, and was significantly higher compared to the sham low salt (0.3%) group at each measurement point between day 1 and day 28 after dosing (P < 0.001).The SBP in the sham low salt (0.3%) group was approximately 150 mmHg, whereas the SBP in the vehicle high salt (8%) group ranged from 162.87 mmHg on day 1 to Schematic design of the study.The study was divided into three stages.Firstly, telemetry devices were implanted into rats.Secondly, DSS rat hypertension model was created by administering high salt diets for 7 days and sham group animals were administered low salt diets.Thirdly, rats were treated with the designated interventions for 4 weeks, and body weight, blood pressure, heart rate, water consumption, natriuresis, and diuresis were measured during the treatment period.1).Notably, the peak SBP occurred between 9:00 PM and 2:00 AM, while the valley SBP was observed between 1:00 PM and 6:00 PM (Figure 2).On day 1, there were no significant changes in SBP observed in each dosing group compared to the vehicle high salt (8%) group.However, on day 7, the LCZ696-68 mg/kg group, EXP3174-35 mg/kg group, and different doses of S086 (8, 23, 68 mg/kg) groups showed significant reductions in the 24-h mean SBP compared to the vehicle high salt (8%) group.The reductions were 16.18 mmHg (P < 0.001), 15.75 mmHg (P < 0.01), 12.97 mmHg (P < 0.05), 15.76 mmHg (P < 0.01), and 22.56 mmHg (P < 0.001), with corresponding reduction rates of 9.4%, 9.1%, 7.5%, 9.1%, and 13.1%, respectively.S086 demonstrated dose-dependent efficacy in SBP and superior efficacy compared to an equal molar dose of LCZ696-68 mg/kg at specific time points (P < 0.05).Additionally, S086 had a significantly better effect on SBP compared to the equimolar dose of EXP3174 (P < 0.05) and Sacubitril (P < 0.001) (Table 1).
The efficacy of each dosing group on days 14, 21 and 28 showed similar results compared to day 7, and the efficacy increased over time compared to the vehicle high salt (8%) group.The reduction rate of SBP in LCZ696 group increased from 9.4% on day 7 to 16.5% on day 28, and from 13.1% to 19.5% for S086-68 mg/kg.The Sacubitril-33 mg/kg group exhibited a significant effect on day 21 and 28, with the mean 24 h SBP reduced by 16.91 mmHg (P < 0.01) and 14.86 mmHg (P < 0.05) compared to the vehicle high salt (8%) group.
The middle and high dose groups of S086 (23 and 68 mg/kg) could sustain SBP levels similar to the sham group for approximately 14 h (11:00 AM to 8:00 PM and 6:00 AM to 11:00 AM) after 28 days of dosing.The other groups were unable to lower their SBP to the level of the sham group.(Figure 2) The 24-h systolic blood pressure (SBP) and time curve.The successfully established DSS rats were randomly divided into seven groups (n = 7/group), with each group receiving different drugs or the same drug with different doses, as indicated in the figure.The low salt group (n = 8/group) was used as sham control.The vehicle model group rats that received solvent.SBP was tested every week.
S086 effectively reduced diastolic blood pressure (DBP) in DSS rats
The DBP in DSS rats followed a similar trend to the SBP results.In the sham low-salt (0.3%) group, the DBP was approximately 100 mmHg, while in the vehicle high-salt (8%) group, it increased from 113.87 mmHg on day 1 to 146.31 mmHg on day 28 (Figure 3, Table 2).
The middle and high dose groups of S086 (23 and 68 mg/kg) could sustain the DBP at or near the level of the sham group (some time points lower than the sham group) for about 14 h (from 11:00 AM to 8:00 PM and from 6:00 AM to 11:00 AM) after 28 days of dosing (Figure 3).
S086 effectively reduced mean arterial pressure (MAP) in DSS rats
As MAP is calculated from SBP and DBP, the effect of each treatment group on MAP was similar to its effect on SBP and DBP.The MAP in the sham low-salt (0.3%) group was approximately 120 mmHg.The vehicle high-salt (8%) group had significantly higher MAP than the sham low-salt (0.3%) group at all time-points from day 1 to day 28 after dosing (P < 0.001).
Compared to the vehicle group, all treatment groups showed varying degrees of slight decrease in water consumption during the treatment time (Figure 6).
The results of the natriuresis study demonstrated that the strongest effect was observed on the first day of treatment, with significant differences compared to the vehicle group (P < 0.05).As the treatment duration progressed, the natriuretic effects of all The 24-h diastolic blood pressure (DBP) and time curve.The successfully established DSS rats were randomly divided into seven groups (n = 7/group), with each group receiving different drugs or the same drug with different doses, as indicated in the figure.The low salt sham group (n = 8/group) was used as sham control.The vehicle model group rats that received solvent.DBP was tested every week.treatment groups gradually diminished.However, on the 28th day of treatment, the LCZ696 group (P < 0.05) and the middle (P < 0.05)/high (P < 0.01)-dose groups of S086 still exhibited significant natriuretic effects (Figure 7).Regarding diuresis, significant diuretic effects were observed on the first day of treatment in the LCZ696 group (P < 0.001), Sacubitril group (P < 0.01), and middle/high-dose groups of S086 (P < 0.001), with statistically significant differences.The EXP3174 group and the low-dose group of S086 showed an increasing trend in diuresis but without statistical significance.Over the course of treatment, the diuretic effects of all treatment groups gradually weakened.However, on the 28th day of treatment, the The 24-h mean arterial pressure (MAP) and time curve.MAP is calculated using the following formula: MAP = (SBP + 2×DBP)/3.† P < 0.05 vs. LCZ696-68 mg/kg.LCZ696 group and the middle/high-dose groups of S086 still exhibited significant diuretic effects (P < 0.05) (Figure 8).
Body weight
Compared to the sham group, the vehicle group, EXP3174 group, and Sacubitril group all exhibited a significant decrease in body weight on day 28 (P < 0.05).However, no significant differences were observed between the vehicle group and all other treatment groups (Figure 9).
Discussion
Hypertension is a chronic cardiovascular disease characterized by elevated systolic and/or diastolic blood pressure.It is a major risk factor for various cardiovascular diseases.A high-salt diet is Water consumption.The successfully established DSS rats were randomly divided into seven groups (n = 7/group), with each group receiving different drugs or the same drug with different doses, as indicated in the figure.The low salt sham group (n = 8/group) was used as sham control.The vehicle model group rats that received solvent.Water consumption was tested every week.### P < 0.001 vs. Sham.Heart rate (HR).The successfully established DSS rats were randomly divided into seven groups (n = 7/group), with each group receiving different drugs or the same drug with different doses, as indicated in the figure.The low salt sham group (n = 8/group) was used as sham control.The vehicle model group rats that received solvent.HR was tested every week.closely associated with hypertension, and its mechanism of action is complex.ARNi represents a novel class of antihypertensive medications, distinguished by its unique mechanism of action.This mechanism operates through the inhibition of both RAAS and NEP, which collectively contribute to blood pressure reduction.Specifically, the inhibition of RAAS results in vasodilation and a decrease in aldosterone secretion.
Concurrently, NEP inhibition activates the natriuretic peptide system, leading to diuretic and natriuretic effects.The culmination of these effects ultimately results in the lowering of blood pressure (8).The first ARNi drug, LCZ696, has been reported to have significant antihypertensive effects, particularly in patients with salt-sensitive hypertension, and its clinical efficacy in reducing blood pressure is significantly better than (17,18).We investigated the antihypertensive effect of S086 compared to LCZ696 using the DSS rat model of hypertension and explored the diuretic and natriuretic effects of ARNi drugs.Additionally, we used a realtime telemetry system for blood pressure measurement, providing a more accurate reflection of rats' real-time blood pressure status, and avoiding blood pressure fluctuations caused by traditional animal manipulation methods.
Our study found that peak blood pressure values for both DSS rats and normal rats occurred between 9:00 PM and 2:00 AM, and trough values occurred between 1:00 PM and 6:00 PM.This indicates a significant difference from the circadian rhythm of human blood pressure, which peaks between 12:00 PM and 6:00 PM and has trough values between 1:00 AM and 4:00 AM according to clinical research (21,22).Differences in blood pressure rhythms between rats and humans may be due to differences in their circadian activity rhythms.Rodents are typically active and eat at night and rest during the day, while humans are generally active and eat during the day and rest at night.Our administration of compounds to DSS rats at 11:00 AM is equivalent to humans taking medication before bedtime at 11:00 PM, which has certain guiding significance for the timing of administration in future rodent models of hypertension.If we aim to fully simulate human clinical medication habits (taking medication in the morning), we suggest administering animals from 5:00 PM to 6:00 PM (23,24).
The new generation of ARNi drug-S086 demonstrated significant antihypertensive effect in the DSS rat model of hypertension, dose-dependently reducing both systolic and diastolic blood pressure (Figures 2, 3).In the DSS hypertensive rat model, a high-salt diet leads to sodium and water retention, resulting in increased blood volume and elevated blood pressure.Furthermore, studies have reported that a high-salt diet directly activates the renin-angiotensin-aldosterone system (RAAS), which is an additional mechanism contributing to the development of high blood pressure (25).S086, composed of an ARB and NEP inhibitor, can directly inhibit the RAAS system while activating the natriuretic peptide system, ultimately lowering blood pressure.Compared to LCZ696 (68 mg/kg), the middle dose of S086 (23 mg/kg) demonstrated efficacy comparable to LCZ696 (68 mg/kg), while also demonstrating superiority at specific time points (P < 0.05) (Figure 4).Prior studies had reported that the ARB component of S086-EXP3174 exhibited greater potency towards AT1 receptor and a longer half-life compared to Valsartan in LCZ696, which could potentially lead to clinical benefits for patients with hypertension (18, 26, 27).Additionally, the antihypertensive effects of S086 are better than those of an equimolar dose of EXP3174 for two reasons.Firstly, S086 metabolizes into EXP3174 and Sacubitril.Sacubitril further metabolizes into LBQ657-NEP inhibitor, and both of EXP3174 and LBQ657 reduce blood pressure through different mechanisms.Although the antihypertensive effect of the NEP inhibitor is relatively moderate, it is still stronger than that of the separate dose of EXP3174.Secondly, Sacubitril increases the exposure level of EXP3174, resulting in a higher exposure after administering an equimolar dose of S086 (17).
LBQ657 as a neprilysin inhibitor has been reported to significantly increase the expression of ANP and other natriuretic peptides in the body (28).Upon metabolism into LBQ657, S086 activates the natriuretic peptide system, resulting in natriuresis and diuresis (29).EXP3174, a high potent ARB metabolized from S086, has been shown to have a natriuretic effect (30).We investigated the effects of each compound on water consumption, natriuresis and diuresis in the DSS model.The results indicated that each treatment group showed a slight decrease in water consumption compared to the vehicle group.This decrease may be attributed to the drug's natriuretic and diuretic effects, leading Frontiers in Cardiovascular Medicine 10 frontiersin.orgto differences in salt and water balance in the body.Regarding the natriuresis and diuresis study, significant natriuretic and diuretic effects were observed in all treatment groups on the first dosing day (P < 0.05).However, over time, the intensity of these effects gradually diminished, which aligns with the trend observed in clinical studies of LCZ696 in patients with salt-sensitive hypertension.In that study, compared to Valsartan monotherapy, LCZ696 showed significant increases in natriuresis and diuresis on the first day after administration, which could not be sustained (12).However, we observed antihypertensive efficacy with Sacubitril (pro-drug of LBQ657) monotherapy, especially after 14∼28 days of treatment, when its effect was stronger than after 7 days.This suggests that the antihypertensive effect of NEPi may not be solely due to natriuresis and diuresis, but rather from vasodilation.ANP and BNP can activate receptors expressed in peripheral blood vessels, leading to vasodilation.ANP and BNP can also inhibit aldosterone release, blocking the downstream signaling pathway of the RAAS system, which finally lead to antihypertensive effect (31).However, the inhibitory effect of NEPi on the downstream signaling of the RAAS system activates the body's negative feedback regulation mechanism, promoting the activity of upstream signals of the RAAS system, thereby activating the RAAS system.Therefore, NEPi's antihypertensive effect alone is slight and must be used in combination with RAAS system blockers.
The first-generation NEPi-omapatrilat simultaneously inhibited both NEP and ACE to activate the natriuretic peptide system and inhibit the RAAS system.However, due to the accumulation of bradykinin (a substrate of NEP and ACE) in the body, causing severe vascular edema, the drug was ultimately withdrawn from the market (32,33).NEPi and ARB combine to form a cocrystal, which reduces the risk of vascular edema caused by omapatrilat (NEPi and ACE).The cocrystal form has better drug properties than physical mixtures since it improves solubility, enhances compound pharmacokinetic properties, and increases absorption (34,35).We developed S086-a novel ARNi cocrystal, which improved EXP3174's poor PK profile.Preclinical studies validated its significant blood pressure-reduction effect, superior to LCZ696.A completed phase 1 clinical trial demonstrated that S086 is well-absorbed in the human body, exhibits linear absorption, and can significantly affect target-related biomarkers.These provide a solid foundation for conducting further clinical studies.We will explore S086's antihypertensive effect in future phase 2 and phase 3 clinical trials, providing better treatment options for hypertension patients.
Conclusions
In our preclinical study on DSS rats with hypertension, we observed that the novel ARNi drug S086 had a significant antihypertensive effect.Its effect was potential to surpass that of the first-generation ARNi drug LCZ696.These results support further phase 2 and phase 3 clinical studies on S086 to explore its efficacy and safety in treating patients with hypertension.
FIGURE 8
FIGURE 8Diuresis.The successfully established DSS rats were randomly divided into seven groups (n = 7/group), with each group receiving different drugs or the same drug with different doses, as indicated in the figure.The low salt sham group (n = 8/group) was used as sham control.The vehicle model group rats that received solvent.Diuresis was tested on days 1, 4 and 28, 6 hours after dosing.### P < 0.001 vs Sham; *P < 0.05, **P < 0.01, ***P < 0.001 vs Vehicle; ^^P < 0.01 vs EXP3174.
FIGURE 7
FIGURE 7Natriuresis.The successfully established DSS rats were randomly divided into seven groups (n = 7/group), with each group receiving different drugs or the same drug with different doses, as indicated in the figure.The low salt sham group (n = 8/group) was used as sham control.The vehicle model group rats that received solvent.Natriuresis was tested on days 1, 4 and 28, 6 hours after dosing.### P < 0.001 vs Sham; *P < 0.05, **P < 0.01, ***P < 0.001 vs Vehicle; ^^P < 0.01 vs EXP3174; & P < 0.05, && P < 0.01 vs Sacubitril.
FIGURE 9
FIGURE 9 Body weight.The successfully established DSS rats were randomly divided into seven groups (n = 7/group), with each group receiving different drugs or the same drug with different doses, as indicated in the figure.The low salt sham group (n = 8/group) was used as sham control.Body weight was measured pre-dosing and daily after dosing. | 2023-09-29T11:17:36.186Z | 2024-02-14T00:00:00.000 | {
"year": 2024,
"sha1": "8fcde2a04f04734934659cbb0d21296a39a14959",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fcvm.2024.1348897/pdf?isPublishedV2=False",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4f12353b53bf125b236a3cca17ec604672605e76",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
195574887 | pes2o/s2orc | v3-fos-license | Prediction of Dental Caries Preventive Behaviors using Health Belief Model (HBM)
Copyright© 2019, TMU Press. This open-access article is published under the terms of the Creative Commons Attribution-NonCommercial 4.0 International License which permits Share (copy and redistribute the material in any medium or format) and Adapt (remix, transform, and build upon the material) under the Attribution-NonCommercial terms. Prediction of Dental Caries Preventive Behaviors using Health Belief Model (HBM)
Health Education and Health Promotion
Summer 2019, Volume 7, Issue 3
Introduction
One of the most common diseases that affects the world's societies is tooth decay. The general health of the body is dependent on the health of the mouth and its health affects the health of the whole body [1] . One of the main criteria for the health of the community is the examination of oral and dental health [2] . The disease since sucrose has entered a human diet strongly increased. In all countries, millions of dollars are spent annually to treat dental caries. The total cost of dental treatment is likely to be the most expensive treatment ever spent. Despite much progress, the field of fighting diseases, on a global scale and increasing communication and presence of community members in different social situations, the need to observe oral hygiene more than ever is felt [3] . The instability of everyday patterns and changes in the lifestyle of people have increased these diseases, and more than 99% of humans suffer from these diseases and because of the problems, more than 50 hours waste the time [4] . The health and behavior of people in every community about oral and dental health are influenced by the level of knowledge, knowledge, and attitudes towards oral health [5] . Preserving the teeth and supporting tissues up to the ages, on the one hand signifying high standards of health and, on the other hand, reflecting the health system's performance. Studies in our country show that health behaviors are moderate [3] . Therefore, in order to enable people to work in the right ways to maintain their health and avoid diseases, they need to shape health behaviors and make appropriate training programs for these behaviors [6] . When need to transform human behavior in health and wellness health education comes up. Effective health education depends on mastering use of the best theories and appropriative strategies and behavior. Patterns are the basis for the development of theories. One of the most important models in preventing the disease is a health belief model that explains the relationship between health beliefs and behavior. It is also designing health messages and suggesting interventions to engage people in healthpromoting behaviors [7] . Health belief model is one of the oldest patterns of behavioral health. This model is a comprehensive model that plays an important role in preventing the disease. It is an important indicator that shows the relationship between health beliefs and behavior and based on the hypothesis that preventive behavior based on the person's beliefs. This model focuses on the motivation, past experiences of the person and, in general, on the change in beliefs and can describe long-term and short-term health behaviors. Contains structures, perceived sensitivity (Perception of a person's susceptibility to disease), perceived severity (Perceived person's seriousness of the disease), Perceived benefits (Individual perception of the benefits of behavior), perceived barriers (Individual perception of problems in the way of doing behavior), action guides (Speeds of behavior), and self-efficacy (Belief in ability to do behavior) [8] . According to important oral hygiene for a healthy lifestyle and increasing number of oral and dental diseases and a large number of students at the Paramedical School determine the current status of students' belief in oral hygiene and selecting a pattern for health education, the first step in the process of educational planning for students. The purpose of this study was to predict the preventive behaviors of dental caries with the health belief model constructs in students of Paramedical School of Qazvin University of Medical Sciences.
Materials and Methods
This study is a cross-sectional study .The population under study in this cross-sectional study was the students of the Faculty of Paramedicine of Qazvin University of Medical Sciences. A multi-stage random sampling was used to conduct the study. At first, the students of the Faculty of Paramedicine were divided into 5 clusters, divided into the fields of study, including fields (Surgery room, Anesthetics, Medical Emergency, Nursing and Laboratory science). The total number of students was 640. The sample size was calculated by using the Cochran formula. In order to calculate the sample size, the following formula was used: A random sampling method was used to collect the samples from each cluster. In each group, 61 people should be collected.
Considering the possibility of loss in each group, 65 people were considered. The data collection tool was a questionnaire based on the health belief model [9] . The questionnaire consisted of three parts: the first part included demographic information, including the individual information of the field of study, and the second part related to the assessment of preventive behaviors of dental caries (7 questions). The third section included a series of poll questions based on the health belief model was in accordance with Likers criteria. The importance of using this model has been proven in numerous studies and its validity has been confirmed previously [8,10,11] . This model has the following constructs: 1) Perceived sensitivity: The person's opinion is about the chance of being in a particular position, which consists of 5 questions. condition is, which included 9 questions.
3) Benefits: The opinion of the person has an effect on the effectiveness of the recommended activities in reducing risk or severity, which includes 9 questions. 4) Objectives: The opinion of the person is that of the objective and the costs of the recommended activities, which consists of 18 questions. 5) Self-efficacy: Belief in the ability to conduct behavior that included 12 questions. The validity of this questionnaire was measured by content validity method. The questionnaire was prepared on the basis of the Health Belief Model and according to reliable sources and books. The validity of this questionnaire was assessed by content validity method so that the questionnaire was prepared based on the health belief model and according to reliable sources and books. The minimum acceptable ratio for content validity (CVR) based on the Lavelle's table and the number of specialists who evaluated the questions was 0.70. In this study, 14 people included 7 dental specialists and 7 health education specialist participated for evaluated of content validity. Therefore, the questions with a content ratio of less than 0.70 were set aside in this study. The value of the validity or CVI validity index in the designed questionnaire was also checked as follows: If the number of items from each item exceeded 0.79, appropriate, between 0.70 and 0.70, the questionnaire needed to be corrected and reviewed, and if the CVI index was less than 0.70, unacceptable, and the item was removed.
Eventually, validity has been quantitatively verified. Cronbach Alpha in perceived susceptibility (0.70), perceived barriers (0.80), perceived benefits (0.78), self-efficacy (0.85) and attitude (0.73) were obtained. SPSS 20 software was used to analyze the data. Regarding the data normalization, based on Kolmogorov test, data analysis was performed by using descriptive tests (Frequency, percentage, mean, and standard deviation), and analytical tests (Correlation coefficient, logistic regression, and linear regression.
Findings
The mean age of the studied population was (23.18±3.51 Pearson statistical test showed a significant correlation between the mean score of efficacy and the mean score of behavior (r=0.0937, p=0.00). Also, this test showed a significant relationship between the mean scores of perceived barriers (p=0.000, r=0.471). Pearson statistical test showed a significant correlation between the mean perceived benefit score and the mean score of the behavioral score (r=0.191, p=0.00). Other variables did not have a significant correlation with behavior ( Table 2). According to the results of Table 3, the sub-scales of the health belief model, predict 45% of the variance of health behavior. Also, according to Table 3 below, the perceived benefits, perceived barriers, and selfefficacy are significantly predicted by health behavior. Increasing the standard deviation in the perceived benefit score, the health behavior score will increase by 0.24 standard deviations. Also, by increasing a standard deviation in the self-efficacy score, the health behavior score will increase by 0.25 standard deviations.
Discussion
The average of health behaviors in students was 52.38%, which means that students' health behaviors are moderately evaluated. These findings are consistent with the results of Mehri, Mohseni et al. [6,9] . Based on the results of multiple regression analysis of health belief model constructs about dental caries preventive health behaviors, perceived barrier structures significantly had the most predictive power and 35% of the calculated variance of health behaviors in the research sample described. Also, Pearson test showed a significant correlation between the mean perceived barriers score and behavior (r=-0.471, p=0.00).
These results are consistent with the study of Solhi and Zamani [2,10] . When perceiving danger weakness, perceived barriers increase [11] . From the viewpoint of 43.2 of the students referring to the dentist and restoration of decayed teeth was an important barrier. Studies have shown that perceived barriers are the most potent dimension in expressing or predicting health protective behaviors [8] .
Considering the results, it is necessary to consider measures to reduce the cost of teeth for students. Therefore, it is expected that by removing existing barriers, oral hygiene behaviors can be improved. Also, multiple regression tests showed that perceived and self-efficacy barriers were one of the important factors affecting the behavior of students.
In another study on students of Yazd city selfefficacy and perceived barriers, predicted a total of 29% of behavior variance [9] . Pearson statistical test showed a significant correlation between the mean score of automatic achievement and the mean score of behavior (r=0.937, p=0.00). In other words, as students' selfefficacy and abilities increase, their oral and dental health behaviors Upgrades. This result is consistent with the findings of the Mehri [9] . Self-efficacy, as an important construct, of the health belief model is a strong predictor of oral health behaviors [9] . Considering the fact that the self-efficacy structure has a strong relationship with behavior, so it should be considered specially. According to Pearson test results, there was a significant correlation between mean perceived benefit score and mean score of behavioral scores (r=0.191, p=0.00). This results confirms the results of Mehri, Solhi, Zamani [2,9,10] .
These results indicate that when students do more health behavior who have confidence in their ability to carry out health behaviors and have oral and dental care. In this study, 49.4% of students brushed once a day, 36.1% brushed more than once, and 2.4% did not brush at all. 59.8% of students sometimes used dental floss and 24.4% of the students did not use dental floss. Because of good and bad health behaviors part of the community's culture, there is a lack of respect for health behaviors in this community. Therefore, the implementation of appropriate training programs to prepare people and use the right methods of life in order to promote health and avoidance of diseases is recommended.
In other words, findings may suggest inadequate and inappropriate education in oral and dental care.
The study of Ndiokwelu showed that although the students regularly brushed teeth and arranged meetings with the dentist, they had little knowledge about the causes of oral and dental illnesses. From the perspective of students, the preferred sources of information were, in order of priority, educational films and parents. However, according to the study of Mazloomi and Solhi at earlier age health educators and parents, the most important source and performance guide . The results showed that with increasing age the information resources changed. Therefore, in planning for the promotion of oral hygiene in students the role of the educational film must be strengthened.
In this study, 41.7% of the students visited the dentist twice a year. This is consistent with the results of Silva's study. Also, Pearson correlation test showed a significant correlation between the mean of indicators of this model (Perceived sensitivity, perceived severity, perceived benefits, perceived barriers, and selfefficacy) and behavior with parents' education. These results are consistent with the study of Mehri [12] . According to these results, it can be assumed that parental education is a means to raise the awareness of the children and lead to better performance.
Conclusion
According to the results, structures, perceived barriers, self-efficacy, and perceived benefits are considered as the most important structures with predictive power in adopting tooth decay preventive behaviors and predict 45% of health care variance. The results of the study indicated that students' health behaviors were moderate. Educational interventions based on perceived barrier structures self-efficacy and perceived benefits can be effective in promoting preventive behaviors from dental caries. Students' self-efficacy is enhanced through education and leads to increased health. According to the findings, this study is recommended to use the model along with the following suggestions for oral health care: 1) Repeat the training with new content, with the interests and needs of the students and the use of new educational methods.
2) Revision of the content of continuing education courses and increasing the practical level of these courses and the ability to implement them.
3) Provide low-cost student dental services.
Health Education and Health Promotion Summer 2019, Volume 7, Issue 3 4) Implement oral and dental care programs twice a year, at universities for students. | 2019-06-26T15:04:21.479Z | 2019-07-10T00:00:00.000 | {
"year": 2019,
"sha1": "4f748d70d9de0ef9ef81e6246a7ef4b81572f324",
"oa_license": "CCBYNC",
"oa_url": "http://hehp.modares.ac.ir/files/hehp/user_files_749497/zahedifar123-A-10-41649-6-3278456.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "f20506b607a3a0e5e4032880c4700a855ed498ff",
"s2fieldsofstudy": [
"Psychology"
],
"extfieldsofstudy": [
"Psychology"
]
} |
253360852 | pes2o/s2orc | v3-fos-license | Design and Evaluate the Efficiency of Ethiopic Local Integrating System in Open-Source Database
The software includes models of internationalization over the entire world. However, the localization work which has been conducted on Amharic locale development has limitations such as managing Ethiopian calendar and numbers. This has a considerable impact in managing the Amharic locale data in a database as well as in other software systems. Even though there are some researches reporting Amharic locale data is managed by the existing Database Management Systems/DBMS/, the data management way is noticeably exposed to error. However, these kinds of errors are handled in this research by introducing new and appropriate exception handling mechanisms. In this regard, it was not found in the literature review to overcome these problems. In this work, an Amharic Locale Extension module named am_ET was developed to integrate into an open source database environment. The most suitable open-source DBMS PostgreSQL and the C programming language were used. User acceptance testing was conducted to evaluate Amharic locale extension functionalities based on the ISO 9241–11 usability testing attributes, such as user satisfaction, effectiveness and efficiency. To validate it, 35 voluntary potential users were randomly selected to participate in the usability test of the developed system. A descriptive statistical analysis and percentage method had been used to test the collected data. Accordingly, the usability test result showed that a positive opinion for the users and developers to manage Amharic local data as it easily manages data formatted to suit Ethiopian locales system such as currency, calendar, and date/time, reduced costs for developers, and better understanding of product functionalities. Consequently, the developed Amharic locale extension solved problems of data management with Ethiopian locales. The paper found that integrated extension module is helpful in localization and design application areas for all entities that are using Amharic locale in open source databases.
I. INTRODUCTION
A database to be truly purposeful, it should not only store huge amounts of records, but also be easily available, accessible and reliable. In addition, new information and modifications would also be reliable, accessible, store large amount of data, and fairly easy to input. An efficient database system should have a program that accomplishes the queries The associate editor coordinating the review of this manuscript and approving it for publication was Qichun Zhang . and data stored on the system must be integrated [1]. Due to the tremendously increasing volumes of the local data, which are a by-product of modern life, there is a growing need of an appropriate database that require efficient methods for their management and retrieval. Storing and accessing data in their original representation is very important, since converted data sometimes may result to loss of information [2]. Open source databases become by their own to stand as solutions for every data management need in the enterprise. VOLUME 10, 2022 This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ The process of adapting a product to new linguistic and cultural specifics of content to a given geographical aspect (local) is localization. The Ethiopic script (i.e here Amharic script/ Ge'ez) used as the writing method of Semitic language spoken in Ethiopia and it includes more than 400 characters. Although the language (Ge'ez) has only been used in dialect speech (it now serves as a liturgical purpose only), the inscription is still broadly used for the writing system of both the Ethiopian and Eritrean Semitic languages such as Tigré and Amharic [3], [4]. The ancient calendar of Ethiopia has differences from the Gregorian calendar in terms of day, month and year. The Ethiopic calendar has 13 months, of which 12 months have 30 days and the 13th month comes at the end of the year with 5 or 6 days depending on whether the year is a leap year or not [1], [5]. As a result, a huge amount of data in Ethiopic script uses Ethiopian calendar which needs to be stored in databases according to Ethiopian locale system.
Generally, programs are culturally neutral. They have an interaction between many components which are arranged in specific patterns and have performed many tasks. When the linguistic and cultural specificity are added to these programs many more people are comfortable with their native language. Most of the existing environmental systems including the models for internationalization are used to allow software developers to develop easily that support different locales [6]. Even though, software's including models of internationalization, the localization work conducted on Amharic locale development has limitations. This has a considerable impact on managing the Amharic locale data in database as well as in other software system. Calendar and collation sequences are examples of Amharic locales which did not manage in the current software environment. A calendar as described in [7] is a human abstraction of the physical timeline. It can measure periods and times using any well-defined time unit. A typical calendar property is the language in which time values are expressed. For example, in the Ethiopian calendar Amharic and Geez are used to express periods of time, but in the Gregorian calendar English is used to express periods of time in the United States. In almost all countries, the usage and practice of a calendar are influenced by the national, cultural, legal, and even business orientation of the user. Most of the existing operating system uses the Gregorian calendar system, foreign cultural systems such as numbers, currencies, calendars, date & time, etc, and managing of such data in the databases. However, most of the data in Ethiopia are available with Ethiopian calendar system and users in Ethiopia are not convenient with the Gregorian calendar since they use Ethiopian calendar in their day-to-day activities. This usually creates inconvenience when the user wants to have a reference to Ethiopic date as they had Gregorian calendar and date & time, foreign currency and numbers at their operating system. This study focused on the investigation of locale integration in database environment, identifying techniques and methods that have been applied in the area of database localization. Moreover, it has been a duty in designing an appropriate Amharic locale extension module to integrate into PostgreSQL; and evaluating its efficiency, advantages and reliability for the requirement of Amharic integration in the database system. The currently used database by the selected organization (Private Organizations' Employees Social Security Agency -POESSA) is known as Employee Social Security Management System (ESSMS) developed using a DBMS of the SQL server. We have selected three tables (Employee, Service, and Family) of the database for the experimental results by introducing new data types such as ethdate and ethmoney on the existing DBMS to evaluate the proposed solution by comparing with the existing DBMS and the created extension. After the experiment the currency, the calendar, the number entered to the system is retrieved and managed without error(s) and hence the result shows that the proposed/developed Amharic locale extension has a positive impact to the study of localization The aim of this research work is to develop Amharic locale extension module to integrate in open-source database, and evaluating its efficiency. Specifically, it is aimed to explore related works in locale integration in database environment; identify different techniques and methods that have been used in the area of database localization; identify the DBMS components that are linked to locale integration; identify the requirement of Amharic locale integration in database systems; design an appropriate Amharic locale integrating system using a selected open source database; and test the prototype and evaluate the system's efficiency.
Accordingly, the following questions have been obtained by the researchers to be replied at the end of the research work: • What suggestions and opinions by the users (i.e the end-users and database admins) have been entertained on the Module's Qualification of using Amharic Locale Extension (am_ET)?
• How Satisfied are the users (i.e the end-users and database admins) about the use of am_ET module of the Amharic Local Extension?
• What opinion of the users (i.e the end-users and database admins) about the Usefulness of the Amharic Locale Extension (am_ET) module?
• What opinions and suggestions are given by the users (i.e the end-users and database admins) about the Quality Attributes of Amharic Locale Extension (am_ET) module?
A. THE AMHARIC LANGUAGE AND ITS WRITING SYSTEM
It's believed that Amharic, Ethiopian official language, has more than 25 million speakers as a mother tongue language and as a second language [8]. A set of 38 phonetics, seven vowels and 31 consonants, marks up the full list of sounds for the Amharic language [9]. Consonants are commonly categorized as stops, fricatives, nasals, liquids and semi-vowels. [10]. In Amharic language, all consonants except /h/ and /ax/ might come about in either a geminated or a nongeminated method. Germination in Amharic is one of the most unique features of the rhythm of the speech, and also brings a very substantial semantic and syntactic functional weight [11]. When Amharic sentence is observed from grammatical construction point of view it is a grouping of noun phrase and verb phrase. The noun phrase emanates first and then the verb phrase. Based on the number of expressions or phrases they comprise, sentences in Amharic are categorized under two basic categories simple and complex sentence. The simple sentence only comprises a single verb while the complex sentence is built by combining more than one noun phrases and verb phrases.
Therefore, due to such variety and complex characteristics of the Amharic culture, vowels and calendars, development and integration of the Amharic Locale into an open source will make the data management routine easier.
B. LOCALIZING SCRIPTS AND CALENDARS
In Amharic script system, numbers can be characterized using either the symbols of Arabic number system or the symbols of the Ethiopic number system or using words and symbols of the Arabic number system.
There are cultural beliefs around the world that lead people to use their calendar which is entirely different or the same to some extent as the western Gregorian calendar. However, they put up with the rule of 12 months a year. An Ethiopian year is contained of 13 months, and is seven years late than the Gregorian calendar. In fact, Ethiopians celebrated the new second millennium on September 11, 2007; this is because the Ethiopians continued with the same calendar that the Roman church amended in 525 AD. While the first 12 months have 30 days, the last month, called Pagume, has five or six days depending on a leap year. Ethiopia, being one of the rare countries in the world, quite uses its own calendar system. The country celebrates some vital holidays on days that are unlike the rest of the world [12].
Lielet [13] localized some contents of the open source web development tool (specifically Joomla) into Amharic to create professional and nonprofessional users who take part in website development has been conducted who want to develop a website comfortably. The Amharic translations are issued in the front and back end interfaces. A virtual keyboard is designed To make Amharic text entry easier and make the web content manageable ny the user, a virtual keyboard is designed and implemented. However, this work also does not take into consideration the different locales of Ethiopia like calendar, date/ time format, sorting/ collation order, regular expression etc.
In Sarfraz et al. [14] study, the authors presented the process used to localize a set of open-source software applications that developed in English language for Urdu speak-ers in Pakistan. The software applications were selected are web browser, an email client, instant messaging client, word processor, graphics editor and webpage development tool. The paper presents the criterion for localizing software was that the application must be internationalized. Since, internationalized development facilitates an efficient and convenient localization process by separating the resource file that need to be customized (localized) for a target locale.
Bader [15] presented the analysis and the technique of website localization from the source language (English) to the target language (Arabic). The key points considered to localize a website to the target language (Arabic) were resource file and the database table. The author localized and put a separate resource file for the target language. Since the software localization process must include database design adjustments in order to enable the target language supported by the localized software. The author uses a commonly used method for developing a multilingual database is to create two tables.
According to Oracle Database Globalization Support Guide [16], Oracle support globalization as National Language Support (NLS) features to decide on a national or official language and store data in a definite character set and executed with Oracle NLS Runtime Library. The NLS Runtime Library provides a collection of language-independent functions that perform appropriate text, character processing and linguistic-convention operations.
According to Axmark and Widenius [17], to store and retrieve data, database tables were used in different languages and character sets differently. As such, MySQL included model of internationalization and localization for adapting different locale system for efficient data management. MySQL support different character set for SQL statement, languages for error messages, locale time at different levels, for example, the server, database, table, and column level.
The literature reviews indicated and suggested that Amharic scripts and calendars can be localized based on different localization methods.
C. INTEGRATING THE ETHIOPIC LOCALE IN TO OPEN SOURCE DATABASES
According to Elmasri [18], a database is an organized and structured collection of records and data that is stored in a computer system. A very high effective and efficient database system should have a program that manages and executes the queries, data and information stored on the system should be incorporated and integrated.
Due to tremendously increasing volumes of local data, which are a by-product of modern life, there is a growing need of an appropriate database that require efficient methods for their management and retrieval. Storing and accessing data in their original representation is very important, since converted data sometimes may result to loss of information [2]. Open source databases have come into their own to stand as solutions for every data management need in the enterprise. VOLUME 10, 2022 According to reports of DB-Engines [19], an online initiative to collect and put-into information on database management systems, popularity of open source databases are growing faster than commercial databases. The process of adapting a product to new linguistic and cultural specifics of content to a given geographical aspect (locale) is localization. The Ethiopic script (Ge'ez) is used as the writing system of Semitic language broadly spoken in Ethiopia and it includes more than 400 characters. Although the language (Ge'ez) only to be used in vernacular speech as it is now serves a liturgical function only, the script is still mostly used for characterizing the Ethiopian and Eritrean Semitic languages such as Amharic, Tigré and Tigrinya [3], [4]. As a result, a huge amount of data in Ethiopic script use Ethiopian calendar in which it needs to be stored in databases according to Ethiopian locale system. Therefore, managing the sorting and searching of data in the database system is the main important operations in database system and hence developing a module/extension that incorporates the Amharic Locale data type to integrate into an open source database is useful.
A. RESEARCH DESIGN
The study was conducted from January -November, 2021, at Private Organizations' Employee Social Security Agency (POESSA), Ethiopia, using the model-driven and experimental method of research approach. The survey was carried out for two types of users (ordinary users that always work for the system and database experts in the organization). An Amharic Locale Extension module (am_ET) that has been developed by the researchers was used in the research work.
B. PARTICIPANTS
Randomly selected and voluntary technically skilled personnel in the area of employees' social security of the existing database were identified. Moreover, 25 database users having good skills in basic computer systems and 10 database admins from the selected organization (POSSA) were invited for the trial of the developed Amharic Locale Extension module integrated into the open-source database known by PostgreSQL server to prove and verify the acceptability of the extension/module as a localization platform. According to Kozmirchuk et al. [20], the PostgreSQL server is preferable as it implements the most recent ISO and ANSI standards and gives best services of backup and recovery, and also gives local storage (temporary) and storage in the cloud. Fraenkel et al. [21] suggested that ''there are no rules for determining the size of groups (p. 267)'' in an experimental research. Both types of users' ages ranged between 25 and 37 and their experience in the organization ranged from three to nine years. 25% and 75% of the users were female and male, respectively.
C. PROTOTYPE AND ALGORITHMS
The algorithm used to design the system is depicted as follows: Architectural support is provided for the addition and modification of the database management system components (especially in PostgreSQL) that impose a particular interpretation on locale values. As shown in Figure 1, the architecture to extend PostgreSQL DBMS is composed of two major components: PostgreSQL server and extension module. To support locale specification at all levels of a database (database catalog, schema, table, record and attribute) we proposed new data types. The data types implemented as the storage format for Amharic locale data with rich semantics that capture the in-built and familiar concepts of Amharic locale and associated with DBMS components for making the process of the database system can be fully supported Amharic locale convention. We propose wider adoption of locale convention by developing Amharic locale extension module. The availability of such module in Amharic locale will help to make execution of locale operators possible.
Algorithm for Ethiopian date internal form
The PostgreSQL Server component is responsible for managing the database files, accepting connections to the database from client applications, and performing actions on the database on behalf of the clients. Since no Amharic locale conventions is specified in SQL standards, each DBMS has taken their own approach for handling such queries based on Romanization of the Amharic script. The query processing system invokes the Ethiopic locale file during query parsing to perform database tasks with Ethiopic cultural convention, type checking and binding of Amharic defined functions. We developed Amharic extension module that can be loaded into PostgreSQL server as needed. Once we load it into the database, this extension can function as built-in features. This extension is created in the form of a dynamically loadable object module. This extensions module consists of a dynamically loadable library (shared object), configuration (control) file and SQL files (script). Packaging all these files as an Amharic locale extension gives the benefit that PostgreSQL will recognize them as a single unit and it is helpful to simplify database management. There are two phases to add this extension module to the PostgreSQL server. First, we create the extension in C language and then compiling it into a dynamic object module (.dll). Next, load the extension module to PostgreSQL server by CREATE EXTENSION command.
We identified locales which universally explain culturaldependent aspects of a data. More specifically, there are properties that define the internal mechanisms of how data should be managed in a database. After the locales are identified we need a real implementation to support Amharic locale in database system. We have used a C code as real implementation of Amharic locale extension. We used C Programing language as it is preferable for bilingual model and has nothing to do with any particular hardware or system [22]. As we mentioned earlier, to support Amharic locale in PostgreSQL we proposed new data types. New data type creation requires implementing external form and the internal form of a value. These functions (internal and external forms) determine how the type appears for input by the user, output to the user and how the type is organized in memory.
The control file specifies properties/metadata about the Amharic locale extension, which tells the basics about extension to PostgreSQL to register in its system catalog, and must be placed in the installation's SHAREDIR/extension directory. The file is am_ET.control, Parameters inside this file follows the same convention as postgresql.conf file (i.e. parameter_name = parameter_value).
SQL script is mapping file, which we used to map all the SQL function with the corresponding C function of the extension (am_ET). It contains SQL commands about functions, types and operators of the Amharic locale. It also includes the required DDL and DML operations of extension. The file is am_ET-1.0.sql.
Once the extension module is loaded into PostgreSQL server, database operations can be performed on the Ethiopian locale data according to Amharic locale convention. In order to manage Amharic locale data, the database user must write SQL statement on a query tool and execute it. By using Amharic extension module, DDL and DML operations can be performed on the database as shown in figure 3.
Adding values to the table can be performed by inserting insert command into the statement. Figure 4 shows how the database users add values for the columns of the table.
D. DATA COLLECTION TOOLS
User acceptance testing was conducted to evaluate Amharic locale extension functionalities which was done based on the ISO 9241-11 usability testing characteristics, such as efficiency, effectiveness (referring the completeness and accuracy) and user satisfaction and the enquiry of the usability methods of any developed system must be validated alongside potential users [23]. This paper proposed different criteria for evaluating the quality of a developed system and finally adapted suitable criteria from Cavus [24] that best fits the developed module to evaluate its efficiency, usability, and reliability. These assessment methods and techniques are presented with respect to each development phase of the extension module. Then end-users' and database admins' feeling was taken. In the final of all these assessments and evaluations, the authors enhanced the extension module consequently based on assessment and evaluation results. In this VOLUME 10, 2022 regard, the study engaged on quantitative data, which were collected over the questionnaire named ''Database Admins' and End-users' Feelings about the Developed and Integrated Amharic Locale Extension module and Integrated into the Open Source database Acceptance'' that was compiled and developed by the researchers. But some of the questions and items are taken on and adopted from Cavus [24]. The content, quality, legitimacy and validity of the organized questionnaire were checked by 4 professionals, comprising 2 database specialists and 2 software testing professionals and were approved that it is possible to use in the research study and the consistency and reliability of the data collection tools were determined to be 0.89 using Cronbach's alpha, which is much advanced than the absolute value of 0.72 [25] indicating a higher degree of validity. The prepared questionnaire comprises of four parts. The first part consists of seven questions and it was used to asses and evaluates the integrated extension module's qualification. Five-point Likert scale (having values of strongly agree-SA (5), agree-A (4), neutral-LA (3), disagree-DA (2) and strongly disagree-SD (1)) for the responses of the items and questions were used as it is easy to understand and interpret. The second part of the designed questionnaire was used to test the module usability of the developed extension module and consists of 7 items. Usability is designated as the practice quality of a method or system from the perception of users (end-users/database admins). The third part having seven questions was used to prove the usefulness of the modules based on the attitudes of the end users and database admins. The questionnaire consists again the 5-point Likert scale type questions which strongly agree was indicating score 5 and strongly disagree was representing score 1. Strongly agree interpreted a positive attitude of the participants to the am_ET Amharic Locale Extension module. The last part of the questionnaire consists of 6 items about the satisfaction of end-users and database admins with respect to using the am_ET Amharic Locale Extension module.
E. DATA ANALYSIS
In this research work, statistical method of analysis such as descriptive analysis (averaging, standard deviation and mean), was conducted to interpret the data obtained by the questionnaire. The statistical package that we have been used for the analysis process was SPSS.
F. PROCEDURE
After developed the extension module and integrated into the open source database, the system is hosted in the local server of the selected organization (POSSA). By introducing the am-ET Amharic Locale Extension module and all the information into the selected and volunteer participants regarding the experimental study, each participant was using their own laptops or desktop computers in the computer training room of POSSA, just given permission for this purpose. In order to make sure that the extension module is working correctly according to Amharic locale convention, unit and system testing were applied. The first testing is testing on the calendar. The Ethiopian calendar is based on the Ethiopian Orthodox church computational practices. The church uses its own calculation to create a calendar for the country. The calendar is checked with other developed Ethiopian calendar system. In the evaluation, users were participated in the testing. A detailed and completed description about the extension has been given to those who participate in the experimental assessment before conducting the evaluation process, as it helps them in having an insight to the developed extension. The participants were asked to use the extension module on their own for four weeks (four hours a week) with their laptops or desktops in the training room just after the demonstration of the developed extension. Finally, participants were provided with respective questionnaire. To select representative number of participants, random sampling technique was applied. The choice of participants in the evaluation process was taken by considering their knowledge of the basics of computer skills and positions in the organization they have. Finally, each dimensions of the responses on the questionnaire were calculated, measured and analyzed.
A. MODULE'S QUALIFICATION OF USING AMHARIC LOCALE EXTENSION (am_ET)
As described in Table 1, the means scored in the opinions and the standard deviation of the participants indicated that the Amharic Locale Extension (am_ET) is efficient and useful. The means of answers to all questions to the questionnaires were above 4.20 and this result showed that the database admins and end-users had positive opinion in related to the qualification of the Amharic Locale Extension module. For example, as shown in Table 1, question 6, ''How do you rate the extension can be used by any database user with basic knowledge of using the existing DBMS? (Mean = 4.8, SD = 0.35'' and question 7, ''How do you rate the extension matches your expectation to manage (store, query and present) Amharic locale (calendar, currency, numerals phone number etc.)? (Mean = 4.9, SD = 0.39'') indicate that the database admins' and end-users' opinion of the developed extension module is useful and efficient.
B. SATISFACTION ABOUT THE USE OF am_ET MODULE OF THE AMHARIC LOCAL EXTENSION
The satisfaction of the users about the Amharic Local Extension am_ET module was measured by obtaining the following characteristics.
i Unicode Conversion: Converting input Unicode strings to Amharic locale data is easily implemented and practical with the am_ET module of the Amharic Local Extension. The users' satisfaction about the use of am_ET Amharic Locale extension is shown in percentage in Figure 5. According to Figure 5, the end-users and database admins were interested and satisfied in using the developed am_ET Amharic Locale Extension module. It was observed that the minimum opinion in percentage for end users had been 82.30% and for database admins had been 79.50% for the item ''Unicode Conversion: Converting input Unicode strings to Amharic locale data is easily implemented and practical with the am_ET module of the Amharic Local Extension''. It was also observed that the maximum opinion in percentage for end users had been 98.30% and for the database admins had been 93.30% for the item ''Localization Needs: I can say that the am_ET module of the Amharic Local Extension satisfied all of Amharic Locale users' need.'' In almost all items, it was observed that end-users are slightly more satisfied than the database admins. This slightly difference might come from repeatedly using of the old system and the newly integrated module as it has been known that it is their day to day duties of the end-users.
C. THE USEFULNESS OF THE AMHARIC LOCALE EXTENSION (am_ET) MODULE
In order to find the opinions and suggestions of participants (end-users and database admins) on the usefulness of the developed and integrated Amharic Locale extension module, the survey consisted of some questions and obtained evaluation results as shown in Table 2. According to Table 2, when the database administrators inspected the developed am_ET Amharic Locale Extension module from the viewpoint of a database expert, they engrossed their positive opinions on the insert, update and drop the required information and the highest score was given to the 3rd, 4th and 6th questions, which were ''How do you rate the usefulness of the Insert locale data module? (M = This positive impact goes to the end-users so that updating, inserting and dropping data to the system is very much useful. As it can be seen from the Table 2, for almost all items both end-users and database admins scores highly in their mean value and therefore the users were pleased to use the am_ET Amharic Locale Extension module as a localization platform.
D. QUALITY ATTRIBUTES OF AMHARIC LOCALE EXTENSION (am_ET) MODULE
In order to get and obtain the opinions and suggestions of participants (end-users and database admins) on the quality attributes of the developed and integrated Amharic Locale extension module, the survey contained of the following items.
• The usability of the Amharic Locale Extension (am_ET) module is efficient.
• The Amharic Locale Extension (am_ET) module is portable. According to one of software quality of the International Organization for Standardization (ISO), the ISO/IEC 9126-1 [26] standard divides software quality characteristics into six guidelines wherever they are used for the construction of any kind of portable-based devices software applications and utilities. These guidelines are the competency and efficiency which is about the performance of a system; functionality telling about the expected behavior of a system; usability showing about how easy it is to use a system; reliability referring to the robustness of a system to make sure it is where the operation can be trusted under all circumstances; maintainability refers to keeping a system running without difficulties and upgrade it where it is important and necessary; and the portability of a system is about the capability to use a system in different backgrounds and environments to what it is designed for. During the designing, implementing and testing processes of a system, these quality attributes and characteristics can be measured and evaluated. As a result and its consequence, the users' (database admins and end-users) perception of system quality can be resolute by determining the quality of the attributes in use.
As given in Figure 6, database admins and end-users were very satisfied about the quality attributes and characteristics of the system. The percentage results (Strongly Agree) show that the users were happy with the overall configuration of the extension module. As the gained values were between 84.50% and 95.40% for end-users, and between 79.80% and 91.50% for the database admins, the overall results were encouraging and this shows that the am_ET Amharic Locale Extension module can be used as a localization platform of Amharic texts, currencies, calendars, and numbers in POSSA and hence it can be used for any Ethiopian organizations.
V. DISCUSSION
Since the required operations related with Ethiopic numerals, Ethiopian currency and Ethiopian calendar are defined in the extension module we can perform operation as needed. To store phone number by using this extension module we can use varchar /text /char data type with fixed length and use function to_phone() to clear the difference between phone number with other text value. To manage geez number by using this extension module, integer data type and function to_number() can be used because the existing systems assume geez numbers as different character [27].
Managing Amharic locale data in the existing DBMS with western locale convention leads to miss information. For example, when inserting a value for a column (monthly salary) the amount is stored in $. In the existing DBMS, it is not possible to perform the required operations to manipulate Amharic locale data. However, by using the developed extension we can do different operations on Amharic locale data according to the locale convention. Even though Amharic locale data is managed [28] by the existing DBMS, the data management way is noticeably exposed to error. i.e. any invalid value for the column salary, date of birth, date of hire, Phone number and registration date can be inserted. However, these kinds of errors are handled by appropriate exception handling mechanisms on the developed extension module.
From the encouraging and promising estimations gained in the study, it is shown that end-user and database administrators of POSSA do not have any trouble in retrieving the required information and local data on the designed and integrated extension module whenever they need it without more effort; and its validity is approved based on [23]. In order to get the opinions and perceptions of participants on the quality attributes of the designed and developed am_ET extension, the survey considers for its functionality, reliability, usability, efficiency and maintainability as it could be adopted from Cavus [24]. The assessment of the localization applications is specifically relevant; however, few researchers have addressed these measurement criteria using laboratory experimental studies with database administrators and endusers in real areas. According to Kumar [29] a Heuristics evaluation has been proposed to test a developed system and therefore, the most significant distinctive of the paper is that the developed Amharic Locale Extension module, am_ET, has been tested and verified by real users, namely database admins and end-users. There are many researchers that have used the 5-point Likert scale format questionnaire for the assessment of the efficiency of any developed applications designed and implemented in their studies [30]. Pensabe-Rodrigueza et al. [31] used a usability assessment to define the efficiency of a developed system and applications. Almost all of these studies have proved that numerous methods and techniques can be used to assess and evaluate the efficiency of developed applications and hence they are used for localization systems.
VI. CONCLUSION AND FUTURE WORKS
While most of the technologies are often similar across customization of different software from source language to the target locales but the cultural and linguistic factors (locale development) are different. There is a need to localize DBMS that support Ethiopian locales, that could attracts users based on common and public language or shared ethnic, racial, gender, or nationality-based identities for integrating the cultural beliefs and attributes. The developed extension was tested and the result showed that its usefulness, reliability and quality has been earned a positive opinion. Also, the constructive assessment results pointed out that the database admins and end-users were gratified with the am_ET Amharic Locale Extension module's tasks and helpfulness. Both end-users and database admins appeared to have encouraged opinions about the developed Amharic Locale Extension module. Moreover, the investigational results of this research affirmed that, since the means of the end-users and database admins opinions were very high, the am_ET Amharic Locale Extension module was well designed and implemented technically.
The result of this research work would be important the experts of the indicated and selected organization for their day to day works as well as for database developers and users of database in local language. The advantages of the proposed work would enable to easily manage data formatted to suit Amharic locales system (such as collating sequence, calendar and date/time etc. . . ); to reduce costs for developers since they can focus only on handling Amharic locale system instead of dealing with conversions after developing the whole DBMS in English locale; and to local language development in the digital world and to better understanding of product functionalities of DBMS; and used as an input for other works. It is anticipated that the integrated extension module would be helpful to anyone who may have an interest in localization and design applications, and all organizations that used Amharic locales in open source databases. Apart from government efforts, individual developers also need to be motivated to take part in the localized data management development processes. In this study, we have made the first step towards the vital or ultimate objective of achieving and attaining complete multi-locale functionality in database systems. | 2022-11-06T16:19:28.456Z | 2022-01-01T00:00:00.000 | {
"year": 2022,
"sha1": "23c6c4f0706f744a1311fc7078b5b0d3496ec4a5",
"oa_license": "CCBY",
"oa_url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09933433.pdf",
"oa_status": "GOLD",
"pdf_src": "IEEE",
"pdf_hash": "bcf4f48fc800cc4f2a42d397331a049251bb77b4",
"s2fieldsofstudy": [],
"extfieldsofstudy": []
} |
56534023 | pes2o/s2orc | v3-fos-license | WEALTH TRANSFER BETWEEN OWNERS AND LENDERS OF EUROPEAN STOCK CORPORATIONS
Wealth transfer effects between company owners and lenders based on changes in a firm’s credit rating have primarily been examined a) for one type of security; b) on U.S. capital markets; and c) by applying standard event study methods. In contrast to these studies, we compared the price effects of stocks and corporate bonds of the same issuer using robust event study methods. Our findings indicated that downgrades cause negative price effects for owners and lenders of European firms, whereas upgrades only induced positive price effects for lenders. However, we did not find evidence for the existence of wealth transfer effects between owners and lenders on European capital
INTRODUCTION
Having been blamed for incorrect risk assessments and a low degree of transparency, the reputation of credit rating companies (CRCs) has suffered repeatedly in the past -most obviously in the wake of the dotcom and the subprime crises, which linked them with corporate scandals like Enron, WorldCom, and Lehman Brothers (see, e.g., Partnoy, 2006;Darbellay, 2013).Repeated attempts to establish a European CRC as a counterpart to the "Big Three" (S&P's, Moody's, Fitch) were (and still are) also rooted in the question of the appropriate role of U.S. CRCs on European capital markets.Prior studies indicated that the "Big Three" tended to lack behind local CRCs on local markets outside the U.S. (e.g., Steiner and Heinke, 2001;Mollemans, 2004).Despite the fact that Europe is still one of the most important capital markets worldwide, the examination of the role of U.S. CRCs for this area has been insufficient so far.Consequently, we investigated the role of credit ratings announced by the "Big Three" for owners and lenders of European firms, adding further insights to the majority of studies that focus on U.S. markets.In this context, corporate owners and lenders were represented by stockholders and bondholders.
In addition, we found evidence for the absence of wealth transfer effects for European corporate owners and lenders by testing the wealth redistribution hypothesis (WRH) for European security markets.Except for a small number of previous studies (e.g.Hand, Holthausen and Leftwich, 1992; Kliger and Sarig, 2000), the WRH has been typically tested for one particular type of security.This approach seemed to be incomplete, however, since positive (negative) stock price effects at the time of announcement of downgrades (upgrades) do not automatically imply a reduction (increase) of corporate bond prices.In light of this research gap, we assembled a unique sample of stocks and corporate bonds issued by the same European issuer to obtain more valid results.
We also applied several event study methodologies in order to ensure robust results.To the best of our knowledge, our study is the first which simultaneously used three models for calculating abnormal returns, several variations in estimating expected stock and bond returns, and four tests to examine the statistical significance of the abnormal returns.We accepted abnormal returns as significant only if they were confirmed by each of the four tests on at least a 5% level in order to increase the statistical validity of the results.
THEORETICAL BACKGROUND
Investors consider a credit rating to be valuable for their decision-making if it provides new information.The information content of credit ratings was discussed against the backdrop of information efficiency by Fama (1970).Based on this theory, a market is described as semi-strong in terms of information efficiency if prices fully reflect all publicly available information.In a rating context, this approach implies that changes in security prices can only occur directly at the announcement date of a rating change, because thereafter, the rating itself is considered publicly available information.However, already Wakeman (1981) argued that CRCs only processed and summarized such information.Although they could lower information costs, they were unable to provide genuinely new data to the market, especially when their ratings were unsolicited.
In contrast, CRCs claimed to have access to private information in the case of solicited ratings, indicating that announced revisions of existing credit ratings could be perceived by investors to be new information.This argumentation was summarized under the information content hypothesis (ICH) according to Katz (1974), who suggested that security prices changed solely upon the announcement of rating revisions and did not depend on the direction of the corresponding rating change.If market prices already changed prior to the rating announcement, the rating's information content would decrease on the announcement date, implying that investors already anticipated the change in a rated firm's credit risk.If investors anticipated a rating change, CRCs would lag the market instead of leading it.This may be also the case if CRCs act outside their home markets (e.g., Steiner and Heinke, 2001).Covitz and Harrison (2003) summarized the existence of information and anticipation effects by identifying a fundamental tradeoff concerning downgrades categorized as solicited ratings: On the one hand, CRCs tended to act in favor of investors to maintain or increase their market reputation by publishing a rating change as soon as possible.On the other hand, they were incentivized to act in favor of a corporate issuer by retaining negative information concerning credit risk to maintain their future contractual relationship with the rated firm.Zaima and McCarthy (1988) extended this approach by linking the direction of a rating change to the positivity or negativity of the corresponding price reaction.Thus, a downgrade was considered to induce security prices to react negatively upon the announcement, while an upgrade would cause prices to react positively.This reasoning was based on the assumption that owners and lenders alike perceived downgrades to be bad news, and upgrades to be good news.However, a downgrade (upgrade) could also possess information content if its announcement induced a positive (negative) price reaction.
The question concerning the positivity or negativity of price reactions is summarized under the wealth redistribution hypothesis, which was initially postulated by Holthausen and Leftwich (1986) and further developed by Zaima and McCarthy (1988). 1his theory states that wealth is transferred from investors who perceive the rating change more negatively to those with a more positive perception.Prior studies (see Goh and Ederington 1993; Chung, Frost and Kim, 2012; Imbierowicz and Wahrenburg, 2013) extended this argumentation by identifying the particular reason of the rating change as the primary driver of wealth transfer effects.If the announced rating change was primarily driven by a change in the firm's operating performance, owners' and lenders' evaluation of their respective risk-return-position, and, consequently, stock prices and bond prices would move into the same direction.In contrast, a leveragebased rating change could induce inverse price effects.If the announced downgrade resulted from increasing levels of financial leverage (or an upgrade from decreasing leverage), owners could perceive the announcement positively (or negatively, for upgrades) due to higher expected returns, which would result from investments of the additional debt amount.As lenders typically did not receive any additional compensation in terms of risk premiums after the firm's debt increased, they tended to perceive a downgrade to be bad news that induces bond prices to drop.
Previous studies testing the WRH investigated a) issue ratings and b) stock prices (e.g., Zaima and McCarthy, 1988;Goh and Ederington, 1993;Taib et al., 2009;Imbierowicz and Wahrenburg, 2013).However, Hull, Predescu and White (2004) came to the result that investors other than bondholders used credit ratings more frequently as indicators for the firm's overall creditworthiness rather than for the credit risk of a specific security issuance.Hence and in particular for corporate owners, issue ratings would be less relevant, as their residual risk-return position depended on the survival and profitability of the firm as a whole.
On the other hand, issuer ratings seem to be also relevant for bondholders, since a bond's credit risk is derived from the overall creditworthiness of the firm, although issuer ratings may not contain all the issuespecific information (e.g.collateral, maturity).Based hereupon, issuer ratings seem to be more appropriate for a rating-based comparison of stocks and corporate bonds.Therefore and because wealth transfer typically occurs in an intra-company manner, investigating wealth transfer effects requires the examination of both, stocks and corporate bonds of the same issuer (Imbierowicz and Wahrenburg, 2013).In this context, the following hypotheses were tested within the framework of the present event study: H1a: Announcements of negative rating changes do not induce significant stock returns for corporate owners.
H1b: Announcements of positive rating changes do not induce significant stock returns for corporate owners.
H2a: Announcements of negative rating changes do not induce significant bond returns for corporate lenders.
H2b: Announcements of positive rating changes do not induce significant bond returns for corporate lenders.
We examined changes in firms' issuer ratings announced by one of the three major CRCs in the period from 2000 to 2010.Our study contained a sample of European firms with actively traded stocks and corporate bonds.The majority of rating changes occurring during the research period were based on changes with respect to a firm's financial leverage.In contrast to previous studies, we analyzed both, stocks and corporate bonds, in order to investigate wealth transfer effects of announced rating changes.In addition to the univariate analysis, we also employed a multivariate approach containing several control variables.
The remainder of the paper is structured as follows: Section 3 discusses related literature.Section 4 describes the data and explains the descriptive statistics.Section 5 details the empirical method applied, while Section 6 presents and discusses the results.Finally, section 7 summarizes and concludes the study.In addition, the Appendix contains the results of our comparing price effects between stocks and corporate bonds to examine the intensity of those price effects.
RELATED LITERATURE
The majority of prior research investigating the ratingbased wealth redistribution between owners and lenders analyzed a) U.S. data and b) stocks (Zaima and McCarthy, 1988;Goh and Ederington, 1993;Gropp and Richards, 2001; Abad-Romero and Robles-Fernández, 2006; Taib et al., 2009;Imbierowicz and Wahrenburg, 2013).These studies commonly rejected the WRH and thus, detected neutral or negative price effects when a negative rating revision was announced.As an exception, Imbierowicz and Wahrenburg (2013) calculated positive stock returns on the announcement of downgrades which were, however, only significant on a 10%-level.Abad-Romero and Robles-Fernández (2006) found evidence for wealth transfer effects in the case of upgrades by identifying significant negative returns.However, the authors emphasized a possible bias of the price effect due to a small sample size.In solely focusing on stocks, the approach of former studies seemed to be expandable, since a positive or negative stock price reaction due to a downgrade or upgrade did not automatically imply an opposite price effect for corporate bonds.Zaima and McCarthy (1988) and Gropp and Richards (2001) investigated stocks and corporate bonds simultaneously in order to test the WRH, making their studies the most comparable to our approach.Zaima and McCarthy (1988) detected significant negative stock returns prior to announced downgrades, but did not report any price effects at the time of announcement of upgrades.In contrast, Gropp and Richards (2001), who also examined European security markets, found significant positive stock returns for upgrades without any price effects for downgrades.Both studies concluded that rating announcements do not have any price impact on bond markets which might be due to the higher liquidity of stock markets compared to bond markets.Imbierowicz and Wahrenburg (2013) provided the only study that clearly gave evidence of the existence of wealth transfer effects.However, the authors used CDS spreads as a substitute for bond prices, thus limiting the comparability of their study with our approach.
Overall, previous studies on rating-based wealth transfer effects between owners and lenders of a firm provide mixed results.Imbierowicz and Wahrenburg (2013) suggested that a possible reason for this heterogeneity is due to the sample composition of previous studies.They argued that former studies could be biased because of the use of samples which simultaneously included positive, neutral, and negative influences on credit quality.Hence, these studies contained rating changes that could have been due to a multitude of reasons, instead of creating a homogenous sample primarily characterized by a single, specific rating rationale.Our study serves to close the research gaps which, consequently, still exist -in particular by comparing different securities of a single issuer and by focusing on European markets rather than the more thoroughly researched U.S. market.
DATA
We examined a sample of European firms that experienced a change in credit rating announced by one of the three major CRCs between the years 2000 and 2010.These firms are either headquartered in one of the European Union member states or in Switzerland.Prices for both types of security as well as index data were collected as daily closing prices from Thomson Datastream.Contrary to previous studies (e.g., Hand, Holthausen and Leftwich, 1992; Imbierowicz and Wahrenburg, 2013), we used different index categories to enhance the quality of the regression.We extracted national indices and a European index for both types of security.The index category with the higher coefficient of determination was included in the sample.The descriptive statistics are available upon request.
In addition to price data, we sourced rating histories for each firm from Thomson Reuters Eikon.The credit ratings were obtained as issuer ratings, since this rating category is more appropriate in analyzing the different types of securities researched. 2he extracted rating changes were verified by the rating reports, which were available on the Standard & Poor's RatingDirect, Moody's Rating Interactive, and FitchRatings websites.In addition, we used "CreditViews" from Thomson Reuters Eikon to identify the reasons for the rating amendment if a rating report was not available.To ensure that the time series were not influenced by events other than announced rating revisions (e.g., management turnovers, company takeovers, interim and annual reports, and reports of dividend payments), we eliminated such contaminated time series from the entire sample.This approach resulted in a final sample of 115 rating events for stocks and 231 rating events for corporate bonds.
The different sizes of the stock and the corporate bond samples were due to the fact that the sample firms issued one type of stock, but multiple bond issuances.To minimize the resulting selection bias, we used the firm level approach (FLA) according to Bessembinder et al. (2009), which treats the firm as a bond portfolio.A firm's abnormal bond return was calculated as the value-weighted average of the abnormal returns for each bond issue.This approach allowed us to include all bond time series available for one sample firm and, thus, to avoid the problem of cross-correlations found in alternative approaches, such as the bond level approach and the representative approach (e.g., Hand, Holthausen and Leftwich, 1992).Table 1 also shows that our sample was well diversified across European member states and issuer's industries.As indicated in Panel C, our sample was well distributed with respect to the annual distribution of rating changes as well as with respect to the benchmark categories labelled 'economic downturn' and 'economic stability'. 3The majority of downgrades in our sample were announced during recessions, whereas most upgrade announcements occurred during periods of economic stability.This composition reflected the general distribution of upgrades and downgrades in Europe from 2000 to 2010. 4 Finally, most of the rating changes investigated were announced by Standard & Poor's (S&P), as displayed in Panel D. The majority of rating changes were consenting ones, while only 7.8% of the rating changes were categorized as split ratings, meaning that announced rating changes of at least two CRCs resulted in different credit ratings for the same rating object.
Note: The number of downgrades and upgrades applied for one of the two security categories is shown by the issuer's geographical location in
Table 2 further elaborates the structure of our sample using a migration matrix.Approximately 81% of the rating changes were categorized as being within the investment grade category, while only 15% of all rating changes were associated with the speculative grade category.Only 5 out of 115 (4.3%) of all issuer ratings crossed the line between investment grade and speculative grade.In contrast, approximately 90% of the rating changes resulting in the rating category "speculative grade" were announced during economic recessions.
To be able to identify changes in the issuer's financial leverage, we employed the rating reports and Thomson Reuters Eikon to determine the reasons behind the rating.To distinguish changes in financial leverage and financial prospects, we applied the keywords identified by Imbierowicz and Wahrenburg (2013), such as "capital structure" and "operating performance"."Capital structure" referred to any change in a firm's financial leverage, such as leveraged buyouts, debt-financed expansions, share repurchases, or other financing events."Operating performance" accounted for rating changes triggered by factors influencing a firm's ability to generate future cash flows.87.8% of all rating changes were based on changes in financial leverage, whereas 12.2% of the rating changes were the result of a change in the issuer's financial prospects.
EMPIRICAL METHOD
We employed the event study method according to Fama et al. (1969) and extended this standard approach by a number of conceptual adjustments.In an initial step, the daily returns were calculated for each type of security by including dividends and coupon payments.As recommended by Brown and Warner (1980), Di Cesare (2006), and Hudson and Gregoriou (2015), we also calculated daily returns based on a logarithmic approach, in addition to linear returns.In particular, Hudson and Gregoriou (2015, p. 16) concluded that "it may be appropriate in research studies of returns to give greater consideration to whether mean returns are calculated simple or logarithmic returns".The method for the calculation of the daily returns is shown in the following equations: ), where KA j,t and KB j,t denoted the daily price of stocks and corporate bonds with the corresponding dividends D j,t and coupons C j at date t.V j denoted the number of days between the date t and the date of the last coupon payment.We used standardized abnormal returns SCAR [T 1 ,T 2 ] stock/bond according to Patell (1976) and Mikkelson and Partch (1988) instead of cumulative abnormal returns (CARs) to reduce possible distortions: and AR j,t stock/bond = R j,t stock/bond − (α j + β j R M,t stock/bond ).
N denoted the number of observations and AR j,t stock/bond was the abnormal return of the time As already described, we applied the index that achieved the larger R 2 in the regression window.The expected returns were calculated primarily by applying the market model, since it generated the most valid results according to prior studies (e.g., Brown and Warner, 1980;Holthausen and Leftwich, 1986;Hudson and Gregoriou, 2015).However, we additionally applied alternative models for calculating expected returns -such as the mean adjusted model and the market adjusted model, as initially introduced by Brown and Warner (1980), and further developed by the same authors (1985).To make sure that the SCARs were not influenced by a certain value of TE, we used the estimation windows [-61, -11], [-111, -11], and [-161, -11] in contrast to the majority of previous studies, which usually applied only one estimation window.
Furthermore, we improved the statistical power of the regression model by performing the robust regression according to Rousseeuw (1984) and Mount et al. (2014) instead of the standard OLS regression.The robust regression was mainly based on identifying and eliminating outliers, which were defined as observations exhibiting a relatively high distance from the center of the point cloud.Outliers could bias the calculation of the parameters α j and β j depending on their position relative to the point cloud.Sorokina, Booth and Thornton (2013) showed that previous event studies failed to address this problem appropriately.For corporate bonds, in particular, the identification and elimination of outliers appeared essential to receive valid results, as they were typically traded less frequently than stocks.The advantage of applying this robust regression method was that the regression model achieved a higher breakdown value of up to 50%, implying that the regression results were valid even if 50% of the sample observations were outliers.In addition, it was not necessary to remove the entire time series in favor of the sample size and representativeness of the whole sample.
Moreover, the robustness of our results was increased with respect to the significance analysis, as we performed four tests.The SCARs were accepted as being statistically significant only if all of the four tests applied exhibited significance on the 5% level.The majority of the tests performed were non-parametric, because the test according to Shapiro and Wilk (1965) provided evidence that the SCARs were not normally distributed. 5The t-test and the Wilcoxon signed-rank test were performed mainly to provide comparability with previous studies.In addition, the significance analysis contained the generalized rank test (GRANK test) according to Kolari and Pynnonen (2011) because of its high robustness against heteroscedasticity and autocorrelation.Finally, we applied the bootstrap 5 The results of the Shapiro-Wilk test are available from the authors.method according to Efron (1979), which allowed for inference and hypothesis testing even if the distribution of the test statistic did not follow a standard distribution.We also used the t-statistic for testing and choose 1,000 as the population size for the bootstrap simulations.Based on the resulting empirical distribution, the p-value of the original value of the t-statistic was calculated.Including this fourth test, we also provided comparability to a small number of previous event studies using bootstrap techniques such as Di Cesare (2006).
Finally, we benchmarked the calculated returns in the treatment group against those of a control group in order to increase the validity of the entire study.Rather than using control firms, the control group consisted of randomly selected control events that represented dates other than the announcement dates investigated.By applying this approach, we tried to avoid the problem of using control firms that were different in terms of structural characteristics (e.g. market position, ownership structure, capital structure, risk profile, performance) and thus, lacking comparability (e.g.Antanasov and Black, 2016).However, if the control date represented another price-relevant event (e.g.M&As, performance reports, CEO turnovers), this date was not included in the control group.Moreover, the control dates and the corresponding control windows had to occur within a period on the verge of the estimation window used in the treatment group in order to reduce the probability of changes in the firm's environment.
EMPIRICAL RESULTS
This section contains the major event study results.First, we examined abnormal effects for corporate owners based on positive and negative rating changes.Next, we applied the bond-adjusted approach for lenders of the firm.In addition, we looked for abnormal return differences between stocks and corporate bonds to further investigate the existence of wealth transfer effects between owners and lenders.Finally, we employed a multivariate regression, including a variable representing the rating event and several control variables.
Owners' perspective
In the case of downgrades, the results shown in Table 3 indicated a significant negative SCAR in the announcement window [-1, 1].This result was confirmed by the benchmark models for calculating abnormal returns and by the control group, which did not indicate any announcement effect.Thus, downgrades had information content, providing bad news to owners of European firms.This finding supported the findings of Covitz and Harrison (2003), who argue that CRCs tend to act in favor of investors to maintain or increase their market reputation by publishing a rating change as soon as possible.
Negative rating changes (N = 66)
Positive rating changes (N = 49) Note: The table displays the mean of daily cumulative abnormal returns (CARs) and the mean standardized cumulative abnormal returns (SCARs) as four-digit decimal numbers.The number of rating events included in each security category is represented by N. The SCARs are calculated by dividing the CARs by the standard deviations of the CARs.The table also shows the p-values of the parametric test and the three non-parametric tests.The bootstrap consists of 1,000 randomly built populations.The results are assumed to be statistically significant if all tests show p-values at or below the 5% level.Panel A contains abnormal returns based on the linear return approach used in the market model.In contrast, the abnormal returns in Panel B are alternatively calculated using logarithmic returns.The expected returns calculated in Panels A and B are based on the regression window [-111, -11].The abnormal returns in Panel C are defined as excess returns calculated as the difference between the daily linear returns of a stock and the average stock returns of a control group (see Zaima and McCarthy 1988;Bi and Levy 1993).The abnormal returns in Panel D are based on expected returns, which are calculated as the average returns during the estimation period (see Singh and Power 1992).Panel E and Panel F contain abnormal returns based on the market model approach by varying the estimation window for 50 trading days.Panel G includes dates other than the announcement dates of rating changes.The control group consists of the same companies and has the same characteristics as the test sample in terms of N and the distribution of time.
We also detected significant negative SCARs in the pre-announcement windows, indicating anticipation effects.This finding contradicted Micu, Remolona and Wooldridge (2006), who came to the result that issuers intended to restrain negative information concerning credit risk as long as possible.Since institutional as well as private investors were involved in trading on European stock markets, anticipation effects and information effects may have coexisted because these groups of investors differed with respect to their access to risk-related information and their capability to process them.Institutional investors such as hedge funds were typically superior in collecting and assessing information, so that they could anticipate an increase of a firm's credit risk during the corresponding rating process.On this basis, we confirmed the ICH in the case of downgrades, since it assumes the absence of any anticipation effects.
In addition, the univariate results implied a rejection of the WRH based on significant negative SCARs within several event windows.The positive effect of increasing financial leverage (i.e., the profitability of investments funded by additional debt capital) could be overcompensated by higher leverage risk.The latter is characterized by a decreasing return on investments due to the increasing costs of debt capital, which are driven upward by downgrades.
In the case of upgrades, we did not find a significant price effect within the announcement window [-1, 1].This result indicated that positive rating changes did not provide any new information to stockholders of European firms.Compared to downgrades, this asymmetric reaction could be due to variations in stockholders' risk perception.Stockholders tended to be more sensitive with regard to increases in a firm's credit risk, rather than being focused on positive risk developments.Assuming that credit ratings differed from a firm's real credit risk, Abad-Romero und Robles-Fernández (2006) argued that CRCs had an asymmetric loss function, since their reputational damage was much larger in cases of inappropriate downgrades than with upgrades.Thus, these information intermediaries were incentivized to allot more technical and human resources to possible cases of downgrades.
Analogous to downgrades, we also found significant SCARs within the event windows [-10, 0] and [-5, 0] prior to announced upgrades.Combined with the absence of information effects, these results supported the argument that CRCs tended to lag the market at the time of announcement of upgrades.According to Holthausen and Leftwich (1986), issuers may have had the incentive to announce positive information concerning their credit risk as soon as possible in order to profit from the improved financing opportunities immediately.Due to this signaling, risk-related information could have been sent to market participants even before CRCs announce their rating results.Along with their asymmetric loss function, this causality might also have been a possible explanation for the observed anticipation effects in European stock markets.
Although the majority of upgrades in our sample were due to decreases in financial leverage, we did not detect any wealth transfer effects between corporate owners and lenders within either the announcement window or prior to announced upgrades.This result also supported a rejection of the WRH in the case of positive rating changes.A decrease in financial leverage means that a smaller amount of debt capital is available for investing in high risk/high return projects.In this situation, owners of stock corporations face increasing opportunity costs due to the risk of missed returns on investment.In contrast, a decrease in a firm's financial leverage and a corresponding risk reduction can lead to smaller costs of debt.Our results implied that the effect of reduced costs of debt exceeded the increase in opportunity costs.We therefore rejected hypothesis H1a, and confirmed H1b.
Lenders' perspective
In line with stocks, we also found significant negative bond SCARs following downgrades within the announcement window [-1, 1], implying that negative rating changes also contained information regarding European bond markets.
Hence, bondholders perceived downgrades to be bad news if these rating changes were based on increasing levels of financial leverage, as they were not compensated by a higher risk premium.
Along with the information effects, we also identified significant negative SCARs ten trading days prior to the announcement date.This result was in line with Hettenhouse und Sartoris (1976), who also found anticipation effects on corporate bond markets.In contrast to the situation with stocks, we detected significant negative SCARs in the post-announcement window [0, 1].This result further suggested that corporate bond markets are less liquid than stock markets.The overall results showed that downgrades were incorporated into bond prices over a certain period of time, rather than having been a date-specific event.This was mainly driven by different groups of bondholders with different levels of access to risk-related information.The result contradicted Di Cesare (2006), who did not find any significant bond price effects at the time of announcement of negative rating changes.
In the case of upgrades, Table 4 also shows positive SCARs within the announcement window [-1, 1], indicating an information effect on European corporate bond markets.Contrary to stocks, both kinds of rating announcements provided new information to bondholders.Thus, we did not find any evidence for CRCs allocating more resources in the assessment of downgrades compared to upgrades.Our results also did not indicate anticipation effects prior to the official upgrade announcement, implying that CRCs led European bond markets when the firm's credit risk improved.This result contradicted Hettenhouse und Sartoris (1976), who only identified strong price effects of bonds prior to upgrade announcements.In addition to the significant post-announcement effects within the window [0, 1], the absence of an anticipation effect further confirmed the illiquidity of corporate bond markets compared to stock markets in Europe.This finding was also in line with the results shown in the Appendix.In summary, our results indicated that bondholders of European firms perceived downgrades and upgrades as equally important for their decision making.Based on the negative SCARs in the case of downgrades and positive SCARs in the case of upgrades, we rejected hypotheses H2a and H2b, thus confirming the absence of wealth transfer effects between owners and lenders of European companies.[-111, -11].The abnormal returns in Panel C are defined as excess returns calculated as the difference between daily linear returns of a corporate bond and average bond returns of a control group (see Zaima and McCarthy 1988;Bi and Levy 1993).The abnormal returns in Panel D are based on expected returns, which are calculated as average returns during the estimation period (see Singh and Power 1992).Panel E and Panel F contain abnormal returns based on the market model approach by varying the estimation window for 50 trading days.Panel G includes dates other than the announcement dates of rating changes.The control group consists of the same companies and has the same characteristics as the test sample in terms of N and the distribution of time.
Cross-sectional analysis
The results of the previous univariate analysis provided an initial indication concerning the information content of rating changes, as well as the absence of wealth transfer effects between corporate owners and lenders.In addition, however, we conducted a cross-sectional analysis to examine the influence of several issuer-specific and ratingspecific variables.In addition to the variable representing the rating event, the multivariate regression also contained several control variables that affect the information content of rating changes in bond markets and stock markets.To investigate the effect of these variables, we applied four models for each type of security.Model 1 included all of the explanatory variables employed, including the dummy variable EVENT, which assumed a value of 1 in the case of a rating announcement.Model 2 included the total number of control variables except for the variable EVENT, and served as a benchmark for Model 3 and Model 4. Model 3 included issuer-specific factors to control for certain characteristics of stocks and bonds, as well as for the respective issuers.In Model 4, rating-specific factors were used to control for specific rating characteristics.We estimated the regression separately for upgrades and downgrades as follows: where, SCAR [−1, 1] denoted the standardized cumulative abnormal return of issuer j within the announcement window [-1, 1].The application of the Durbin-Wu-Hausman Test indicated only a weak influence of endogeneity, which we therefore disregarded thereafter. 6 SIZE referred to the issuer's firm size measured by total assets.According to Kisgen (2006), firm size was one of the most important factors in determining credit risk.Usually, large firms showed higher degrees of diversification, income, and lossabsorbing capacity.Consequently, their abnormal returns responded less negatively to downgrade (and less positively to upgrade) announcements.LEV denoted the financial leverage of issuer j, which was calculated as total debt divided by total assets.Based on the WRH, stocks were expected to react more positively to downgrade announcements (and more negatively to upgrades).In contrast, the higher the financial leverage, the more negative the price effects of corporate bonds at the date of announced negative rating revisions should have been.Similarly, lower levels of financial leverage had meant more positive bond price effects upon positive rating revisions.
PROFIT was defined as earnings before interest and taxes divided by sales revenue.According to the major CRCs, an issuer's profitability played an important role in assessing the firm's credit risk, as retained profits contributed to its loss-absorbing capacity and, thus, affected a credit rating positively.For owners of stock corporations, a high profitability implied higher expected dividend payments, resulting in higher expected returns.In contrast, a high profit reduced the negative effects of downgrades for bondholders due to the higher lossabsorbing capacity of profitable issuers.We used the variable MAT to control for the effect of time to maturity on bond SCARs.In general, a longer time to maturity implied a higher degree of uncertainty with respect to a firm's credit risk.
We also controlled for industry-specific effects using the dummy variable FIN, which assumed a value of 1 if the issuer provided financial services.Allen, Fulghieri and Mehran (2011) suggested that the incentive of financial institutions to extend their risk monitoring increased with higher levels of 6 The results of the Durbin-Wu-Hausman Test are available from the authors.capital.Consequently, both owners and lenders benefitted from these self-monitoring procedures, which reduced the credit risk of financial institutions.In addition, European financial institutions were forced to disclose a large amount of information due to a relatively high degree of regulation (e.g., the European CRR/CRD IV regulations based on Basel II/III).Hence, stockholders as well as bondholders of European financial institutions had access to more riskrelevant information than those of non-financial bodies, enabling them to better anticipate changes in the issuer's credit risk.Thus, we expected rating changes of financial institutions to convey less new information for their owners and lenders than changes of non-financial firms.
We controlled for the influence of economic downturns using the dummy variable RECESS, which assumed a value of 1 if the rating change was announced during the dotcom crisis or the subprime crisis.Economic downturns could enhance the information content of announced rating changes in two ways.First, investor uncertainty could have increased due to a growing amount of risk-relevant information (e.g., Hsueh and Liu, 1992).Facing additional transaction costs for processing this information, investors were incentivized to rely on CRCs, so that the information content of credit ratings grew.Additionally, investors took a downgrade more seriously during economic downturns because of higher risk sensitivity (e.g., Hoffmann, Post and Pennings, 2013).The second reason was the asymmetric loss function of CRCs, which found it more difficult to make risk assessments in the more volatile market environment of an economic downturn.As the danger of incorrect risk assessment increased in those periods, investors became more risk averse, so that CRCs faced a higher risk of reputation loss (e.g., deHaan, 2013, on the subprime crisis).Reputational risk due to incorrect (i.e., overly optimistic) or delayed ratings incentivized CRCs to allocate more personnel and technical resources to the provision of rating changes during recessions, positively affecting the changes' information content.
In addition, we investigated the influence of the watchlist by including the dummy variable WATCH, which took a value of 1 if the rating change followed a rating review.The information content of watchlists was examined in previous studies with regard to information that would signal a change of the issuer's credit risk (e.g., Holthausen and Leftwich, 1986;Bannier and Hirsch, 2010).Thus, watch-preceded rating changes are expected to possess lower information contents, since the rating change was at least partly expected through the prior announcement of watchlists.
If the leading CRCs assessed the creditworthiness of the same firm, they might have disagreed about a firm's credit risk.In addition, they typically announced their rating on different dates.Alsakka, ap Gwilym and Vu (2014) showed that S&P's, as opposed to Moody's and Fitch, commonly acted as a first mover.Thus, rating changes announced by S&P's were expected to induce stronger reactions.We used the following dummy variables to control for agency-specific price effects: The variable S&P (FITCH) took a value of 1 if the rating change was announced by S&P's (Fitch Ratings).We controlled for split ratings by using the dummy variable SPLIT, which assumed a value of 1 if at least two CRCs arrived at different results concerning the same rated entity.Such a divergence could have increased investors' uncertainty regarding an issuer's creditworthiness, and decreased the information content of a rating change announcement by a particular rating agency.Prior studies contradicted this reasoning, as they detected stronger price effects for split ratings compared to concordant rating changes (e.g., Gropp and Richards, 2001;Livingston and Zhou, 2010).
The distinction between investment grade and speculative grade is of critical importance for investment decisions and capital requirements.Rating revisions crossing this line induced price effects regardless of their information content because of rating-based regulation, so that a stronger price effect of rating changes between both rating categories was regarded more probable than for those occurring within a rating category (e.g., Steiner and Heinke, 2001).We therefore used the dummy variable INBET, which took a value of 1 for rating changes from investment grade to speculative grade and vice versa.Because most of the rating changes in our sample occurred within the investment grade category, we additionally applied the dummy variable ININVEST, which took a value of 1 if the rating change occurred only within the investment grade category.Finally, we further specified the price effects as a function of the intensity of the rating change by using the dummy variable INCLASS, which assumed a value of 1 if the rating change occurred within a particular rating class.For example, the rating class AA of S&P's contained the three ratings AA+, AA, and AA-.Hand, Holthausen and Leftwich (1992) concluded that significant price effects of rating changes did not depend on the particular rating class.Table 5 depicts our stock-related results, while Table 6 does so for our bond sample.
In the case of downgrades, we detected a significant negative coefficient of the variable EVENT for stocks.Regarding the significant negative SCARs shown in Table 3, this result also indicated that downgrades possessed information content for owners of European stock corporations.In contrast, the variable EVENT was not significant at the time of announcement of upgrades, which was also in line with the non-significant price effect of the univariate analysis.Thus, our findings provide evidence that European stockholders perceived changes in a firm's credit risk asymmetrically.Unlike stocks, corporate bonds did not show significant coefficients representing downgrades, whereas we found upgrades to be significant.Compared to stockholders, bondholders also perceived changes in credit risk asymmetrically, though in a different way: Downgrades seemed to be most important for stockholders, whereas bondholders were mainly focused on upgrades, because they received a relatively high fixed risk premium due to the improved credit risk.We detected a negative coefficient of LEV for stocks at the time of announcement of downgrades, which was highly significant across all models applied.This result implied that negative stock-price effects became even more pronounced with higher levels of financial leverage of the downgraded firm.Because we did not find a significant impact of this variable on bond prices, our study could not provide any evidence for the existence of wealth transfer effects, which was in line with Zaima and McCarthy (1988), Goh and Ederington (1993), and Gropp and Richards (2001).
For corporate bonds, we identified further variables as significant.However, only one of the three models applied exhibited significant results, indicating weak validity.In Model 1, we detected a significant positive coefficient for the variable ININVEST at the time of announcement of downgrades, and a significant negative coefficient for announced upgrades.In line with the argumentation of Hand, Holthausen and Leftwich (1992), this result indicated that announced downgrades and upgrades induced a lower abnormal price effect inside the investment grade category, since bondholders typically became less sensitive for marginal changes in a firm's credit risk with an increasing rating category.In the case of downgrades, Model 1 also showed a positive significant impact of the variable SIZE, implying that the negative SCAR became smaller for bigger companies.According to Micu, Remolona and Wooldridge (2006), bigger firms typically provided a higher amount of information, enabling investors to anticipate (at least in part) the underlying increase in a firm's credit risk.In addition, they had a higher loss-absorbing capacity and, thus, a higher financial stability despite their higher credit risk.Finally, we found a positive impact of WATCH at the time of announcement of downgrades, meaning that the negative abnormal price effect decreased if the rating change followed a previous watchlisting.This result confirmed the findings of Holthausen and Leftwich (1986).
CONCLUSION
We investigated price effects of stocks and corporate bonds at the time of announced changes in European firms' credit ratings.For this reason, we modified the standard event study approach by applying several robust methods, and completed the univariate regression by a cross-sectional analysis.At the date of announced downgrades, our results showed significant negative abnormal returns for both owners and lenders of the firm.In contrast, we found significant positive abnormal bond returns, while we did not detect any significant price reaction for corporate owners in the case of announced upgrades.In combination with the results of the cross-sectional analysis, our findings implied that owners of European stock corporations tended to be focused on negative rating changes, while bondholders of these firms perceived both rating change directions to be of equal importance.We also did not find any indication for the existence of wealth transfer effects.In addition, our study provided some evidence of a varying magnitude of price reactions among both types of security due to differences in liquidity of European stock and bond markets.
However, our study leaves some unresolved questions.First, since the rating changes in our sample were primarily driven by changes in a firm's financial leverage, it would be interesting to conduct this study for rating changes that are caused by other factors, such as changes in expected firm profits, or merger announcements.Second, future research should extend the period of investigation to examine the effects of the Euro crisis starting 2011.Finally, our approach could be extended to other types of securities and the owner, lender or mezzanine investor positions they represent, such as preferred stocks, commercial papers, or convertibles, to identify differences and similarities of market price movements.Since the importance and liquidity of stock markets and bond markets increases as continuously as the number of data sources and the quality of the data they provide, these questions will become easier to answer, ensuring that the effect of rating announcements on market prices of securities will remain a stimulating area of research.Note: The table displays the difference of mean standardized cumulative abnormal returns (DSCARs) between stocks and corporate bonds, or vice versa.The SCARs are calculated by dividing the CARs by the standard deviations of the CARs.DSCARs are shown as four-digit decimal numbers subdivided by the direction of rating changes over a period of 21 trading days.The table also shows the p-values of the parametric test and the three non-parametric tests.The bootstrap consists of 1,000 randomly built populations.The results are assumed to be statistically significant if all tests show p-values at or below the 5% level.Panel A contains DSCARs calculated by subtracting the SCARs of corporate bonds from the SCARs of stocks, whereas Panel B contains DSCARs calculated by subtracting the SCARs of stocks from the SCARs of corporate bonds.The SCARs are calculated by using the market model approach with an estimation window of [-111, -11].In addition to these results, we calculate abnormal returns for a control group consisting of dates other than the announcement date of the rating changes investigated.The results do not indicate any significant abnormal returns in the announcement window [-1, 1], and are available from the authors.
DSCARs
We do not detect a significant difference of stocks compared to corporate bonds in the case of downgrades, implying that both types of security react quite similarly to the rating event.In contrast, we detect a significant DSCAR of 0.2355 for upgrades within the pre-announcement window [-10, 0].Therefore, stocks react more strongly than corporate bonds prior to the announcement of upgrades.This asymmetric intensity may be due to the higher liquidity of stocks compared to corporate bonds, since bondholders prefer a long-term buy-and-hold strategy.Hence, stockholders are more capable in processing risk-related information and translating it into actions that amend prices than bondholders, which is in line with Yan and Zhang (2009).We do not find any significant DSCARs between corporate bonds and stocks in the different event windows.This non-significance of DSCARs implies that corporate bonds are more illiquid than stocks.
Panel A. Panel B shows the industry sectors of the issuer, which are compiled under the categories Financials and Non-financials.Data concerning the specific industry sectors is available from the authors.Panel C displays the annual number of downgrades and upgrades between the years 2000 and 2010.The recession period includes the sub-periods 2001-2002 and 2007-2010, whereas the years 2000 and 2003-2006 are assumed to represent periods of economic recovery.Both economic periods are based on the classification of the National Bureau of Economic Research.The number of positive and negative rating changes announced by a specific rating agency is displayed in Panel D.
series j at date t depending on the type of security investigated.σ ̂(AR j stock/bond ) was the standard deviation of abnormal returns AR j,t stock/bond for stocks and corporate bonds, while ED j denoted the number of trading days within the estimation window [ -11, -(11+TE)].TE described the number of days in the estimation window, which ended one trading day before the beginning of the maximum event window [-10, 10].The calculation of expected returns was based on R M,t stock/bond , defined as the market return calculated using national and European indices for stock and bond markets.The calculated SCARs were further re-standardized by their cross-sectional variation according to Kolari and Pynnonen (2011) to reduce event-induced volatility.Abnormal stock and bond returns were analyzed within several symmetrical and asymmetrical event windows.The maximum event window [-10, 10] was split into the preannouncement windows [-10, 0], [-5, 0], [-1, 0] to investigate anticipation effects, and the postannouncement windows [0, 1], [0, 5], [0, 10] to examine liquidity-based price distortions.Since the majority of previous studies analyzed the information content of announced rating changes within the window [-1, 1], we also used this window to ensure comparability with previous studies (e.g., Gropp and Richards, 2001; Han et al., 2009; Imbierowicz and Wahrenburg, 2013).
Table 1 .
Sample description for stocks and corporate bonds
Table 3 (
continued).Abnormal returns of stocks
Table 4 .
Abnormal returns of corporate bonds
Table 4 (
continued).Abnormal returns of corporate bonds
Table 5 .
Multivariate analysis of standardized cumulative abnormal stock returns in response to negative and positive rating announcements
Table 6 .
Multivariate analysis of standardized cumulative abnormal corporate bond returns in response to negative and positive rating announcements | 2018-12-18T14:30:43.400Z | 2017-01-01T00:00:00.000 | {
"year": 2017,
"sha1": "6529631f14eb4942db748a270fa9702f13d02ed0",
"oa_license": "CCBYNC",
"oa_url": "https://virtusinterpress.org/spip.php?action=telecharger&arg=8209&hash=138f132b9467944eb7fea9921f55ad5070b321fa",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "6529631f14eb4942db748a270fa9702f13d02ed0",
"s2fieldsofstudy": [
"Economics",
"Business"
],
"extfieldsofstudy": [
"Business"
]
} |
3245953 | pes2o/s2orc | v3-fos-license | Is Resistance to Dolutegravir Possible When This Drug Is Used in First-Line Therapy?
Dolutegravir (DTG) is an HIV integrase inhibitor that was recently approved for therapy by the Food and Drug Administration in the United States. When used as part of first-line therapy, DTG is the only HIV drug that has not selected for resistance mutations in the clinic. We believe that this is due to the long binding time of DTG to the integrase enzyme as well as greatly diminished replication capacity on the part of viruses that might become resistant to DTG. We further speculate that DTG might be able to be used in strategies aimed at HIV eradication.
Introduction
Current HIV therapy usually involves the use of three antiretroviral (ARV) drugs in combination, often as part of a simplified regimen. Indeed the introduction of triple ARV therapy in 1996 has since led to rates of therapeutic success that have increased to over 90%, based on suppression of plasma viremia to below 50 copies of viral RNA/mL. This progress is attributable to several facts: 1. Dosing regimens have become simplified, often because of the use of co-formulations, some of which only need to be taken once-daily; this has greatly enhanced rates of adherence to ARV regimens; 2. Pill regimens have become far less toxic and more tolerable over time; this has also promoted adherence as well as diminished the likelihood of development of HIV drug resistance against individual drugs [1,2]; 3. The drugs used in therapy are now more potent than the compounds that were in use only 10 years ago.
To be sure, the use of ARVs in first line regimens has always been associated with some degree of drug resistance and treatment failure. Over the past several decades, scientists have catalogued a wide array of drug resistance mutations that are located within each of the reverse transcriptase, integrase and protease enzymes of HIV-1 that are the targets of HIV therapy, and have documented how each of these mutations may lead to diminished likelihood of a favorable clinical response to each ARV, both in therapy and in cell culture [1]. In addition, phase III clinical trials that led to the approval of each of the ARVs now used for therapy also provided valuable information on the types of mutations that were most likely to be identified in the virus in the aftermath of viral rebound. This includes members of the integrase strand transfer inhibitor (INSTI) family of drugs such as raltegravir (RAL) and elvitegravir (EVG) [3][4][5][6][7].
Recently, a drug termed dolutegravir (DTG) has been studied in phase III investigations and has yielded the most robust results ever obtained in HIV clinical trials [8]. First, approximately 88% of patients who received DTG in these studies attained suppression of viral load to <50 copies RNA/mL and, in addition, none of the individuals in these studies possessed a single drug resistance-related mutation that was associated with either DTG or the nucleoside drugs that were used together with DTG as a part of therapy. It should be noted, however, that approximately 10%-15% of patients in the trials did not respond to therapy and possessed detectable levels of viral load in plasma, perhaps for reasons of non-adherence [9,10].
Of course, resistance against boosted protease inhibitors (PIs) after virological failure (VF) was also very rare but this has been primarily investigated for mutations in the viral protease (PR) gene (1). It is possible, of course, that mutations at gag cleavage sites may have been present in certain of these cases. In addition, the M184V mutation, associated with resistance to 3TC, was present in some cases of failure involving boosted PIs.
Viral Fitness Prevents HIV-1 from Evading Dolutegravir Pressure
The question is how to explain these results. Among the hypotheses that have been advanced is that viruses that become resistant to DTG may be relatively replication incapacitated and cannot efficiently grow; hence, such variants might not be detectable in patient plasma [11] (Figure 1). It is known, for example, that DTG can select a mutation at position R263K in the integrase gene in tissue culture and that this mutation diminishes both viral replication capacity as well as the enzymatic activity of the integrase enzyme [12]. Although this is not unusual, it should be noted that similar results were also obtained with the two other approved integrase inhibitors EVG and RAL [11]. Indeed, in the case of the latter two compounds, the presence of an initial substitution was often quickly followed by the appearance of a second mutation that had the dual effect of increasing the level of drug resistance, often to a level that might preclude any further clinical benefit from the drug, while simultaneously restoring viral replication capacity to close to that of wild-type viruses (Table 1). However, the secondary mutations that were selected by DTG only modestly increased overall levels of resistance against the drug but simultaneously impacted even more adversely on the ability of the virus to grow, often resulting in impairment of >80%, and this was accompanied by a further diminution in the activity of HIV integrase in biochemical assays [11,12]. These findings may be due in large part to the fact that the ability of DTG to bind to the integrase enzyme is extremely long and exceeds by at least several fold the ability of either RAL or EVG to achieve similar binding [13].
It should be stated that secondary and/or tertiary drug resistance mutations often play a compensatory role in regard to replication for many microorganisms besides HIV, including bacteria that are resistant to numerous antibiotics as well as viruses that display resistance against specific antiviral drugs. Compensatory mutations in HIV that simultaneously augment viral replication while increasing overall levels of drug resistance have been documented for members of each of protease inhibitors (PIs) as well as the nucleoside reverse transcriptase inhibitor (NRTI) and non-nucleoside RT inhibitor (NNRTI) families of drugs [1]. However, no such mutation has been identified for DTG, representing a unique observation that is bolstered by the results of tissue culture selection experiments that have yielded only two distinct mutations that diminish viral replicative capacity but never a third compensatory mutation [11] over more than four years of selection pressure in culture.
Can Dolutegravir Be Used in Strategies Aimed at HIV Eradication?
Accordingly, we should wonder what will happen if viruses that are resistant to DTG cannot be compensated by additional mutations within integrase and if such viruses are truly at a severe replication disadvantage in comparison with wild-type HIV. Would such a result take on even greater significance if it turned out that DTG can retain clinically significant antiviral activity, despite the presence of one or two drug resistance mutations? Such a scenario is indeed suggested by the fact that the level of resistance conferred against DTG by the combination of two such mutations within integrase is <10-fold and that biochemical results have shown that the ability of DTG to bind to the integrase enzyme and remain associated with it is very long, i.e., >60 hours. Moreover, the R263K mutation only diminished this level of binding by about 50% [13,14] which is still far longer than the binding affinity half-life of RAL and EVG for wild-type integrase. This raises the possibility that the development of low-level resistance against DTG in first-line therapy might not have adverse virologic or clinical consequences. However, it should also be noted that DTG was only approved for treatment in the USA approximately one year ago and that all of the clinical data that pertain to this compound have been obtained as part of clinical trials. Support for this concept will only accrue after DTG is widely prescribed outside of clinical trial settings, including under conditions in which a far greater degree of non-adherence to treatment can be expected. At the present time, the data suggest that patients who may become resistant to DTG will still respond to RAL, but further clinical experience will be needed to substantiate this point.
How could this hypothesis be tested? First, a study could be contemplated in which DTG is employed as monotherapy in treatment-naive subjects, even though we would prefer that proof-of-concept results first be obtained in relevant animal models. If the results obtained are similar to those observed in the phase III clinical trials, a partial validation of the hypothesis to explain the absence of resistance in the phase III trials will have been obtained. It goes without saying that such a monotherapy study would need to be accompanied by intense virologic monitoring for resistance mutations, that should include the use of ultrasensitive sequencing for identification of DTG resistance mutations in the DNA of patient peripheral blood mononuclear cells as well as in the RNA of patient plasma samples.
Notwithstanding the above, it should be noted that some clinical validation of the significance of the R263K mutation has already been obtained in the SAILING-clinical trial that compared the use of RAL against DTG in treatment-experienced patients who had undergone previous failures of their therapeutic regimens but who had never before been treated with an integrase inhibitor [15]. The patients in this study all possessed drug resistance mutations that might have compromised the antiviral activity of multiple ARVs in the regimens that they received, but not of the integrase inhibitors and the results showed that DTG was superior to RAL at suppression of viral load in these individuals. In fact, the only drug resistance mutation to have appeared in only two patients in the DTG arm of the study was R263K, whereas failure on the RAL arm of the study led to a broad array of RAL-associated mutations in integrase. Although, the patients who received DTG and who possessed the R263K mutation have apparently continued to be clinically well, new information is needed in regard to mutations that may have developed over time in such individuals, in order to determine whether viral evolution took place to significant extent. Although the data to date suggest that subsequent viral evolution did not take place [13] important questions of durability of responsiveness remain unanswered.
A further thought relates to the reasons for treatment failure in approximately 10%-15% of patients who have received DTG as part of first-line therapy. The most likely reason for this is patient non-adherence. However, it is inconceivable that all non-adherent individuals who failed DTG failed to take their drugs 100% of the time. Why then did they not develop resistance to DTG as happened in each of the comparator arms in the Single, Flamingo and Spring studies in which patients who failed therapy did develop resistance against each of the nucleoside compounds that were employed in therapy as well as against RAL? In fact, the development of RAL-associated mutations in the Spring study is consistent with the results of other clinical trials in which RAL was used in first-line therapy and in which resistance mutations were identified among RAL failures. Why did the non-adherent patients who received DTG in first-line therapy not generate any resistance mutations to any of the drugs that they received? The only conceivable answer is that they were unable to do so because DTG has the highest barrier to resistance of any anti-HIV compound developed to date. We, of course, believe that the basis for this is the hypothesis outlined in this manuscript. An assessment of clinical specimens from circulating lymphocytes and from lymphocytes present in gut tissue and other body compartments in which HIV is likely to become archived might help to answer this question. It is conceivable that the presence of defective viral forms that contain integrase resistance mutations that relate to the R263K pathway might be much more common than previously thought. However, such defective viruses might not easily be able to grow.
Dolutegravir and Other Integrase Inhibitors for the Management of HIV-Positive Individuals
Thus, DTG is certainly an agent to consider for patients entering first-line therapy, since the development of R263K and a subsequent mutation may not confer any deleterious effect in regard to patient well-being. In contrast, it is clear that the prior development of mutations associated with resistance against RAL or EVG may compromise the use of DTG in salvage therapy, since each of the Viking I, II, and III studies showed that DTG cannot always be successfully used to salvage patients who were first treated with RAL or EVG and who failed those regimens with resistance-associated mutations [16]. It is also true that some patients who first failed RAL-or EVG-based regimens have responded virologically when treated with DTG as part of second-line therapy, although the durability of the success of DTG in this setting remains to be determined. It is also doubtless true that many patients who have failed RAL and/or EVG may have exhausted many treatment options and that DTG may represent the only reasonable hope for some of these individuals. Nonetheless, it is probably false to believe that integrase inhibitors can or should be used sequentially, beginning with a less potent drug such as RAL or EVG and then switching to DTG; treatment should be initiated with the best drugs that are approved for therapy.
Related to this is that none of the series of secondary mutations to R263K at positions H51Y, M50L, or E138K has ever been shown to restore viral replication capacity, although these may add incrementally to the levels of DTG resistance associated with R263K [17][18][19].
Conclusions
As stated above, this article makes reference to concepts that should first be studied in animal models such as humanized mice that are infected by HIV or rhesus macaques that are infected by simian immunodeficiency virus (SIV). Although, some clinicians have experimented with monotherapy in the past and are likely to do so again, it is likely that further justification for such studies may first come from clinical trials in which patients are first suppressed with DTG plus two other drugs and then maintained on DTG monotherapy. Some might argue that the development of compensatory mutations associated with DTG might only be a matter of time. With each passing day without resistance to DTG in first-line therapy, the hypothesis that has been advanced here becomes more compelling. Among other considerations, it should be noted that failure to develop resistance to DTG or to experience a rebound in viral load in DTG-treated patients could conceivably lead to an inability of people treated with DTG to transmit HIV to others [20,21]. Should this turn out to be the case, there might be profound implications both for future HIV transmission and the sustainability of the HIV epidemic. Such a positive consequence might require that all future HIV-infected persons worldwide be initiated on DTG as a part of first-line therapy. | 2016-03-14T22:51:50.573Z | 2014-08-27T00:00:00.000 | {
"year": 2014,
"sha1": "4d5d57a80a06b2f34be09669f3c010b5b8807a10",
"oa_license": "CCBY",
"oa_url": "http://www.mdpi.com/1999-4915/6/9/3377/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "4d5d57a80a06b2f34be09669f3c010b5b8807a10",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
80798140 | pes2o/s2orc | v3-fos-license | 2042 CYP2C19*2 and PON1 Q192R polymorphisms are associated with platelet reactivity to clopidogrel in Puerto Rican Hispanics with cardiovascular disease
OBJECTIVES/SPECIFIC AIMS: High on-treatment platelet reactivity (HTPR) with clopidogrel imparts an increased risk for ischemic events in adults with coronary artery disease. Although more potent antiplatelet agents are available, clopidogrel remains the most commonly used P2Y12 inhibitor in Puerto Rico. Platelet reactivity varies with ethnicity and is influenced by both clinical and genetic variables; however, no clopidogrel pharmacogenetic studies with Puerto Rican patients have been reported. Therefore, we sought to identify clinical and genetic determinants of on-treatment platelet reactivity in a cohort of Puerto Rican patients with cardiovascular disease. METHODS/STUDY POPULATION: We performed a retrospective study of 111 Puerto Rican patients on 75 mg/day maintenance dose of clopidogrel. Patients were allocated into 2 groups: Group I, without HTPR; and Group II, with HTPR. Clinical data was obtained from the medical record. Platelet function was measured ex vivo using the VerifyNow® P2Y12 assay and HTPR was defined as P2Y12 reaction units (PRU)≥230. Genotyping of CYP2C19, ABCB1, PON1, PY2R12, B4GALT2, CES1, and PEAR1 was performed using Taqman® Genotyping Assays. RESULTS/ANTICIPATED RESULTS: The mean PRU across the cohort was 203±61 PRU (range, 8–324), and 42 (38%) patients had HTPR. One in four individuals carried at least 1 copy of the CYP2C19*2 variant allele. Hematocrit and PON1 p.Q192R variant were inversely correlated with platelet reactivity (p<0.05). Multiple logistic regression showed that 27% of the total variation in PRU was explained by a history of diabetes mellitus, hematocrit, CYP2C19*2, and PON1 p.Q192R. Body mass index (OR=1.15; CI: 1.03–1.27), diabetes mellitus (OR=3.46; CI: 1.05–11.43), hematocrit (OR=0.75; CI: 0.65–0.87), and CYP2C19*2 (OR=4.44; CI: 1.21–16.20) were the only independent predictors of HTPR. DISCUSSION/SIGNIFICANCE OF IMPACT: In a representative sample of Puerto Rican patients with cardiovascular disease, diabetes mellitus, hematocrit, CYP2C19*2, and PON1 p.Q192R were associated with on-treatment platelet reactivity. These factors may identify a subset of patients at higher risk for adverse events on clopidogrel in the Hispanic population.
2042
CYP2C19*2 and PON1 Q192R polymorphisms are associated with platelet reactivity to clopidogrel in Puerto Rican Hispanics with cardiovascular disease High on-treatment platelet reactivity (HTPR) with clopidogrel imparts an increased risk for ischemic events in adults with coronary artery disease. Although more potent antiplatelet agents are available, clopidogrel remains the most commonly used P2Y12 inhibitor in Puerto Rico. Platelet reactivity varies with ethnicity and is influenced by both clinical and genetic variables; however, no clopidogrel pharmacogenetic studies with Puerto Rican patients have been reported. Therefore, we sought to identify clinical and genetic determinants of on-treatment platelet reactivity in a cohort of Puerto Rican patients with cardiovascular disease. METHODS/STUDY POPULATION: We performed a retrospective study of 111 Puerto Rican patients on 75 mg/day maintenance dose of clopidogrel. Patients were allocated into 2 groups: Group I, without HTPR; and Group II, with HTPR. Clinical data was obtained from the medical record. Platelet function was measured ex vivo using the VerifyNow ® P2Y12 assay and HTPR was defined as P2Y12 reaction units (PRU) ≥230. Genotyping of CYP2C19, ABCB1, PON1, PY2R12, B4GALT2, CES1, and PEAR1 was performed using Taqman ® Genotyping Assays. RESULTS/ANTICIPATED RESULTS: The mean PRU across the cohort was 203 ± 61 PRU (range, 8-324), and 42 (38%) patients had HTPR. One in four individuals carried at least 1 copy of the CYP2C19*2 variant allele. Hematocrit and PON1 p.Q192R variant were inversely correlated with platelet reactivity (p < 0.05). Multiple logistic regression showed that 27% of the total variation in PRU was explained by a history of diabetes mellitus, hematocrit, CYP2C19*2, and PON1 p.Q192R. Body mass index (OR = 1.15; CI: 1.03-1.27), diabetes mellitus (OR = 3.46; CI: 1.05-11.43), hematocrit (OR = 0.75; CI: 0.65-0.87), and CYP2C19*2 (OR = 4.44; CI: 1.21-16.20) were the only independent predictors of HTPR. DISCUSSION/SIGNIFICANCE OF IMPACT: In a representative sample of Puerto Rican patients with cardiovascular disease, diabetes mellitus, hematocrit, CYP2C19*2, and PON1 p.Q192R were associated with on-treatment platelet reactivity. These factors may identify a subset of patients at higher risk for adverse events on clopidogrel in the Hispanic population.
2269
Day-to-day association between alcohol use and physical activity in university students
Scott Graupensperger and Michael B. Evans Penn State Clinical and Translational Science Institute
OBJECTIVES/SPECIFIC AIMS: The goal of the present study was to advance our understanding of how alcohol use may contribute to physical inactivity among university students by investigating this association at a day-to-day level. METHODS/STUDY POPULATION: In total, 57 university students (Mage = 20.27; 54% male) completed daily diary questionnaires using a cellphone application, which prompted them each evening to report minutes of moderate/ vigorous physical activity engaged in, and number of alcoholic drinks consumed, as well as intended minutes of physical activity for the following day. Longitudinal mixed-level modeling was used to disentangle within person and between-person effects of alcohol use on physical activity behavior and intentions. Separate models were run to investigate lagged effects of previous day alcohol use. We controlled for sex and age in all models. RESULTS/ANTICIPATED RESULTS: Results indicated that participants' usual alcohol use (between-person) was not associated with physical activity behavior or intentions. At the within-person level, day-to-day variance in alcohol use was negatively associated with both physical activity behavior (γ = − 0.34, p = 0.003) and intentions to engage in physical activity the following day (γ = − 0.70, p < 0.001). The lagged model indicated that previous day alcohol use negatively predicted PA behavior (γ = − 0.33, p = 0.004).
DISCUSSION/SIGNIFICANCE OF IMPACT: Previous studies have largely been constrained to cross-sectional designs, and have surmised that there exists a positive association between alcohol use and physical activity due to trait-level differences between university students. We advance this literature by using ecological momentary assessment to investigate the within-person effects of alcohol use on physical activity at a day-to-day level while controlling for betweenperson variance. Contrary to existing literature, we found that on days when students consumed relatively more alcohol than they typically report, they: (a) report fewer minutes of physical activity on the same day, (b) plan to engage in relatively less physical activity on the subsequent day, and (c) engage in less physical activity on the subsequent day. By advancing our understanding of how alcohol use may curtail other health behaviors such as physical activity, we inform interventions that aim to target these behaviors in conjunction, or as part of a multiple behavior change intervention.
2327
Decoding/encoding somatosensation from the hand area of the human primary somatosensory (S1) cortex for a closed-loop motor/sensory brain-machine interface (BMI) Brian Lee, Richard Andersen, Helena Chui and William Mack University of Southern California OBJECTIVES/SPECIFIC AIMS: A brain-machine interface (BMI) is a device implanted into the brain of a paralyzed or injured patient to control an external assistive device, such as a cursor on a computer screen, a motorized wheelchair, or a robotic limb. We hypothesize we can utilize electrical stimulation of subdural electrocorticography (ECoG) electrodes as a method of generating the percepts of somatosensation such as vibration, temperature, or proprioception. METHODS/STUDY POPULATION: There will be 10 subjects, who are informed, willing, and consented epilepsy patients undergoing initial surgery for placement of subdural ECoG electrodes in the brain for seizure monitoring. ECoG will be used as a platform for recording high-resolution local field potentials during real-touch behavioral tasks. In addition, ECoG will also be used to electrically stimulate the human cerebral cortex in order to map and understand how varying stimulation parameters produce percepts of sensation. RESULTS/ANTICIPATED RESULTS: To determine how tactile and proprioceptive signals are integrated in S1, we will perform spectral analysis of the broadband local field potentials to look for increased power in specific frequency bands in the ECoG recordings while touching or moving the hand. To explore generating artificial sensation, the subject will be asked to perform a variety of tasks with and without the aid of stimulation. We anticipate the subject's performance will be enhanced with the addition of artificial sensation. DISCUSSION/SIGNIFICANCE OF IMPACT: Many patients might benefit from a BMI, such as those with stroke, amputation, spinal cord injury, or brain trauma. The current generation of BMI devices are guided by visual feedback alone. However, without somatosensory feedback, even the most basic limb movements are difficult to perform in a fluid and natural manner. The results from this project will be crucial to developing a closed loop motor/sensory BMI.
2564
Designing for dissemination: Characteristics of Clinical and Translational Science Award (CTSA) hubs as adopters of clinical and translational science innovation Elaine H. Morrato 1 , Lindsay Lennox 2 and Anne Schuster 1 1 Colorado School of Public Health, University of Colorado, Anschutz Medical Campus, Aurora, CO, USA; 2 Department of Communication, University of Colorado, Denver, CO, USA OBJECTIVES/SPECIFIC AIMS: The Clinical and Translational Science Award (CTSA) program is a national consortium of 50 + academic medical research centers charged with accelerating the translation of clinical research. In 2017, the NIH National Center for Advancing Translational Sciences anticipates total CTSA program funding of over $500M. The consortium's hub-and-spoke structure makes it a natural dissemination network, and the newest funding announcement makes dissemination of innovation across the consortium an explicit goal, but characteristics of CTSA hubs as adopters and transmitters of innovation are unknown. METHODS/STUDY POPULATION: A content analysis was conducted using data from CTSA hub Web sites (n = 64) and a structured coding taxonomy based on 6 constructs drawn from literature about diffusion of innovation in service organizations (Greenhalgh et al., 2004): dissemination priority, institutional complexity, communication infrastructure, | 2019-03-18T14:04:21.313Z | 2018-06-01T00:00:00.000 | {
"year": 2018,
"sha1": "09e347c5ea6e35e7b138ec4f02ee03115fe35e36",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/D72E742B68BAAE98149E4BA3BF98BE24/S2059866118000584a.pdf/div-class-title-2042-cyp2c19-2-and-pon1-q192r-polymorphisms-are-associated-with-platelet-reactivity-to-clopidogrel-in-puerto-rican-hispanics-with-cardiovascular-disease-div.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6866a6e65d5a56b744258e621793bd049e5687a3",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
182948096 | pes2o/s2orc | v3-fos-license | A decade of sustained geographic spread of HIV infections among women in Durban, South Africa
Background Fine scale geospatial analysis of HIV infection patterns can be used to facilitate geographically targeted interventions. Our objective was to use the geospatial technology to map age and time standardized HIV incidence rates over a period of 10 years to identify communities at high risk of HIV in the greater Durban area. Methods HIV incidence rates from 7557 South African women enrolled in five community-based HIV prevention trials (2002–2012) were mapped using participant household global positioning system (GPS) coordinates. Age and period standardized HIV incidence rates were calculated for 43 recruitment clusters across greater Durban. Bayesian conditional autoregressive areal spatial regression (CAR) was used to identify significant patterns and clustering of new HIV infections in recruitment communities. Results The total person-time in the cohort was 9093.93 years and 613 seroconversions were observed. The overall crude HIV incidence rate across all communities was 6·74 per 100PY (95% CI: 6·22–7·30). 95% of the clusters had HIV incidence rates greater than 3 per 100PY. The CAR analysis identified six communities with significantly high HIV incidence. Estimated relative risks for these clusters ranged from 1.34 to 1.70. Consistent with these results, age standardized HIV incidence rates were also highest in these clusters and estimated to be 10 or more per 100 PY. Compared to women 35+ years old younger women were more likely to reside in the highest incidence areas (aOR: 1·51, 95% CI: 1·06–2·15; aOR: 1.59, 95% CI: 1·19–2·14 and aOR: 1·62, 95% CI: 1·2–2·18 for < 20, 20–24, 25–29 years old respectively). Partnership factors (2+ sex partners and being unmarried/not cohabiting) were also more common in the highest incidence clusters (aOR 1.48, 95% CI: 1.25–1.75 and aOR 1.54, 95% CI: 1.28–1.84 respectively). Conclusion Fine geospatial analysis showed a continuous, unrelenting, hyper HIV epidemic in most of the greater Durban region with six communities characterised by particularly high levels of HIV incidence. The results motivate for comprehensive community-based HIV prevention approaches including expanded access to PrEP. In addition, a higher concentration of HIV related services is required in the highest risk communities to effectively reach the most vulnerable populations. Electronic supplementary material The online version of this article (10.1186/s12879-019-4080-6) contains supplementary material, which is available to authorized users.
Background
South Africa has over 7 million human immunodeficiency virus (HIV) infected individuals [1,2]. HIV prevalence in the country varies greatly across the provinces [3] and is highest in KwaZulu-Natal (KZN). The reasons for the higher burden in KZN (particularly among young women and girls) are multi-faceted and dependent on complex economic and localised social, behavioural and cultural factors [4]. Coupled with high HIV infection rates, is the high prevalence and incidence of other curable sexually transmitted infections (STIs) [5]. Furthermore, factors such as age of sexual debut [6], having a male partner aged 25-34 [7,8], being unmarried or not living with a partner [9], multiple sexual partners [10,11], lack of condom use [1], living in a low antiretroviral therapy (ART) coverage area [12] and high mobility [13], have also been reported as contributing factors in a young woman's vulnerability to the risk of HIV acquisition.
Given the persistent high HIV incidence rates, particularly among young women in KZN [14], it is clear that more effective approaches are required to tackle the epidemic [15][16][17][18][19][20]. One aspect receiving increasing interest is finer scale geo-spatial analysis of HIV infection patterns to facilitate more appropriate and geographically targeted interventions [21,22]. Previous studies have frequently demonstrated the profound impact of HIV clustering on the spread of the disease [23][24][25]. Furthermore, given the current climate of declining funds and consistent with the global recommendations from the Joint United Nations program on HIV/AIDS (UNAIDS), optimal allocation of the limited resources requires identification of the geographical patterns and clustering of new HIV infections [26]. The HIV Prevention Research Unit (HPRU) of the South African Medical Research Council (SAMRC) had participated in several large-scale HIV prevention trials from 2002 to 2012. The trials have been conducted among several community-based research sites located across the greater Durban region. These trials primarily sought to determine the efficacy of various female initiated HIV prevention options; however, investigational products being tested found no effect on preventing HIV infections [15][16][17][18][19]. The primary objective of this study is to assess the geographical variations and clustering of HIV infections at a localized level in recruitment communities. Results from our study will guide policy makers to develop tailored prevention strategies in order to allocate scarce resources by targeting the most at risk individuals at a sub-geographical level.
Study area and population
From 2002 to 2012, the HPRU of the SAMRC participated in five international multi-centre HIV prevention clinical trials [15][16][17][18][19]. A total of 9145 consenting women were enrolled in this combined cohort over the period of ten years. All consenting women's places of residence (or nearest location point to residence) were geo-coordinated using Geographic Positioning System (GPS) at the time of enrolment and updated during follow-up visits (Additional file 2: Table S1 details the participating communities). We had access to GPS data of 7557 women, which were included in this analysis (Fig. 1).
GPS coordinate data were collected using Garmin™ Nüvi (model 2360) handheld devices, downloaded into Microsoft Access and plotted spatially using ArcGIS (version 10·4, CA). A total of 43 community-level recruitment area boundaries were developed based on census delineations [27].
Recruitment of trial participants was from communities in urban (peri-urban) and rural areas. Details of the study population have been described elsewhere [15][16][17][18]. For the purpose of the present analysis, data for women that were common across the clinical trials was extracted, combined and reported as unidentified and not study or site-specific [15][16][17][18][19]28].
HIV incidence was the endpoint of all trials included in this analysis. HIV diagnostic testing was conducted using two rapid tests on whole blood sourced from either finger-prick or venepuncture (Determine HIV-1/2, Abbot Laboratories, Tokyo, Japan and Oraquick, Orasure Technologies, Bethlehem, PA, USA). The Abbot IMX Enzyme-linked immunosorbent assay (ELISA) test (Abbot Diagnostics, Africa Division), in combination with the Vironostika HIV1/2 ELISA was used on whole blood sourced from venepuncture for discordant/unequivocal results.
The age eligibility criteria were consistent across all trials (> 18 years of age) except for one which enrolled women aged 16 and older. Median age across the trials varied marginally between 24 and 28 years. The average screening to enrolment ratio was 47% in this combined cohort. Other eligibility criteria were broadly similar for all studies. At each visit, participants received HIV risk reduction counselling, STI testing and treatment and had access to male and/or female condoms. Women who tested HIV positive at screening were referred to local health care facilities for care and support. Women who HIV seroconverted during the trial remained in the study and received ongoing safe sex counselling, STI testing and treatment, and condom provision. All participants provided written informed consent to participate in the studies.
Socio-demographic data collection
Data pertaining to age, contraceptive use, STI at screening (Chlamydia, Gonorrhoea, Syphilis and Trichomonas) and condom use at last sex act was consistent across all five clinical trials. Four of the clinical trials included marital status/cohabiting and parity data collection. The number of sex partners in the last three months was collected in three of the five clinical trials.
Characterisation of high incidence communities
Participant data were classified into five age categories (< 20, 20-24, 25-29, 30-34, 35+ years old) across three-time ranges (2003-2006, 2007-2009 and 2010-2012). HIV incidence rates were calculated in each of fifteen strata (5 age groups and 3 time periods) in every recruitment area and a combined weighted estimate for each recruitment area was calculated. We employed direct age-time period standardization to obtain standardized HIV incidence estimates per area to facilitate legitimate geographical comparison (free from the influence of underlying differences in the age-composition of participants in different areas as well as overall incidence changes over time). The reference population was all women enrolled in the HIV prevention trials conducted at HPRU sites between 2002 and 2012.
The characteristics of the study population were compared across pre-defined HIV incidence rate categories (≤5, 5-6·9, 7-8·9 and 9+ per 100 person-year (PY)) as the dependent ordinal variable. This analysis utilised data at the individual women-level within this ordinal variable. Age, marriage/cohabitation, type of contraception used, STI at baseline, number of sexual partners in the last three months and parity data were included as the explanatory variables at the individual level. Univariable and multivariable ordered logistic regression, with recruitment area (community or group of communities) cluster robust standard errors, were used to identify prominent factors associated with higher incidence.
Stepwise forward selection regression (inclusion if p < 0·10) was used to construct the final multivariable model. Adjusted odds ratios (aOR) and 95% confidence intervals (CI's) are presented in Table 1. The proportional odds (or parallel regression) assumption for this modelling approach was checked and upheld (Brant's test) [29].
An aggregate of women's residence per recruitment area representing a minimum of 50 PY were included in HIV incidence rate calculations (n = 7557) following exclusion of a) outlying participants not resident within core recruitment areas (n = 194), or residing in areas with less than 50 total PY (n = 207), (b) participants for whom no GPS data had been collected (n = 1054) and (c) participants who did not attend any follow up visits post enrolment (n = 133) (Fig. 1). The crude HIV incidence rate across all communities was calculated by dividing the number of seroconversions by total person-years of observation.
Micro-geographical clustering analysis
We employed the most widely used Bayesian conditional autoregressive (CAR) hierarchical model to assess incidence risk across the 43 areas and mapped the relative risk for each community ( Figure 3) [30]. Bayesian hierarchical models are one of the main statistical approaches for making inferences regarding the underlying relative risks of a given disease across often disjointed geographical areas. In addition to the parameters to account for shared boundaries (neighbouring areas) and unmeasured heterogeneity within areas, we also included additional covariate terms for age and time period to account for the potential confounding effect of these covariates and how they may vary across the 43 suburbs. We employed a Bayesian conditional autoregressive (CAR) hierarchical model to assess incidence risk across the 43 areas and mapped the relative risk for each community. Bayesian hierarchical models are most commonly used to address the problems posed by small area analysis. We utilised the Besag, York and Molliè (BYM) or convolution CAR model, which is formulated as follows: where γ i is the standardised relative risk in area i = 1 to 43, O i is the number of incident HIV events in area i, E i is the expected number of incidence events based on person time contribution, ε i is the small area unstructured random effect term (to capture unstructured heterogeneity) and φ i is the CAR spatial term (to capture structured variation). We used an adjacency matrix of common neighbouring areas (shared boundaries) of a given suburb when modelling this parameter whereby φ i is the sum of the weighted neighbourhood values; ε i was modelled assuming an independent normal distribution ε i Nð0; σ 2 ε ) with variance σ 2 ε . Non-informative gamma priors were used for variance parameters in both the unstructured and structured random effects.
In addition to the classical parametrization of the model proposed by BYM we also included additional covariate terms for age and period of the individual within the regions 1 to 43 to account for the confounding effect of these covariates and how they may vary across the 43 suburbs. Markov chain Monte Carlo simulation (MCMC) was used to fit this model [30] and implemented in the Bayesian software package WinBUGS [31]. Visual inspection of the parameter series plots was used to assess model convergence as well as by using Gelman-Rubin statistics [32].
Ethics approval and consent to participate
Ethical approval for the trials were received from the University of KwaZulu-Natal Biomedical Research Ethics Committee and the South African Medical Research Council ethics committee.
Results
The total person-time in the cohort was 9093.93 years and 613 seroconversions were observed.
Characterisation of the participants
Demographic characteristics and baseline sexual behaviour of women are presented overall and across the categories of HIV incidence rates ( Table 1). The mean and standard deviation of age in the cohort was 27.75 and 7·86, respectively. More than 60% of the women participants were less than 30 years old. Majority of the women (> 80%) were unmarried and/or not living with their sexual partners and more than half of the women had no education. The most commonly used contraception at baseline were injectables (53%); while 10 and 15% of the study population reported using oral contraceptives and male/female condoms respectively. Approximately two thirds of women reported condom-use during their last sex act. At baseline, prevalence of STIs exceeded 18% in the 5-6.9, 7-8·9 and 9+ incidence categories compared with 15% in the ≤5 HIV incidence category. Most women had already given birth previously; specifically, 43, 22 and 22% reported having 1, 2 and 3 children respectively.
Micro-geographical clustering analysis
Estimated relative risks (RRs) from the Bayesian CAR model were estimated across the 43 sub-geographical units and presented in Additional file 1: Figure S1. Our analysis identified six clusters located centrally and in the northern neighbouring areas of Durban ( Figure 2). Estimated RRs for these clusters ranged from 1.34 to 1.70. Consistent with these results, age standardized HIV incidence rates were also highest in these clusters and estimated to be as high as 10 to 11 per 100 PY.
Discussion
Through finer geospatial analysis of our data from over 7000 women participants, we observed a continuous and unrelenting hyper HIV epidemic in the greater Durban area over a ten-year period; with HIV incidence rates ranging from 3 to 12 per 100 woman-year (WY). These high HIV incidence rates were observed among trial participants who received regular HIV pre-and post-test counselling, safe sex counselling, treatment of curable STIs and male and female condom promotion. The high incidence rate observed during this period are similar to that reported from the CAP004 trial [33]; conducted between 2007 and 2009 in Durban. In that trial HIV incidence rates in the placebo group was reported to be 9/ 100 PY. Similarly, a cohort study by Nel et al. (2007Nel et al. ( -2009) observed an HIV incidence rate of 7.2/100 PY in the greater Durban area [34]. More recent data from the ASPIRE trial [35] showed that HIV incidence rates ranged from 5.4/100py to 8/100PY in the same communities from which this combined data was generated. The ASPIRE trial, suggested a slight decline in HIV incidence in these communities, However, rates as high as 8/100PY are unacceptable given the aggressive roll out of ARV treatment and access to standard prevention options in the country [35].
This is the first comprehensive study to analyse the geographic variation and clustering of HIV incidence rates among women in the greater Durban region. This study has several strengths which overcome many limitations which usually arise with averaged incidence maps (with often low number of events which make robust statistical interference difficult). We employed a Bayesian CAR model, which simultaneously estimates stable spatial and temporal structured patterns and departures from these stable components. In doing so we could capture spatial correlations or unobserved heterogeneity in HIV incidence that cannot be explained by time and other covariates. Results from this analysis underscored several pockets or clusters with high HIV infections in the greater Durban region. These areas were confined to central and northern territories. Previous work conducted in KwaZulu-Natal has examined the micro-geographical patterns of HIV prevalence [36] and HIV incidence in a rural setting over a decade of demographic surveillance [35]. However, no comparative work has been done in an urban setting in Northern KwaZulu-Natal. Similarly, our findings presented prominent heterogeneity in HIV rates in a relatively small geographic extent. In totality, these studies suggest apparent 'corridors' of elevated transmission [37] in the region which, based on the findings herein, may also be Fig. 2 Relative risks (RRs) from the Bayesian conditional autoregressive (CAR) model and age-period adjusted HIV incidence rates (per 100 PY) for the six clusters with significantly higher incidence rate seen in other urban/peri-urban communities of KZN, as well as other similar hyper-endemic rural populations.
Concurrent with the previous studies, our results also provided additional evidence that these geographically clustered areas could be further differentiated with established risk factors for HIV seroconversion [6][7][8][9][10][11][12][38][39][40]. As observed in our data, a large number of women, especially younger women, do not seem to be in a stable relationship and are likely to have multiple partners, use barrier methods less frequently thus potentially increasing their risk of HIV acquisition. This was further confirmed by high rates of other curable STIs. These findings are in keeping with those reported previously in KwaZulu-Natal [41]. Research on the role of STIs and inflammation and its impact on the vaginal microbiome and HIV acquisition risk is currently gaining international interest [42][43][44]. While behavioural and structural risks are important drivers of the HIV acquisition [45], considerable research is being undertaken to understand biological risk of HIV acquisition. It is more likely that in our region, all three risks play a significant role in the risk of HIV acquisition. Consistent with the previous studies, we also showed that women who use injectable and oral contraceptives are at a greater risk of being in a high HIV incidence category. These findings are similar to what we observed in another study [46].
Our results and that of others [41] have consistently shown that younger women are at a greater increased risk of being in the higher HIV incidence categories [47]. More recently, Akullian et al. [8] and de Oliviera et al. [7] showed that when these young women form sexual relationships with men aged 25-34 years of age, they are at particularly high risk for acquiring HIV.
South Africa has the largest ART programme in the world with just over 3.4 million individuals receiving medication [48]. The implementation of universal changes to national policies has been slow with sporadic uptake of ART from HIV positive participants, especially during the earlier period of the trials. This could explain, at least in part, some of the intra-community HIV incidence variability as well as the ubiquitously high rates of infection throughout. In addition, these findings provide further evidence for multiple sub-epidemics within a single region [49] in line with findings from KwaZulu-Natal [37] and other parts of Africa [50][51][52]. While understanding the spatial structure of HIV burden may allow for an appropriate concentration of services at a micro-geographic level, our data depicts an overall high HIV incidence among women in the entire greater Durban area and calls for a more blanketed and community-wide intervention and prevention efforts to have a major impact in this setting, with a particular focus on addressing the key drivers of the higher incidence identified among this population. In addition to this strategy, using the identified HIV clusters (that may be acting as a node for onward transmission) provides motivation for those communities to receive additional benefit from a more structured and concentrated HIV prevention and intervention approach. Targeting these HIV clusters may have numerous benefits. Firstly, such pockets of HIV are thought to re-seed nearby populations and may concurrently act as a potential "source of attraction" for HIV positive individuals from other communities. Secondly, targeting high risk areas is economically viable especially in resource constrained settings and provides an ideal starting point for ART implementation and scale-up of treatment as prevention (TasP); a strategy that is imperative to break onward HIV transmission. Indeed, without TasP implemented in such HIV concentrated pockets, one may observe a decreased efficacy of existing population-based intervention programmes. Furthermore, it has been previously demonstrated that ART adherence, and indeed uptake, is heavily negatively impacted by geographical barriers. It therefore makes placement of ARV clinics logistical, economical and ethical sense to target vulnerable individuals, aid adherence issues and ultimately achieve maximum penetration of HIV-related services to high-risk areas. This, in addition to a more blanketed intervention and prevention effort, may be the most successful strategy in reducing HIV incidence in a region with a burgeoning HIV epidemic.
The fact that socio-demographic information was not consistently collected across all trials, does pose a limitation and highlights the issue of missing data. A key issue in the extrapolation of data from clinical trials is the generalisability of the incidence data to women in the same geographical area. This may be a limitation of the study where we report on women, mostly of child-bearing age, that participated in a HIV prevention clinical trial. These women may perceive their risk for HIV as higher than the general population, due to individual circumstances and experiences. However, we utilised a consistent set of inclusion and exclusion criteria to enrol women across the decade of clinical trials. This consistency in eligibility (combined with the age and period standardisation approach employed) means that we are comparing a similar sample of women in all areas across time and space. It is thus unlikely that such selection effects could account for the remarkable spatial heterogeneity observed.
Conclusion
Despite this unfortunate and rather unwelcome result, moving forward there is a clear need to intensify HIV treatment and prevention initiatives in a two-pronged approach. Firstly, due to the unacceptably high HIV incidence in all communities in Durban, an aggressive approach to combination prevention strategies including highly effective pre-exposure prophylaxis (PrEP) is needed. Although PrEP is registered for HIV prevention in the country, the roll-out programs are focussed on high risk groups such as men who have sex with men (MSM), sex workers and students. We make a case for universal roll out of PrEP in the greater Durban region with focus on individual risk assessment and provision and monitoring through innovative community-based programs. Secondly, those communities which are clear clustered transmission nodes, need a greater concentration of HIV related services (such as increased availability of HIV/STI clinics, access to treatment), where individuals at higher risk are targeted, and ultimately act as a powerful adjunct to prevention efforts to have a major impact in this setting.
Funding
The trials where these data were generated were supported by various sponsors [Grant Numbers 21082, G0100137, U01AI048008, U01AI069422 and CB04.106G-7]. The funders of the parent studies had no role in the study design, data collection, data analysis, data interpretation or writing of this report. The corresponding author was a principal investigator/investigator of the trials and had access to the site-specific data and had final responsibility for the decision to submit for publication. The analysis for the present study was supported by the South African Medical Research Council.
Availability of data and materials
The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.
Authors' contributions GR designed the study, aided in manuscript writing and critically reviewed and edited the manuscript. NM trained field teams, managed GPS data and generated the standardised and cluster maps. HW ideated statistical methodology and created the map. TR performed statistical analysis, data interpretation and generation of tables. JDY performed literature searches and drafted the manuscript. BS performed Bayesian CAR analysis and critically reviewed the manuscript. FT oversaw the statistical and spatial analysis and contributed by reviewing the manuscript. All authors have read and approved of this manuscript.
Ethics approval and consent to participate All study protocols and informed consent forms were approved by the Biomedical Research Ethics Committee at the University of KwaZulu-Natal as well as the various study-specific Institutional Review Boards. Ethical approval for the trials were received from the University of KwaZulu-Natal Biomedical Research Ethics Committee and the South African Medical Research Council. All participants provided written informed consent to participate in the studies.
Consent for publication
Not applicable. No individual person's data is presented within this manuscript. | 2019-06-11T13:08:47.076Z | 2019-06-07T00:00:00.000 | {
"year": 2019,
"sha1": "69167c9b2ee893986a2dfc13a1668df7efb5932d",
"oa_license": "CCBY",
"oa_url": "https://bmcinfectdis.biomedcentral.com/track/pdf/10.1186/s12879-019-4080-6",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "69167c9b2ee893986a2dfc13a1668df7efb5932d",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
1388906 | pes2o/s2orc | v3-fos-license | Differential Effects of MYH9 and APOL1 Risk Variants on FRMD3 Association with Diabetic ESRD in African Americans
Single nucleotide polymorphisms (SNPs) in MYH9 and APOL1 on chromosome 22 (c22) are powerfully associated with non-diabetic end-stage renal disease (ESRD) in African Americans (AAs). Many AAs diagnosed with type 2 diabetic nephropathy (T2DN) have non-diabetic kidney disease, potentially masking detection of DN genes. Therefore, genome-wide association analyses were performed using the Affymetrix SNP Array 6.0 in 966 AA with T2DN and 1,032 non-diabetic, non-nephropathy (NDNN) controls, with and without adjustment for c22 nephropathy risk variants. No associations were seen between FRMD3 SNPs and T2DN before adjusting for c22 variants. However, logistic regression analysis revealed seven FRMD3 SNPs significantly interacting with MYH9—a finding replicated in 640 additional AA T2DN cases and 683 NDNN controls. Contrasting all 1,592 T2DN cases with all 1,671 NDNN controls, FRMD3 SNPs appeared to interact with the MYH9 E1 haplotype (e.g., rs942280 interaction p-value = 9.3E−7 additive; odds ratio [OR] 0.67). FRMD3 alleles were associated with increased risk of T2DN only in subjects lacking two MYH9 E1 risk haplotypes (rs942280 OR = 1.28), not in MYH9 E1 risk allele homozygotes (rs942280 OR = 0.80; homogeneity p-value = 4.3E−4). Effects were weaker stratifying on APOL1. FRMD3 SNPS were associated with T2DN, not type 2 diabetes per se, comparing AAs with T2DN to those with diabetes lacking nephropathy. T2DN-associated FRMD3 SNPs were detectable in AAs only after accounting for MYH9, with differential effects for APOL1. These analyses reveal a role for FRMD3 in AA T2DN susceptibility and accounting for c22 nephropathy risk variants can assist in detecting DN susceptibility genes.
Approximately 12% of AAs carry two APOL1 risk variants and are at risk for FSGS and ,50% with HIV infection will develop HIVAN in the absence of anti-retroviral therapy. Thus, additional modifying environmental and/or inherited factors appear necessary to initiate kidney disease [13,14]. Since HIV infection increases the risk for nephropathy by a factor of nearly fivefold, it is possible that other environmental or genetic factors interact with APOL1 and/or MYH9 to mediate risk of renal disease. Gene-gene interactions are a likely contributor to susceptibility for diabetic and non-diabetic nephropathy and were the focus of these analyses.
A genome-wide association study (GWAS) using the Affymetrix Genome Wide Human 6.0 SNP chip was completed to identify genetic polymorphisms that mediate risk for T2DM-ESRD in AA [15]. Herein, we scanned the genome to detect polymorphisms mediating risk for T2DM-ESRD, conditional on APOL1 G1/G2 nephropathy risk variants and the MYH9 E1 risk haplotype using case-control and case-only study designs. The case-only design increases the statistical power over more classic case-control designs, allowing us to maximize power for the first genome-wide scan testing for interactions with the strongest genetic risk factor for ESRD. We tested for replication in additional AA cases with T2DM-ESRD and AA controls with T2DM lacking nephropathy. Together, these study designs have the potential to detect additional genes mediating the risk for T2DM-ESRD in AAs, accounting for the effects of non-diabetic etiologies of nephropathy with evidence for association on c22. We were also able to assess whether the adjacent APOL1 and MYH9 genes exhibited similar effects.
Results
The discovery GWAS association analysis included 952 AA cases with T2DM-ESRD and 988 AA non-diabetic, nonnephropathy controls, as published [15]. Principal component (PC) analysis identified one PC that controlled for global admixture in this sample and yielded an inflation factor of 1.01 (see Table S1 and Figure S1). Replication analyses were performed in 640 additional unrelated T2DM-ESRD cases and 683 nondiabetic, non-nephropathy controls recruited using identical criteria. Finally, an additional 513 AA with T2DM lacking nephropathy were subsequently evaluated to determine whether associations observed between T2DM-ESRD cases and nondiabetic, non-nephropathy controls reflected nephropathy susceptibility or risk of T2DM per se. Table 1 displays the numbers in each case and control group that were homozygous for MYH9 E1 risk haplotypes and had 2 APOL1 G1 and/or G2 nephropathy risk variants, along with demographic characteristics. Individuals with the MYH9 E1 haplotype or APOL1 risk variants (homozygous or heterozygous) tended to have greater estimated West African ancestry based on the principal component analysis (PCA) (pvalue,1E 24 ).
Examination of the top 100 SNP interactions with c22 risk variants (Table S2) identified SNPs within two previously reported nephropathy susceptibility genes, FRMD3 and SHROOM3 [16,17]. These genes were further evaluated. Results of the case-only analysis in the T2DM-ESRD discovery samples revealed that 7 SNPs in FRMD3 appeared to interact with the MYH9 E1 haplotype (Table 2), as did 2 SNPs in SHROOM3, rs1493360 and rs17002201 (data not shown). SHROOM3 SNPs failed replication and were not further investigated. The FRMD3 effects were in the same direction in case-only, casecontrol and MYH9 E1 haplotype stratified case-control analyses. These SNPs in FRMD3 were all in high linkage disequilibrium (LD; Yoruban r 2 = 0.95-1.0; CEU r 2 = 1.0) and appeared to confer protective effects against T2DM-ESRD in MYH9-E1 risk homozygotes, despite having significant risk effects in non-E1 homozygotes. This effect was less pronounced for APOL1. Importantly, there was no evidence of association of FRMD3 or any of the other top 100 SNPs with T2DM-ESRD in the original GWAS, prior to accounting for these c22 nephropathy risk variants [15]. Table 3 contains the replication analysis results in T2DM-ESRD cases and non-diabetic, non-nephropathy controls with 5 of the 7 FRMD3 SNPs that could easily be multiplexed. The apparent interactive relationship between MYH9 and FRMD3 SNPs was maintained despite the smaller sample. Weaker effects persisted for APOL1. A combined analysis was then performed using all 1,592 T2DM-ESRD discovery and replication cases relative to all 1,671 non-diabetic, non-nephropathy controls (Table 4). Analyses in T2DM-ESRD cases suggested significant
Author Summary
African Americans have high rates of kidney disease attributed to type 2 diabetes mellitus. However, approximately 25% of patients are misclassified and have nondiabetic kidney disease on renal biopsy. The APOL1-MYH9 gene region on chromosome 22 is powerfully associated with non-diabetic kidney diseases in African Americans. Therefore, we tested for interactions between single nucleotide polymorphisms across the genome with APOL1 and MYH9 non-diabetic nephropathy risk variants in African Americans with presumed diabetic nephropathy. Markers in FRMD3, a gene associated with type 1 diabetic nephropathy in Caucasians, appeared to interact with MYH9; however, increased nephropathy risk was seen in diabetic cases lacking two MYH9 risk haplotypes, and protective effects were seen in those with two MYH9 risk haplotypes. Stratified analyses based on the chromosome 22 nephropathy risk haplotypes demonstrated that FRMD3 variants were associated with diabetic nephropathy risk in cases without two MYH9 (or APOL1) risk haplotypes. It appears that African Americans with diabetes and kidney disease who are not chromosome 22 nephropathy risk variant homozygotes are enriched for the presence of diabetic nephropathy and FRMD3 risk alleles. This genetic dissection ultimately allowed for detection of the FRMD3 diabetic nephropathy gene association in a subset of cases enriched for this disorder.
interactions between FRMD3 SNPs and MYH9 (e.g., rs942280, p = 9.28E 27 additive; OR 0.67, 95% CI 0.57-0.78). Subsequent analyses revealed that FRMD3 SNP rs942280 (and others) were significantly associated with increased risk for T2DM-ESRD in non-MYH9 E1 risk haplotype homozygotes (rs942280 OR 1.28, 95% CI 1.09-1.51), but not in MYH9 E1 risk allele homozygotes (homozygosity p-value comparing the effect of FRMD3 SNPs in MYH9 E1 non-risk homozygotes vs. MYH9 E1 risk homozygotes = 4.82E 24 ). Therefore, the major effect of risk from FRMD3 on T2DM-ESRD susceptibility was present in non-MYH9 E1 haplotype homozygotes. Although the direction of effect was the same when replacing the MYH9 E1 haplotype with APOL1 risk variants, results were less significant. This could have resulted from the smaller number of APOL1 risk homozygotes.
To determine whether the FRMD3 SNPs were associated with susceptibility to T2DM-ESRD or diabetes per se, a final analysis was performed. FRMD3 allele frequencies were compared between the 513 unrelated AAs with T2DM lacking nephropathy and the 1,592 T2DM-ESRD cases (
Discussion
Herein we report association analysis results accounting for the effects of APOL1 and MYH9 on risk of DN in AAs. Stratified and interaction analyses performed in cases with T2DM-ESRD provided an unbiased assessment of potential interactions between both the APOL1 G1/G2 risk variants and MYH9 E1 risk haplotype with nearly one million SNPs across the genome. MYH9 and APOL1 are strongly associated with non-diabetic ESRD in AAs and can potentially limit ability to detect other nephropathy . Interaction analyses between FRMD3 SNPs were repeated in non-diabetic nephropathy cases (with biopsy-proven FSGS and HIVAN) and interactions were not observed. This suggests that the FRMD3 association is limited to DN. In retrospect, the case-only interaction analyses based on MYH9 (and APOL1) risk variants likely segregated clinically diagnosed cases of T2DM-ESRD into those enriched for non-diabetic nephropathy (c22 nephropathy risk homozygotes with disease in the FSGS spectrum) and non-c22 homozygotes enriched for true DN. The discrepant effect of FRMD3 protection in c22 nephropathy homozygotes, versus risk in non-c22 nephropathy homozygotes, raised the possibility that the T2DM-ESRD case group contained subsets of cases with different diseases. We suggest that these groups were not comparable based on c22 status [12]. This partitioning allowed for detection of DN association with FRMD3 SNPs, limited to the non-MYH9 E1 homozygotes.
The analyses were repeated with other SNPs in the complex and extended LD region about the MYH9 E1 haplotype on c22, including APOL1, with comparable directions of results and less significant p-values. Genetic heterogeneity and gene-gene or gene-environment interactions are frequently hypothesized as being important in complex genetic disorders such as nephropathy. It remains uncommon to formally test and replicate interaction and multiple loci models. We posit that some variant's risk may depend on the influences of other genes or non-genetic factors. Clearly, variants with strong effects such as MYH9 and APOL1 are important in and of themselves. However, an important lesson from the current study is that since the MYH9 E1 haplotype is extremely common in AAs and has a large odds ratio, the E1 haplotype may mask the effects at other loci unless methods are used to account for its influence (e.g., multilocus models, interaction analyses, stratification analyses). We have no reason to expect different results in European-derived populations since MYH9 risk variants are also strongly associated with non-diabetic ESRD in Europeans and European Americans; however, larger sample sizes would need to be tested due to the markedly lower frequencies of APOL1 and MYH9 risk variants.
The challenge of developing effective genetic screening tests is the balance between correctly identifying those variants that correctly and accurately predict individuals who will develop disease and those who will not (i.e., balance between sensitivity and specificity). Here, the MYH9 E1 haplotype was both an important predictor and clarifier of the contribution of other loci to the risk of nephropathy. Therefore, the search for additional nephropathy susceptibility loci in AAs, conditional on other important loci (MYH9 and APOL1) remains critical. It initially appeared that variants in FRMD3 were protective and modified risk for developing T2DM-ESRD in AAs with two MYH9 E1 risk haplotypes; however, these same variants were associated with risk for T2DM-ESRD in non-E1 haplotype homozygotes. This observation suggested that the subset of ESRD cases homozygous for the E1 haplotype differed from non-E1 homozygotes as to their etiology of ESRD and is supported by the observation that biopsyproven FSGS can be present in AA with T2DM and heavy proteinuria in individuals homozygous for the MYH9 E1 risk haplotype [12]. Coding variants in APOL1 are major susceptibility loci for non-diabetic nephropathy; however, independent, weaker MYH9 effects remain plausible. This report demonstrates that the FRMD3 association with DN was more readily detectable in non-MYH9 risk homozygotes, relative to non-APOL1 risk homozygotes, an observation that supports a potential independent role for MYH9 in nephropathy susceptibility.
The strong association observed between variants in APOL1 and near the MYH9 gene with several kidney diseases was a major breakthrough in our understanding of nephropathy susceptibility in AA [13,18]. Additional nephropathy susceptibility genes have been identified using GWAS [16,17,19]. For example, the 4.1 protein ezrin, radixin, moesin [FERM] domain containing 3 locus [17]. Despite replication, the GoKinD GWAS failed to reach genomewide significant evidence of association. It was also unclear whether FRMD3 was a susceptibility gene for only T1DMassociated nephropathy or contributed to other etiologies of kidney disease. FRMD3 variants now appear to impact susceptibility to nephropathy from T1DM and T2DM with effects in Europeanand African-derived populations, apparently not in Japanese [20]. FRMD3 is expressed in human kidney [17]. The expression profile of FRMD3 includes human renal mesangial and proximal tubular cells, but has not yet been tested in podocytes [17,21]. FERM domains are present in a variety of mammalian proteins and the functions of FERM domain-containing proteins, although not completely known, imply that these domains link the plasma membrane with cytoskeletal structures at specific cellular locations by directly binding partner proteins and/or phosphoinositides [22]. FRMD3 encodes a protein with unknown function [23]; although other 4.1 protein family members are important in maintenance of cell shape [24] and may maintain cell integrity by interacting with transmembrane proteins and actin filaments [25,26]. Human FERM domain-containing proteins include kinases (focal adhesion kinase [FAK] and Janus kinases [JAKs]), myosins (MYO7, MYO10 and MYO15), phosphatases (proteintyrosine phosphatase 1E [PTPE1]), ERMs, kindlins, talins, and other less well-characterized proteins. Although FERM-domain containing proteins interact with myosins, we do not feel that our genetic results support FRMD3 and MYH9 variants directly interacting to initiate diabetic nephropathy in African Americans. Instead, cases with clinically-diagnosed T2DM-ESRD possessing two copies of the MYH9 E1 risk haplotype more likely had nondiabetic forms of ESRD (in the FSGS family and mis-diagnosed as T2DM-ESRD). When limiting analyses to non-MYH9 risk homozygotes, thereby enriching for T2DM-ESRD, the FRMD3 genetic association became evident. The FRMD3 SNPs that were associated with T1DM-associated nephropathy in GoKinD samples were subsequently tested in our AA T2DM-ESRD cases and non-diabetic, non-nephropathy controls. These SNPs are not in LD with the 7 SNPs identified in this report (r 2 = 0.01-0.03 in both Yorubans and CEU). No evidence of association of the GoKinD associated FRMD3 SNPs was seen in our AA sample with T2DM-ESRD (data not shown).
A limitation of this report was use of a non-diabetic nonnephropathy control group with younger ages and absence of detailed renal phenotyping, relative to T2DM-ESRD cases. Although occult nephropathy in controls would reduce power to detect association, survival-bias could result. Replication in the T2DM non-nephropathy controls likely reduced (but does not eliminate) the potential for survival bias. Similarly, it is difficult to recruit large numbers of AA controls with longstanding T2DM who lack kidney disease, due to the high prevalence of subclinical nephropathy in AAs with diabetes mellitus.
We conclude that variants in FRMD3 contribute to the risk for nephropathy in AA with T2DM, an effect that was observed only after accounting for MYH9 (and less so APOL1) gene variants and evaluating a subset of AA cases likely enriched for T2DMassociated nephropathy. These analyses replicate the FRMD3 association in susceptibility to DN, as well as implicate this gene in African-derived populations. In addition to the nephropathy risk imparted by APOL1 G1 and G2 variants, our results support residual nephropathy risk residing within MYH9, or other c22 variants in linkage disequilibrium with the MYH9 E1 haplotype.
Patient populations
Diagnostic criteria for T2DM-associated ESRD in Wake Forest University School of Medicine (WFUSM) participants (both discovery and replication samples) includes diabetes diagnosis at age .30 years (in the absence of diabetic ketoacidosis); with either renal histologic evidence of DN or diabetes duration $5 years before initiation of renal replacement therapy in the presence of diabetic retinopathy or proteinuria $500 mg/24 h and absence of other known causes of nephropathy [3,15]. Non-diabetic, nonnephropathy controls were recruited to be at low risk for nephropathy based upon the lack of a personal or family history of kidney disease; therefore, renal function testing is not routinely performed due to the low yield of nephropathy. In a subset of 200 non-diabetic, non-nephropathy controls, 98% (196/200) had serum creatinine concentrations ,1.5 mg/dl (maximum 1.85 mg/dl). We note that occult kidney disease in non-diabetic, non-nephropathy controls would bias against association and deflate significance. T2DM non-nephropathy controls met criteria for diabetes and had an estimated glomerular filtration rate .60 ml/min and spot albumin:creatinine ratio ,100 mg/g. Among T2DM non-nephropathy controls, 67.5% had diabetes durations exceeding 5 years and 29.6% reported diabetic retinopathy, 57.5% denied retinopathy; and 12.9% were unsure. All subjects provided written informed consent and studies were approved by the WFUSM Institutional Review Board and adhere to the tenets of the Declaration of Helsinki. Clinical criteria for National Institute of Health (NIH) biopsy-proven FSGS cases (229 with idiopathic FSGS; 54 with HIVAN collapsing glomerulopathy) and 222 controls have also been reported [1].
Genotyping and quality control
Genotyping of the Affymetrix Genome-Wide Human SNP Array 6.0 in the discovery sample of 966 AA cases with T2DM-ESRD and 1032 non-diabetic, non-nephropathy controls was completed at the Center for Inherited Disease Research (CIDR; www.cidr.jhmi.edu) using DNA extracted from peripheral blood. DNA from cases and controls were approximately balanced on each 96-well master plate. A fingerprinting set of 96 SNPs was independently genotyped in all samples and results compared to the corresponding SNPs on the Affymetrix array to confirm sample identity. Genotypes were called using Birdseed version 2; APT 1.10.0 by grouping samples by DNA plate to determine the genotype cluster boundaries. The minimum SNP call rate for an individual was 98.4%. Forty-six blind duplicates were genotyped and had a concordance rate of 99.59%. Cryptic relatedness was identified by the estimated identity-by-descent (IBD) statistics as implemented in PLINK (http://pngu.mgh.harvard.edu/purcell/ plink/). There were two unexpected duplicate pairs and 54 unexpected first-degree relative pairs. One of each of these pairs was removed by the following rules: 1) retain T2DM-ESRD cases over non-diabetic, non-nephropathy controls, and 2) if case/ control status was congruent, retain the individual with the most complete phenotype data. One individual had a self-reported gender inconsistent with X chromosome genotype data and one had an inbreeding coefficient, F-statistic, more than 4 standard deviations from the mean, both were excluded. The results are based on the remaining 952 T2DM-ESRD cases and 988 nondiabetic, non-nephropathy controls. Replication samples were recruited under identical ascertainment criteria to the discovery samples. FRMD3 SNPs were genotyped using the iPLEX TM Sequenom MassARRAY platform for replication. Genotyping efficiency .95% and 45 blind duplicates were included to ensure genotyping accuracy. Genotyping FSGS and HIVAN cases and controls were by TaqMan assays available from ABI Biosystems (Foster City, CA).
Statistical analysis
Each SNP was tested for departure from Hardy-Weinberg Equilibrium (HWE) expectations using a chi square goodness-of-fit test. The primary inference for this conditional/interaction GWAS was the SNPs with ,5% missing and no differential missingness between cases and controls, HWE p-value.1E 24 in cases and .1E 22 in controls and minor allele frequency (MAF) in the entire sample .0.05. A total of 832,357 SNPs met these criteria. However, SNPs that did not meet these criteria were secondarily examined for association with consideration given to potential corroborating evidence of association at flanking SNPs, especially those SNPs with some evidence of HWE departure. The average sample call rate was 99.16% for all autosomal SNPs.
A principal components analysis (PCA) was computed on the 832,357 SNPs to estimate the primary sources of genetic variations, including potential admixture. One principal component (PC) was retained and it correlated highly (r 2 = 0.87) with previously computed admixture estimates based on 70 ancestry informative markers (AIMs) using the program FRAPPE [27]. The same set of AIMs was genotyped in the replication sample and admixture estimates were computed using FRAPPE. As described below, the GWAS association analyses adjust for the first PC and the replication study and combined analyses adjusted for admixture estimates.
Since not all individuals homozygous for APOL1 risk variants and/or the MYH9 E1 risk haplotype develop nephropathy, the probability of developing ESRD may depend on non-genetic factors and other genetic factors interacting with the known c22 risk variants. Thus, a series of complementary logistic regression analyses were computed using the program SNPGWA (www.phs. wfubmc.edu). The analyses were restricted to SNPs with minor allele frequencies .0.10. The primary inference for the following analyses used the additive genetic model for the SNP, provided there was no evidence of departure from the additive genetic model (additive model lack-of-fit test p-value.5E 22 ). If the lackof-fit to an additive model was significant, then the minimum of the dominant, additive and recessive model is reported. In addition, additive genetic models required at least ten individuals homozygous for the minor allele and recessive models required at least 30 individuals homozygous for the minor allele.
The primary analysis consisted of a case-only test for an interaction between homozygosity for the MYH9 E1 haplotype or APOL1 risk variants (G1/G1; G2/G2; G1/G2) and individual SNPs across the genome. Specifically, a logistic regression model was computed in cases where the binary outcome was homozygosity for APOL1 risk SNPs or MYH9 E1 haplotypes (versus not homozygous) and independent variables (covariates) were age, gender, first PC to account for admixture and SNP. The case-only analysis makes the strong assumption that the SNP being tested and homozygosity for the c22 variants are independent under the null hypothesis of no interaction. If the assumption of independence under the null hypothesis is met, this case-only analysis can have considerably more statistical power than the corresponding classic case-control interaction model [28]. To make the inference as robust to this assumption as possible, the test was restricted to those SNPs not on c22; note by Mendel's Law of Independent Assortment chromosomes are inherited independently and therefore the independence assumption is met. This assumption was further examined by testing for the interaction in the control sample.
As an aid to interpret the case-only interaction analysis, the corresponding classic two-locus logistic regression interaction model was computed. Here, the logistic regression model had T2DM-ESRD status as the outcome, and the predictor variables (covariates) of age, gender, PC, the SNP, an indicator variable for two APOL1 risk variants or MYH9 E1 haplotype homozygosity and the centered cross-product of the SNP and indicator for c22 risk variant homozygosity. Here we mean the standard logistic regression model for two predictor variables (say X 1 and X 2 ) with their interaction term, a centered cross-product (e.g., Z) to reduce collinearity/correlation among the variables. Specifically, we would write this model as: logit y ð Þ~b 0 zb 1 X 1 zb 2 X 2 zb 3 Z; where, X 1 is the SNP and X 2 is the indicator variable for the APOL1/MYH9 haplotype (see below), respectively and Z is the center cross-product defined as Þ . The variable Z is defined in this way to reduce the collinearity or correlation among the predictors for better estimation properties. The indicator variable is a binary variable that codes an individual as either 0 or 1, depending on the characteristic of interest. Here, the indicator variable was 1 if the person was homozygous for the APOL1/MYH9 haplotype (easily determinable as it is a recessive model and phase is unambiguous) and 0 if they were not homozygous for these risk haplotypes. This binary (0, 1) variable was included in the logistic regression model. For the case-only analysis, this indicator variable was the outcome in the logistic regression analysis and for the classic two-locus interaction logistic regression models it was one of the predictor variables. Subsequent analyses stratified by homozygosity at the MYH9 E1 haplotype and APOL1 risk variants. A logistic regression model was computed in individuals homozygous for c22 variants, where T2DM-ESRD status was the outcome and the independent variables (covariates) in the model included age, gender, the first PC and the SNP of interest. The analysis was repeated for individuals not homozygous for c22 variants and the test for homogeneity of the odds ratio was computed. Analyses in the replication cohorts paralleled those in the discovery cohort.
To determine whether associated SNPs from analyses contrasting individuals with T2DM-ESRD to those without diabetes were DN-associated or T2DM-associated, allele frequencies were compared between AA with T2DM lacking nephropathy to those in the combined T2DM-ESRD case groups and the combined non-diabetic, non-nephropathy control groups. Assuming a recessive model for the MYH9 and APOL1 risk variants with main effect OR = 1.5, haplotype frequency of 0.64, and an additive genetic model for the FRMD3 SNPs having no main effect (OR = 1.0) with minor allele frequency of 0.32, then with a type 1 error rate of a = 1 210 , we have 0.50 power to detect an OR = 2.05 and 0.80 power to detect an OR = 2.34. | 2016-05-12T22:15:10.714Z | 2011-06-01T00:00:00.000 | {
"year": 2011,
"sha1": "6bb0ef99db727c97e57c20c805f01cfe6b832f9e",
"oa_license": "CC0",
"oa_url": "https://journals.plos.org/plosgenetics/article/file?id=10.1371/journal.pgen.1002150&type=printable",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "6bb0ef99db727c97e57c20c805f01cfe6b832f9e",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
247919235 | pes2o/s2orc | v3-fos-license | Proposal for Handling of Medicine Shortages Based on a Comparison of Retrospective Risk Analysis
Introduction: We reviewed and compared current drug shortages and shortage management practices in six selected countries (Hungary, Belgium, Spain, Switzerland, Australia, United States) based on the most comprehensive national shortage databases for each country, for four Anatomical Therapeutic Chemical (ATC) groups, to analyze the criticality of drug shortages across countries and identify best practices in shortage management strategies. Materials and Methods: Countries were selected to cover a wide geographical range of high-income nations where a lack of economic power as a potential source of drug shortages is not observable. ATC groups were selected based on a pre-examination of the databases to analyze groups most often in shortage, and groups where the absence of which could have a severe negative impact on treatment outcomes. The bias originating from the different reporting systems had to be reduced to gain comprehensive and comparable information. The first bias-reducing mechanism was transforming the raw number of shortages into proportion per million people. Secondly, critical cases were classified, and thirdly, critical cases were compared with the Word Health Organization (WHO) Essential Medicine Lists. Results: The results indicate that every European country studied reports significantly higher total and critical shortages per population compared to the US and Australia. Within Europe, Hungary reports the highest number of cases both for total and critical shortages, while Spain has the lowest results in both aspects. While in the US and Australia critical shortages were observable in similar proportions across all ATC groups, in European countries ATC groups of anti-infectives for systemic use (J) and the nervous system (N) were found to account for a notably higher proportion of critical shortages. Current shortage management practices were examined in each country and classified into five groups to identify common best practices. Conclusions: Due to the different characterization of reporting systems, several bias-reducing mechanisms should be applied to compare and evaluate shortages. In addition, European harmonization should be initiated to create mutually acknowledged definitions and reporting systems, which could be the basis of good drug shortage handling practices in Europe.
Introduction
Drug shortages can be defined as when the available or calculated demand for drugs does not sufficiently meet the end-user level demand. The issue affects all stakeholders along the pharmaceutical supply chain, ranging from Marketing Authorization Holders (MAHs) through wholesalers to hospitals, community pharmacies, and patients, posing a serious threat to treatment outcomes [1]. Drug shortages are recognized as a global problem, to which local responses are necessary, taking into account regionally diverse factors [2]. However, international coordination and common best practices would be beneficial for more effective shortage management. As a starting point, internationally adopted
Materials and Methods
To carry out the analysis, countries and respective databases have been selected according to pre-defined criteria. As a next step, databases have been processed focusing on selected therapeutic categories, the shortage of which would have the most severe impact on patient care. Finally, risk assessment has been carried out to identify critical shortages. In the analysis, both total and critical shortages have been compared across countries and ATC groups.
Criteria for Selecting Countries
Six countries have been selected for the analysis to cover a wide geographical range and allow for diversity. The selected countries are Australia from the Pacific region, the United States from North America, Belgium, Spain, and Hungary from the European Union (EU), and Switzerland, being a European country not part of the EU. As countries use diverse reporting systems and publish different information on shortages, the selection criteria has been developed to filter out countries where not enough details are public to carry out risk assessment and identify the ATC group for each shortage.
The following aspects were taken into account throughout the country selection method: • First world countries, where drug shortages are not a result of the lack of available financial resources.
•
Have a profound pharmaceutical industrial background. • Publicly available reporting system.
• Reported shortages must be classifiable according to the Anatomical Therapeutic Chemical (ATC) classification system. • Information is public regarding available substitutes to allow for assessment of severity. • Discontinued presentations listed separately from current shortages.
The six selected countries satisfy all the above-mentioned criteria and therefore constitute a profound ground for analysis.
Processing the Databases
To process analogous and comparable information on shortages in the chosen countries, shortage databases of all countries were accessed in the one-week time period between 4 and 9 March 2021. As most databases do not offer the possibility to access detailed data on past shortages but only show current shortages, longitudinal analysis was not possible. In Hungary, Belgium, Spain, and Australia, one national database was available, managed by the competent authorities. In two countries, the United States and Switzerland, two or more databases were available, run by multiple authorities (US) or also by private bodies (Switzerland). In the US, The American Society of Health-System Pharmacists (ASHP) database has been chosen [12], as it presented more details regarding the available substitutes compared to the FDA reporting system. In Switzerland, the Martinelli database was chosen as the national competent authority's database focuses only on a narrow scale of medicines considered essential by law [13].
Shortages reported because of discontinuation of production, stopped commercialization, or the interruption of commercialization were excluded from the analysis. This was necessary because there was not enough information regarding the reason and the duration of these shortages.
Therapeutic Categories
Shortages were classified according to the ATC Classification System rules, which classify the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological, and chemical properties. The World Health Organization Collaborating Centre (WHO CC) controls it for Drug Statistics Methodology [14]. We have chosen the following particular ATC groups in shortage for further examination.
• C: Cardiovascular system • L: Antineoplastic and immunomodulating agents • J: Anti-infectives for systemic use • N: Nervous system These four ATC groups have been selected for analysis, as these categories of medications play a significant role in the therapeutic arsenal. The permanent or temporary absence of these medicines would cause a serious impact on patient care and the healthcare system [15][16][17][18]. The selection of these groups has been confirmed with the pre-examination of the data, which showed these ATC groups present are in the highest proportions among all reported shortages.
Allocation of Risk Assessment in Studied Countries
We have investigated all the posted products "currently in shortage" one-by-one to determine whether the risk is considered acceptable or critical. The different presentations of a particular pharmaceutical ingredient were counted as individual shortages. The aim was to assess the severity and separate critical and non-critical shortages. This is key, as the ultimate purpose of every reporting mechanism and countermeasure is to reduce critical shortages. Table 1 summarizes how critical shortages were identified for each country. In most of the studied countries, authorities did not publish a severity assessment, and therefore, criteria to distinguish critical cases had to be defined. The criterium was if no domestic alternatives were available, the shortage was considered critical, as in this case emergency imports would be necessary. It is general practice in every country that the competent authority suggests a domestic alternative for every drug that is in shortage. If no domestic alternatives are available, the country can perform an emergency import from abroad, but this is considered a high-risk and high-expense solution, which is to be avoided. In Belgium and Australia, the database already included some information regarding the severity, which has been augmented to match the criteria applied for all other countries. Table 1. Allocation of critical shortages.
Switzerland
If there is no domestic alternative [13].
Spain
If there is no domestic alternative [19].
Hungary
If there is no domestic alternative [20].
US
The "shortage risk index" was defined with the ratio of unavailable and available presentations. If this number is higher than 5, the shortage is considered critical. If the database states: "there are no presentations available" or "there is insufficient supply for usual ordering", despite that the index will be lower than 5, the shortage had been considered critical.
Belgium
According to the FAMHP decision tree and in case it is necessary to import from abroad [21].
Australia
According to the TGA definition, if the medicine is included on the Medicines Watch List (MWL) or if the shortage has the potential impact to have a life-threatening or serious impact on patients and there is not likely to be a sufficient supply of potential substitutes automatically considered as critical [22].
Comparison of the Extent of Different Shortages
To avoid significant populational bias, instead of comparing nominal shortage numbers, the proportion of critical shortages to the country population was taken into consideration. The ratio was calculated as the number of shortages per million people [23].
Binomial Probability Tests of Proportion of Critical Shortages across ATC Groups
Data have been analyzed to obtain insights into critical shortages and understand whether the proportion of critical shortages differs across the four ATC groups studied (C, J, L, N). We have used the software "Stata" version 17.0 for the analysis (StataCorp LLC 4905 Lakeway Drive, College Station, TX, USA).
For the purpose of statistical analysis, the variable Critical_BI was defined, which is an indicator variable taking the value 1 for shortages that are critical, and 0 for the shortages that are non-critical. Our null hypothesis was that the proportion of critical shortages will not be significantly different across ATC groups.
Results
The number of shortages derived from the databases regarding the above-mentioned four ATC groups have been analyzed both quantitatively and qualitatively. Our aim regarding the quantitative analysis was to compare total and critical shortages per million people. The qualitative evaluation aimed to compare the proportion of critical shortages by ATC groups per million people. The number of total shortages for the four analyzed ATC groups were obtained for the six chosen countries. Figure 1 shows the shortages in the studied countries, including critical ones. Nevertheless, as Table 2 outlines, the characteristics of reporting systems differ significantly across countries, as they are built on different conceptions regarding the reporting obligation. To obtain comparable data and conduct a reliable analysis, three bias-reducing steps have been performed. tion of critical shortages by ATC groups per million people. The number of total shortages for the four analyzed ATC groups were obtained for the six chosen countries. Figure 1 shows the shortages in the studied countries, including critical ones. Nevertheless, as Table 2 outlines, the characteristics of reporting systems differ significantly across countries, as they are built on different conceptions regarding the reporting obligation. To obtain comparable data and conduct a reliable analysis, three bias-reducing steps have been performed. Unable to deliver for an uninterrupted period of four days [24].
MAH should notify the FAMHP within 7 days after the start of the unavailability when a drug will be unavailable for longer than 14 days. [21].
Concerns about a certain presentation. The entire range of medicine is not unavailable.
Emergency import Greater responsibility to full-line distributors MAH should give an exact reason and compensate costs
Martinelli Consulting GmbH
Supplies not satisfying demand and orders [13].
The website is based on voluntary reports from companies and users [13].
Drugs officially approved in Switzerland are listed in Martinelli database.
Emergency import Strategic stockpiling Define essentiality
Spanish Agency of Medicines and Medical Products (AEMPS)
The number of available units is below the level of national or local consumption needs [25].
AEMPS database lists current or anticipated supply problems for different presentations. If a quick solution is expected, not included in the list [19].
Concerns about a certain presentation. The entire range of medicine is not unavailable [26].
Define essentiality Emergency import Maintain MA and production Delivery in 24 The bias-reducing steps were the following: 1.
Transforming the data into population-proportionate figures.
2.
Filtering out critical cases from all shortages (according to criteria in Table 1).
3.
Comparing critical cases with the WHO Essential Medicine List.
These steps were all necessary so that despite the differing reporting systems, some comparative international oversight could still cover drug shortages. The lack of uniform definitions, reporting systems, and severity assessments along unified international criteria creates significant obstacles to any international comparison of shortages. The characteristics of the national reporting systems of the countries under analysis are highlighted in Table 2.
To perform the quantitative analysis of the data, the first two bias-reducing steps were carried out. Data have been transformed into population-proportionate figures for all countries, and critical cases have been filtered out from all shortages according to the predefined criteria described in the Materials and Methods Section of this paper. As a result, Table 3 was developed. Column (d) of Table 3 shows the calculated proportion of total shortages per million people for each country. The highest proportion was recorded in Hungary and the lowest in the US. The average of the European countries for total shortages per million people (d) is significantly higher than non-European figures. The Spanish figure is an outlier compared to other European countries, as shortages per million people (d) are at least 75% less than any other European country. However, comparing shortages per million people across countries still does not afford unbiased results, as the total number of shortages reported can largely differ across countries due to the reporting system in use. To obtain less biased and more relevant results, the proportion of critical shortages per million people has been calculated in column (e). These figures still reflect that the proportion of critical shortages per population is higher in every European country than the non-European figures. The percentage of critical shortages among all reported cases is shown in column (f). The percentage of critical shortages is on average two times higher in Europe than in the US or Australia. This observation is crucial, as the ultimate purpose of every reporting mechanism and countermeasure is to reduce the number of critical shortages in a given country. Unable to deliver for an uninterrupted period of four days [24].
MAH should notify the FAMHP within 7 days after the start of the unavailability when a drug will be unavailable for longer than 14 days [21].
Concerns about a certain presentation. The entire range of medicine is not unavailable.
Emergency import Greater responsibility to full-line distributors MAH should give an exact reason and compensate costs
Switzerland
Martinelli Consulting GmbH Supplies not satisfying demand and orders [13].
The website is based on voluntary reports from companies and users [13].
Drugs officially approved in
Switzerland are listed in Martinelli database.
Emergency import Strategic stockpiling Define essentiality
Spanish Agency of Medicines and Medical Products (AEMPS)
The number of available units is below the level of national or local consumption needs [25].
AEMPS database lists current or anticipated supply problems for different presentations. If a quick solution is expected, not included in the list [19].
Concerns about a certain presentation. The entire range of medicine is not unavailable [26].
Define essentiality Emergency import Maintain MA and production Delivery in 24 working hours Export ban
US
American Society of Health-System Pharmacists (ASHP) Supply issue that affects how a pharmacy prepares, dispenses a drug, or influences patient care when prescribers must use an alternative agent [10].
ASHP lists every drug shortage reported through the online report form as soon as it is investigated and confirmed, usually within 24-72 h [12].
Define essentiality Emergency import Reevaluates voluntary recalls
Expedite changes Maintain MA and production Drugs into smaller units ASHP management practice
Australia
Therapeutic Goods Administration (TGA) The supply of medicine will not (or may not) meet the demand for the medicine at the subsequent six months, including all patients who take (may need to take) [27].
MAHs are required to report all registered medicines in 2-10 days upon severity [22].
The medicines are set out in Therapeutic Goods Determination [28].
National Institute of Pharmacy and Nutrition (OGYEI)
If the MAH is unable to maintain adequate and continuous supplies of specific medicinal products, or unwilling to supply the preparation temporarily or permanently [31].
Before the final delivery to the wholesaler, but in a maximum of two months [31].
Concerns about a certain presentation. The entire range of medicine is not unavailable.
Recommending alternatives Shortage declaration by CA Emergency import Export ban Table 4 shows that the proportion of critical shortages is 16.87% across the whole sample. In contrast to our hypothesis, the binomial probability tests performed for each ATC group (ATC group is denoted by ATC_ID) indicated significant differences in the shortage proportions of certain ATC groups. Table 4 displays the result that the observed proportion (Observed p) is significantly different from the expected proportion (Expected p) for 3 out of 4 groups. In therapeutic group C, only 8.6% of total shortages are critical, while group J seems to be the most affected by critical cases (28.5% of observed shortages are critical). The proportion of critical shortages observed is 22.2% for group L and 17% for N, only slightly higher than the average across the sample. Knowing that the proportion of critical shortages is significantly different across ATC groups, the next aim was to understand whether and how these differences vary across countries, taking into account the differences in population. Figure 2 outlines the proportion of critical shortages across the countries in specific ATC groups. A pattern can be observed regarding the relative distribution of shortages across the investigated ATC groups, except for a few outstanding data points. It is in line with our expectations based on the total number of critical shortages in Table 3 that the proportion of critical shortages per million people was significantly higher in Switzerland and Hungary in each ATC group, even compared to the European average. Similarly, the lowest proportion of critical cases for each ATC group was observed in the US. A pattern that can be observed is that the distribution of critical shortages across ATC groups for the US and Australia is relatively steady, and no ATC group shows significantly higher proportions of critical shortages. The opposite is true for Belgium, Hungary, and Switzerland, where the proportion of critical shortages is outstandingly higher in therapeutic group's J and N, anti-infectives for systemic use and nervous system drugs. It is apparent that in European countries with high total numbers of shortages per population, these two groups are causing critical pharmaceutical gaps. Our results are in agreement with the surveys from the European Association of Hospital Pharmacists [7] and the American Society of Health-System Pharmacists [32], who state that in the group of anti-infectives, shortages are particularly an outstanding issue due to the high ratio of medication errors, the increasing antimicrobial resistance, the substandard patient outcome, and the lack of development of new antibiotics [33]. group, even compared to the European average. Similarly, the lowest proportion of critical cases for each ATC group was observed in the US. A pattern that can be observed is that the distribution of critical shortages across ATC groups for the US and Australia is relatively steady, and no ATC group shows significantly higher proportions of critical shortages. The opposite is true for Belgium, Hungary, and Switzerland, where the proportion of critical shortages is outstandingly higher in therapeutic group's J and N, anti-infectives for systemic use and nervous system drugs. It is apparent that in European countries with high total numbers of shortages per population, these two groups are causing critical pharmaceutical gaps. Our results are in agreement with the surveys from the European Association of Hospital Pharmacists [7] and the American Society of Health-System Pharmacists [32], who state that in the group of anti-infectives, shortages are particularly an outstanding issue due to the high ratio of medication errors, the increasing antimicrobial resistance, the substandard patient outcome, and the lack of development of new antibiotics [33]. As the final step of the analysis, critical shortages were compared to the WHO Essential Medicine List. The results are presented in Table 5. The World Health Organization proposed the concept of essential medicines in 1977, which is a catalogue of every healthcare system's minimum medicine needs. Essential medicines are those that satisfy the priority healthcare needs of the population. The basic concept is that high-priority drugs should be available as part of a functioning health system for all people, guiding physicians to evidence-based and rational prescribing [34]. Clinical evidence confirms that medicines included in the list can significantly improve patients' outcomes and their shortage can have a severe negative impact on treatment quality [35]. Therefore, it should be a key priority to keep the shortage levels of medicines deemed essential by the WHO as low as possible. As the final step of the analysis, critical shortages were compared to the WHO Essential Medicine List. The results are presented in Table 5. The World Health Organization proposed the concept of essential medicines in 1977, which is a catalogue of every healthcare system's minimum medicine needs. Essential medicines are those that satisfy the priority healthcare needs of the population. The basic concept is that high-priority drugs should be available as part of a functioning health system for all people, guiding physicians to evidence-based and rational prescribing [34]. Clinical evidence confirms that medicines included in the list can significantly improve patients' outcomes and their shortage can have a severe negative impact on treatment quality [35]. Therefore, it should be a key priority to keep the shortage levels of medicines deemed essential by the WHO as low as possible. Table 5 demonstrates what percentage of all critical shortages are included on the WHO Essential Medicine List by country. The goal of every healthcare system should be to keep this ratio as low as possible, as a WHO essential drug being in critical shortage would mean that the country has no domestic alternative to replace the medication and an emergency import is necessary. Table 5 shows that Switzerland and the US are performing best in this aspect, with 36.8% and 45% of critical shortages being WHO essential medicines, respectively. Belgium displays the worst result, with over 90% of its critical shortages being WHO essential. The results show no correlation between the volume of critical shortages per population and the percentage of WHO essential medicines among critical shortages. Countermeasures should be implemented to reduce both the number of critical shortages and specifically, the WHO essential drugs in critical shortage.
Comparison of Shortage Management in Examined Countries
The analysis clearly shows that the proportion of critical shortages relative to population size is significantly higher in Europe than in other investigated regions of the world. This affects every ATC group under examination, but anti-infective (J) and nervous system (N) drugs show drastically higher critical shortage levels in Europe. Our findings are in agreement with the study from 2018 conducted by Videau in hospitals, where Switzerland was one of the most and Spain the least affected country by shortages in both studies, and anti-infective medications were the most affected therapeutic group [5]. The critical shortages of medicines on the WHO Essential Medicine List need to be reduced and prevented in particular. European nations need to adopt a wider range of transparent, unified shortage management countermeasures across countries to address drug shortages. This section of the paper reviews the various forms of countermeasures already in place in some of the examined countries. A selection of these could be unified and extended over the whole of Europe. The current forms of countermeasures by country are reported in Table 6 and can be grouped under the six following categories:
Measures for essential medicines 3.
Emergency imports Not defined.
Not providing the exact reasons is considered a clear legal violation [37].
Full-line wholesalers are assigned besides regular ones with special responsibilities and privileges [36,38].
Temporarily to medicinal products for which a shortage is notified [37].
Based on a doctor's request wholesalers may temporarily import medicine from the EU, if no substitutes are available in BE, in specific quantities requested by the doctor [39].
CH
Delegated to companies at a defined level of stocks in Ordinance [40].
Compounds for which there are no or only limited substitutes have been affected by a supply shortage over the previous three years [40].
Managing the strategic level of inventory stock delegated by the federal government in local law [40].
Parallel export does not become significant due to high prices [41].
Upon application for temporary import submission [42].
ES
CA may demand commercialization to grant the suspension, or the revocation of the product [43].
Essential if the pharmaceutical gap cannot fully cover or have a high economic impact. Critical if it has no available therapeutic alternatives and has a complex manufacturing process, and/or has only one supplier [38].
All wholesalers are required to deliver within 24 working hours [38].
If lack of medicinal products causes a pharmaceutical gap [45].
The Spanish Agency of Medicinal Products and Medical Devices can also approve the import of medicines labeled in other languages or with an expiry date shorter than 6 months [38].
If used to treat or prevent a serious disease or condition, and there is no other adequate available source [47].
A National Medical Stockpile maintains the strategic reserve of products [48].
Included on Medicines Watch List, has a potential life-threatening or serious impact, or has no potential substitutes [22].
Wholesalers also have a duty to notify authorities about the expected duration of a discontinuation [29].
Only for MAH or designated entity [49].
Therapeutic Goods Act has been amended to assist import [27].
HU
Products decreed by the minister should be available in the quantity defined therein [31].
Not defined.
Act XCV of 2005 on Medicinal Products for
Human Use [31].
Authorized wholesale distributors shall be required to procure and supply the medicinal products as their authorization for wholesale distribution pertains [31].
The active substances decreed by the minister for a period not exceeding one year [31].
Compulsory Stockpiling
In Australia, the National Medical Stockpile must maintain "key medicines" to avoid critical shortages of essential drugs [30]. A similar system exists in Switzerland, where the government defines medications that are subject to compulsory stockpiling in the appendix to Ordinance and the level of such stock that would satisfy average domestic consumption for three months without any import [40].
Measures for Essential Medicines
In Spain, the government can demand production and commercialization from MAHs to bridge supply gaps concerning essential products [52]. This seems to be an effective measure to cut back shortages, as critical shortages per million people in Spain were significantly the lowest among European countries. The US shortage management system works similarly. In response to the shortage of a medically necessary drug, the FDA can suggest and support establishing a new manufacturing site or even involving a new supplier. If a MAH decides on a voluntary recall, the FDA may conduct risk evaluation and encourage other MAHs to initiate, maintain, or increase production of the drug [46]. They can also facilitate the review of new generic applications that are potential alternatives [46]. The regulation also permits hospitals within the same health facility to repackage drugs into smaller units to alleviate drug shortages [8]. In Spain, MAHs that stop the distribution of a medicinal product without the authorities' permission face heavy fines [53].
Notification Responsibility
In the US, authorities maintain an apparent oversight of shortages. They, therefore, can handle them very effectively-the quantitative analysis clearly reflected that the critical shortages per million people are the lowest in the US. The criteria for products reported in the central systems are strict and well-defined, so authorities can monitor developing shortages from an early stage and address them accordingly [12]. For example, MAHs are mandated to inform the FDA of impending shortages six months in advance when they plan to stop producing a single-source or medically necessary (life-supporting, lifesustaining, or intended for use in the prevention or treatment of a debilitating disease or condition, including emergency medical care or during surgery) drug [8]. This effectively helps to maintain a low percentage of shortages that turn into critical. The FDA may also oblige the MAHs to conduct periodic risk assessments to address vulnerabilities in its supply chain.
Measures Affecting Wholesalers
In Belgium and Spain, some countermeasures specifically target wholesalers, who are obliged to ensure continued and adequate supply. In Spain, all wholesalers are required to deliver within 24 working hours [38]. In Belgium, to reduce the cases where shortages arise due to "distribution problems", distributors were assigned as full-line and regular wholesalers. MAHs are obliged to supply within a shorter period to full-line wholesalers. Full-line distributors are required to deliver emergency shipments in 24 h. They must have a range of specified medicines in stock to supply the needs of defined geographic areas. Moreover, full-line distributors are only allowed to supply strictly determined wholesalers, domestic pharmacies, and hospitals; thereby, parallel export is not permitted [36].
Export Bans
In Australia, export can only be performed by MAHs or designated distributors acting on behalf of the MAHs, not by most wholesalers like in many European countries [49]. This is highly important, as parallel export is usually a key factor in causing shortages, mostly affecting countries with low drug prices compared to international averages. In Spain, authorities (AEMPS) can restrict exportation only to medicinal products without therapeutic equivalents [26]. There are serious penalties and fines of up to 1 million euros for distributors who export medicines when this activity has been forbidden.
Emergency Imports
When there is no other possibility to resolve a critical shortage as no substitution is available domestically, emergency imports must be performed. The conditions of an emergency import are determined by national laws. In most countries, this is contingent on the approval of the competent authority, such as the FDA in the US, the OGYEI in Hungary, or the Spanish Agency of Medicinal Products and Medical Devices in Spain (see in Table 6). In Belgium, wholesalers may also perform emergency imports from the EU based on a doctor's request, in the specific quantities necessary for the given treatment [39].
Limitations
The findings are limited by the high degree of incomparability of data on drug shortages across countries. As reporting criteria and reporting systems used significantly differ in every country examined, it has been hard to develop a comprehensive and comparable overview of shortages in each country (including the number, severity, causing factor, and possible solution of all shortages). The authors are aware that the analysis does not contain deep statistical research to support insights. The cause of this is that only publicly available databases could be recruited, which contain only limited and categorical data. Even the data fragments provided highly differ across countries. Publicly available, internationally uniformed, and frequently updated databases containing numerical data (duration of shortage, cost of alternative therapy, number of patients using medication, effectiveness of particular countermeasures) would be necessary to perform a more accurate statistical analysis on the topic. The criteria which define shortages to be reported towards authorities can make large differences in the data, e.g., in Spain, shortages where "quick solution is available" are not noted in the reporting system, which can significantly alter the results of the cross-country comparison. In Australia, the lower proportion of critical shortages could be explained with the definition complexity as it considers both the number of alternatives and the potential impacts on patients. In Switzerland and the US, multiple databases of shortages are available offering different details, maintained by various authorities (FDA and ASHP) or organizations (Federal Office of Public Health, Swissmedic, Federal Office for National Economic Supply, Martinelli Consulting). This is an issue because large discrepancies among the various databases and varying definitions limit transparent international comparison.
Conclusions
The following recommendations should be considered to handle the issue of drug shortages more effectively in Europe. Since drug shortages do not respect borders and cross-country collaboration would be beneficial for more effective shortage management, a unified European definition and reporting criteria of shortages would be necessary to assure internationally consistent monitoring, reporting, comparisons, responses, and solutions. National authorities across Europe should be aware of shortages through coordinated systems, increasing cross-country transparency, and facilitating solutions in every country. It was a significant step forward in the European Union that in July 2019, the EMA published the "Guidance on detection and notification of shortages of medicinal products for Marketing Authorisation Holders in the Union" [54]. The document contains the effort to facilitate the more uniform reporting and communication of drug shortages and create a harmonization "drug shortage" definition. The document "Good practice guidance for communication to the public on medicines' availability issues" [55] contains communication guidelines for the national authorities and the EMA for patients and healthcare professionals.
The ideal unified definition for critical shortages should positively discriminate products included on the WHO Essential Medicine List, should take into account the number of alternatives, and should determine the exact start and end dates of a shortage. International harmonization should be initiated to create mutually acknowledged definitions and reporting systems, which could be the basis of good drug shortage handling practices in Europe. | 2022-04-03T16:05:31.059Z | 2022-03-30T00:00:00.000 | {
"year": 2022,
"sha1": "82b005f332261617418ec2544f58178e015035a1",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1660-4601/19/7/4102/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d7bcb37d74e21713404ed016fc0fd79b8a25db42",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
92399124 | pes2o/s2orc | v3-fos-license | Transposition, apotheosis and benign metamorphosis-lymph node
The cells may be derived from the paramesonephric ducts, salivary gland tissue, breast tissue, thyroid follicles, squamous epithelium or mesothelial incorporation. Paramesonephric duct like configurations and inclusions are preferentially located in pelvic lymph nodes and may simulate the uterine tube epithelium. Breast tissue inclusions are chiefly constituted of ectopic mammary glands and ducts of diverse morphology and unknown aetiology. Thyroid inclusions may be incorporated in the cervical and axillary lymph nodes. Mesothelial inclusions appear particularly in the mediastinal lymph nodes of the patients with pleural and pericardial effusions. The occurrence of melanocytic cells discovered in the lymph nodes are attributed to the faulty migration of neural crest cells or they may arise as benign metastasis of existing dermal nevi. The diagnosis of benign inclusions is necessitated in order to exclude adenocarcinomatous lymph node transformation and metastasis in patients presenting with benign nodal proliferations. The existence of benign inclusions was initially recounted by Reis et al. in 1897.1 These are defined as tubular spaces corresponding to cysts, appearing in lymph nodes of patients who may undergo surgical interventions for malignant tumours of the uterus, cervix and vulva. Inclusions may as well arise in non malignant disease processes and may be situated in locales extraneous to the pelvic cavity such as in the lumbar, mediastinal, parotid, submandibular, jugular, hepatic and iliac lymph nodes. Brooks et al divided the inclusions in three sub-categories, epithelial, nevomelanocytic and decidual.2
The cells may be derived from the paramesonephric ducts, salivary gland tissue, breast tissue, thyroid follicles, squamous epithelium or mesothelial incorporation. Paramesonephric duct like configurations and inclusions are preferentially located in pelvic lymph nodes and may simulate the uterine tube epithelium. Breast tissue inclusions are chiefly constituted of ectopic mammary glands and ducts of diverse morphology and unknown aetiology. Thyroid inclusions may be incorporated in the cervical and axillary lymph nodes. Mesothelial inclusions appear particularly in the mediastinal lymph nodes of the patients with pleural and pericardial effusions. The occurrence of melanocytic cells discovered in the lymph nodes are attributed to the faulty migration of neural crest cells or they may arise as benign metastasis of existing dermal nevi. The diagnosis of benign inclusions is necessitated in order to exclude adenocarcinomatous lymph node transformation and metastasis in patients presenting with benign nodal proliferations. The existence of benign inclusions was initially recounted by Reis et al. in 1897. 1 These are defined as tubular spaces corresponding to cysts, appearing in lymph nodes of patients who may undergo surgical interventions for malignant tumours of the uterus, cervix and vulva. Inclusions may as well arise in non malignant disease processes and may be situated in locales extraneous to the pelvic cavity such as in the lumbar, mediastinal, parotid, submandibular, jugular, hepatic and iliac lymph nodes. Brooks et al divided the inclusions in three sub-categories, epithelial, nevomelanocytic and decidual. 2
Clinical exponents
1. The benign epithelial inclusions may be discerned coincidentally and may typically lack specific symptoms. The diagnostic significance of the benign inclusions is considerable as they may be misinterpreted for metastatic malignant processes which may result in the employment of extensive and expensive investigative and therapeutic options (Table 1). 2. Inclusions of the lymph node situated in the intramural locations may be configured in cysts or as innumerable duct replicating formations. The epithelial cells articulating the inclusions may arise from the paramesonephric ducts, salivary glands, breast tissue, thyroid follicular epithelium, squamous epithelium and the mesothelium.
3. Lymph node inclusions of paramesonephricus sub-type may especially emerge in the pelvic lymph nodes and infrequently in the axillary lymph nodes. The concurrence with adenocarcinoma of the mammary glands is debatable.
4. Ectopic breast tissue is more prevalent in the sentinel lymph nodes, in contrast to adjunctive axillary nodes, on account of an obligatory embryologic connection.
5. Decidual inclusions are elucidated in the paramesonephric processes and may originate from submesothelial cells secondary to a hormonal trigger.
6. Lymph node inclusions of mammary gland origin may be composed of deformed mammary gland ducts with divergent morphology and a dual cell population comprising of luminal cuboidal /columnar epithelial cells and basal cells depicting myopithelial differentiation ( Figure 1).
Introduction
Ectopic foci of non neoplastic tissue situated in the lymph node are designated as benign inclusions. A sequel to a metastatic process induces the epithelial cells in a lymph node which may be clinically significant on account of the probable impact in the determination of tumour staging and the employment of subsequent therapeutic options. Therefore, a distinction from the benign proliferative lesions in the lymph node (vascular proliferations, T lymphocyte clusters, hamartomas, viral infections or angiomyomas) is a pre-requisite. 2. Squamous epithelium: Cystic configurations inter-lined by a well differentiated squamous epithelium may ensue in the upper cervical lymph nodes. These may be identical to aberrant branchial pouch derivatives -thus designated as "benign lymphoepithelial cyst" and are surmised to appear due to cystic dilatation of pre-existent cysts. Peri-pancreatic lymph nodes may also display identical epithelium lined cystic inclusions. A demarcation from a metastatic well differentiated squamous cell carcinoma with superimposed cystic change may be mandated. 5 3. Thyroid follicles Inclusions: Capsular or sub-capsular location of inclusions of non-pathogenic thyroid tissue in the midcervical lymph nodes may be elucidated in concordance with a normal thyroid gland. 5 Distinctive colloid follicles lined by a low cuboidal epithelium lacking atypia may be exemplified. Configurations such as papillae or psammoma bodies may be absent. Follicles may be detected in the peripheral sinus. Mitosis is usually absent, in contrast to a malignant melanoma which may display metastasis in the marginal sinus. The nevus inclusions may be immune reactive for S 100 protein, tyrosinase, melanin and non reactive for immune markers such as melan A, HMB45 and Ki67. In contrast, the malignant melanoma appears reactive for S100 protein, melan A, HMB45 and Ki67. On ultra-structural examination, the nevus inclusions display uniform, round cells with random aggregates of cytoplasmic fibrils, mature melanosomes and dispersed nuclear chromatin. 4 The axillary, cervical or inguinal lymph nodes may be implicated, however conventional nevus cells may be also be enunciated in the capsule of axillary lymph nodes , in the absence of an involved parenchyma. 5
Mesothelial cells:
Lymph nodes may infrequently delineate mesothelial cell aggregates accompanied by an absence of a malignant mesothelioma, from which a demarcation is required. 5 Mesothelial windows may also be depicted as mediastinal cysts, situated in the sub-capsular spaces. The cellular configuration is that of a uniform tall, columnar epithelium with a low nucleocytoplasmic ratio, a distinct basement membrane and an absence of mitosis. The cells hypothetically arise from the pleural mesothelium. 4 8. Mammary Inclusions: Axillary lymph nodes may frequently depict the ectopic mammary tissue. Inclusions of the breast tissue are demonstrated as mammary ducts in the subcapsular region. The cellular components described are the epithelium, myoepithelium and apocrine cells. Cystic spaces lined with low, uniform cuboidal epithelium lacking mitosis, hyperplasia or hyperchromasia may be evidenced. 4 A singular layer of cuboidal epithelium interlining the tubules (hobnail appearance) or epithelial inclusions situated within or beneath the lymph node capsule may also be elucidated. A distinction is required from a metastatic breast carcinoma. The inclusions may delineate three categories i) glandular structures only ii) squamous cysts only iii) a combination of glandular and squamous epithelium. 5 To surmise, an extensive morphological evaluation may be mandated for a substantial and definitive prognostic delineation of the benign lymph node inclusions as an immunohistochemical or biomolecular investigation may not categorically define a non-malignant tissue metamorphosis or the transformation into a neoplasm. | 2019-04-03T13:07:28.011Z | 2018-09-24T00:00:00.000 | {
"year": 2018,
"sha1": "a1ac78e330825f6c35a5369e153ee0682af6dbfb",
"oa_license": "CCBYNC",
"oa_url": "https://medcraveonline.com/ICPJL/ICPJL-06-00181.pdf",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "ba0ac157806c53b02948132017be2778f6ce6d63",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Biology"
]
} |
237358715 | pes2o/s2orc | v3-fos-license | Long-Term Nitrogen and Phosphorus Dynamics in Waters Discharging from Forestry-Drained and Undrained Boreal Peatlands
Contradictory results for the long-term evolution of nitrogen and phosphorus concentrations in waters discharging from drained peatland forests need reconciliation. We gathered long-term (10–29 years) water quality data from 29 forested catchments, 18 forestry-drained and 11 undrained peatlands. Trend analysis of the nitrogen and phosphorus concentration data indicated variable trends from clearly decreasing to considerably increasing temporal trends. While the variations in phosphorus concentration trends over time did not correlate with any of our explanatory factors, trends in nitrogen concentrations correlated positively with tree stand volume in the catchments and temperature sum. A positive correlation of increasing nitrogen concentrations with temperature sum raises concerns of the future evolution of nitrogen dynamics under a warming climate. Furthermore, the correlation with tree stand volume is troublesome due to the generally accepted policy to tackle the climate crisis by enhancing tree growth. However, future research is still needed to assess which are the actual processes related to stand volume and temperature sum that contribute to increasing TN concentrations.
Introduction
Internationally, around 15 million hectares of peatlands and wetlands were drained for forestry in the temperate and boreal regions, particularly between the 1960s and the late 1980s (Paavilainen & Päivänen, 1995). Although these peatlands and wetlands have now been under the influence of drainage for a long time, the primary mechanisms controlling their nutrient exports to receiving water courses are still poorly understood.
It was understood for a long time that drainage of peatlands for forestry had only short-term impacts on nutrient and carbon exports (Finér et al., 2010;Heikurainen et al., 1978). It was generally accepted that, even though peatland drainage may considerably increase nutrient and carbon exports during the first few years after treatment, their exports did not significantly differ from undrained peatlands 10-20 years after drainage. Nieminen et al. (2017) revealed, however, that nitrogen and phosphorus concentrations discharging from drained peatlands had not returned to Abstract Contradictory results for the long-term evolution of nitrogen and phosphorus concentrations in waters discharging from drained peatland forests need reconciliation. We gathered long-term (10-29 years) water quality data from 29 forested catchments, 18 forestry-drained and 11 undrained peatlands. Trend analysis of the nitrogen and phosphorus concentration data indicated variable trends from clearly decreasing to considerably increasing temporal trends. While the variations in phosphorus concentration trends over time did not correlate with any of our explanatory factors, trends in nitrogen concentrations correlated positively with tree stand volume in the catchments and temperature sum. A positive correlation of increasing nitrogen concentrations with temperature sum raises concerns of the future evolution of nitrogen dynamics under a warming climate. Furthermore, the correlation with tree stand volume is troublesome due to the generally accepted policy to tackle the climate crisis by enhancing tree growth. 371 Page 2 of 9 undrained levels but were two-to-three times higher even though several decades had elapsed since drainage. Similarly, Nieminen et al. (2020) showed that this persistent legacy effect of drainage may result in many-fold higher nutrient export estimates from drained peatland forests compared with those where its contribution is ignored (Finér et al., 2010). Furthermore, the results by Nieminen et al. (2017Nieminen et al. ( , 2018 and Räike et al. (2019) indicated that nitrogen and phosphorus exports from drained peatland forests may not only be persistently higher than from undrained peatlands, but they may also increase over time since drainage occurred. Nieminen et al. (2017Nieminen et al. ( , 2018 showed significantly higher nitrogen and phosphorus concentrations in waters discharging from old peatland drainage areas where several decades had elapsed since drainage compared with more recently drained areas. Similarly, Räike et al. (2019) found increasing trends of nitrogen concentrations in large boreal rivers discharging to the Baltic Sea. The area of drained peatlands in their catchments was the land use factor with the strongest correlation with these increasing trends.
However, the studies on the temporal trends of nutrient concentrations from drained peatland forests are contradictory. Concurrently with the studies reporting increasing trends of nutrients (Nieminen et al., 2017(Nieminen et al., , 2018Räike et al., 2019), Finér et al. (2021) showed no increase in nitrogen concentrations over time and even decreasing concentrations for phosphorus. As pointed out by Nieminen et al. (2018), these contradictory results may simply be because the temporal trends of nutrients vary in time and space. Decreasing temporal trends may be expected when drained sites are still recovering from the effects of forestry operations, such as forest drainage, fertilization, or harvesting. Increasing trends of organic carbon and organic nutrients may particularly occur due to climatic warming (Laudon et al., 2012;Sarkkola et al., 2009) and recovery from atmospheric sulfate deposition (Monteith et al., 2007;Erlandsson et al., 2008).
Recent studies also suggest increasing trends due to the "greening effect" in northern latitudes, that is, the expansion and maturation of forests in previously treeless or sparsely tree-covered sites, such as the peatlands subjected to forest drainage (Finstad et al., 2016;Škerlep et al., 2019;Nieminen et al., 2021). Increase in forest biomass in mineral soils and peatlands may contribute to increasing organic carbon and nutrient exports, for example, by increasing root exudates and litter inputs into the soil (Straková et al., 2010(Straková et al., , 2012. In drained peatlands, increasing forest biomass may contribute to nutrient and carbon exports also indirectly, that is, by increasing evapotranspiration of vegetation, thus resulting in thicker oxidized peat layers in old drained areas with mature tree stands than in recently drained areas . Differing temporal trends of nutrients in different studies may also be related to different geographic locations of study sites. Given that peat decomposition is the main source of nutrients in drained peatlands (Laurén et al., 2021), variations in temporal nutrient export trends may be related to differences in the rate of peat decomposition. Peat decomposition in drained sites is much faster in southern compared with northern locations (Hiraishi et al., 2014), suggesting that increasing trends of nutrients in waters from drained peatland forests may most likely be found in southern locations. Forests are also much denser and taller in southern boreal regions than in northern, and thus, it is likely that forests contribute more to litter inputs and evapotranspiration in the southern locations.
The aim of this study was to increase the understanding of the long-term temporal trends of nutrient concentrations in waters discharging from drained boreal peatland forests. To do that, we gathered water quality data from 18 forestry-drained and 11 undrained peatlands, and studied the factors contributing to their long-term (10-29 years) nitrogen and phosphorus concentration trends. Our hypothesis is that increasing concentration trends occur mostly in southern latitudes, where higher temperatures increase peat decomposition and tree growth more than in the north.
Study Sites
We studied the long-term (10-29 years) temporal trends of total nitrogen (TN) and total phosphorus (TP) concentrations from 29 boreal forest catchments, 18 of which were managed using typical even-aged rotation-forestry methods and 11 sites Page 3 of 9 371 were catchments dominated by undrained peatlands (Table 1). The Krycklan catchment is in Sweden; the other catchments are in Finland. Temporal trends in TN and TP concentrations from some of the catchments included in this study have been presented earlier (e.g., Lepistö et al., 2021), but this study utilizes a more extensive data set than previous trend analyses. The Latokartano catchment is a control catchment from the study by Kaila et al. (2015), the Krycklan catchment study data can be found on their data service webpage (https:// www. slu. se/ Kryck lan), and the other data were derived from the national database maintained by the Finnish Environment Institute (https:// www. syke. fi/ en-US/ Open_ infor mation/ Open_ web_ servi ces/ Envir onmen tal_ data_ API) and the Natural Resources Institute Finland (https:// metsa info. luke. fi/ fi/ vesis tokuo rmitu kset). The annual climatic cycle in the study region includes four distinct seasons. The winter period (December-March) is characterized by freezing temperatures and a snow cover ranging from about 20 to 120 cm. The snowmelt period typically starts in late April or early May, and after 2 or 3 weeks, the snowpack and the frozen ground have melted. Summer (June-August) is characterized by a daily mean temperature of about 17 °C in the southern part and 13 °C in the northernmost part of the region. During autumn (September-November), the daily temperatures gradually decrease from 10 to 15 °C to values below zero, and late in the end of the period, the ground and surface water bodies start to freeze. The mean annual precipitation is 550-700 mm, 200-300 mm of which falls as snow.
Our principal selection criteria of the sites were that the site should have been monitored for at least 10 years and that no forest operations covering large parts of the catchment (> 10%) should have been executed between 1980 and 2019, that is, during the study period and 10 years before it (see also Sarkkola et al., 2009;Nieminen et al., 2017Nieminen et al., , 2018. The latter selection criterion was to enable the long-term trends from managed catchments to be studied without the recurrent decreasing and increasing concentration periods caused by successive forestry operations (see also Nieminen et al., 2018).
The catchments are typical conifer-dominated forest areas in Nordic countries, where Norway spruce (Picea abies (L.) Karsten) dominates the most fertile sites, and Scots pine (Pinus sylvestris L.) dominates the low-fertile mineral soils and tree-covered bogs. Silver (Betula pendula Roth) and downy birch (Betula pubescens Ehrh.) are generally only found as mixed species, the first in mineral soils and the latter in peatlands. Peatlands cover 4-100% of the catchment areas in the undrained sites, and 10-54% in the drained sites (Table 1). Drained peatlands cover 4-48% of the areas of drained catchments. Agricultural areas are almost non-existent, except at Heinäjoki, where they cover about 8% of the catchment area. The tree stand volume in the study region has increased by almost 70% over the past few decades, from about 1500 million m 3 in the 1970s to about 2500 million m 3 at the end of the 2010s in Finland (Luke Forest Statistics, 2021). The stand volume in the catchments varies from 42 to 233 m 3 ha −1 in the drained catchments, and between 18 and 160 m 3 ha −1 in the undrained catchments. Temperature sum (degree days (d.d.), ≥ 5 °C) varies from 690 d.d. in the northernmost to 1475 d.d. in the southernmost site (Table 1).
Sampling and Analyses
The water samples were collected by focusing on the periods with high flows, which are in the study region the spring snowmelt period and the autumn heavy rainfalls. The samples were collected from an outflow ditch or stream draining each catchment, either from the flow of a V-notch weir or a discharge pipe of a soil embankment (Latokartano). On average, 9 samples were taken per year. Mid-summer (July-August) periods are often missing because of no or minor runoff. Concentrations of TN and TP were analyzed either colorimetrically after oxidation with K 2 S 2 O 8 (Vesihallinnon analyysimenetelmät, 1981) or TN using flow injection analysis (Tecaton FIA) and the NPOC method.
Peatlands % = proportion of peatlands in the catchment area; drained % = proportion of drained peatlands in the catchment area; Tsum = temperature sum (degree days, ≥ 5 °C); stand V = stand volume in the catchment area (m 3 ha −1 ). Stand volumes were estimated by either measuring tree attributes in conventional field plots or utilizing the open database of the Multi-source National Forest Inventory of Finland (Mäkisara et al., 2016)
Calculations
Linear trends in TN and TP concentrations (mg l −1 ) in each of the 29 catchments were calculated using the non-parametric Mann-Kendall test (Gilbert, 1996), which has been widely used in trend analyses. The Kendall test is suitable because it is robust to outliers, missing data, and non-normality of the time series. The slopes of the trends express the median change in the time series and were calculated using the Sen slope (S) estimation method. The tests were performed in R (R Core Team 2019) using the Kendall and trend packages (McLeod, 2011).
Ordinary regression analysis (OLS) was used to identify the factors behind the variation in trends (i.e., Sen's slope estimates). The tested explanatory factors in the regression models were (i) catchment area (ha), (ii) percentage of undrained peatlands in the catchments (%), (iii) percentage of drained peatlands in the catchments (%), (iv) temperature sum as the parameter for site altitudinal and latitudinal location (degree days (d.d.), > 5 °C), and (v) average tree stand volume in the catchments (m 3 ha −1 ).
Results
Average TN concentrations (± SD) in the catchments with drained peatlands were 506 (± 232) μg l −1 , and TP concentrations 25 (± 16) μg l −1 . TN and TP concentrations in the catchments with undrained sites were 394 (± 145) and 11 (± 6) μg l −1 , respectively. TN concentrations in waters discharging from the 11 undrained peatlands increased significantly in two sites and decreased significantly in two sites, while TP concentrations increased in three sites and decreased in one site (Table 2). Among the 18 drained sites, TN concentrations increased significantly from eight sites and decreased from two sites, while TP concentrations increased from three sites and decreased from five sites.
The TN trends (Sen slope estimates) correlated positively with temperature sum and tree stand volume in the catchments, and slightly negatively with the area of the catchment (Fig. 1). Peatland area or drained peatland area did not correlate with TN trends. None of the explanatory factors were in clear correlation with the temporal trends in TP concentrations.
Discussion
The results of the study supported our hypothesis in that the trends in nitrogen concentrations correlated positively with factors related to latitude, i.e., the volume of the tree stand in the catchments and the temperature sum of the study site. A few previous studies have indicated that the general increase in forest cover and biomass in northern latitudes during recent decades ("greening effect") may be one factor behind increasing organic carbon concentrations in forested streams (Finstad et al., 2016;Nieminen et al., 2021;Škerlep et al., 2019), but ours is the first study showing that the trends in nitrogen concentrations may also correlate positively with the increase in forest biomass. Tree stand volume and latitude are positively correlated; tree stands with higher volume are found in southern rather than northern locations in the Nordic region (Henttonen et al., 2020), thus complicating the interpretation of the results. That is, it is unclear whether the increasing trends are primarily related to tree physicochemical processes, such as evapotranspiration and root exudates, or site locationrelated factors, such as temperature.
None of the explanatory factors we tested correlated with the trends in TP concentrations. This may be because TP concentrations correlate with factors not studied here, such as rates of weathering of the parent material in the catchments or past fertilization activity. Fertilization of drained peatlands was very active in Finland during the 1960s to 1980s, which may have had long-term effects on phosphorus exports (Nieminen et al., 2018). More sites indicated decreasing rather than increasing trends for TP, which may be because the waters are still recovering from the effects of past fertilization Nieminen et al., 2018).
Increasing TN concentration trends in the largevolume tree stands of the south may be due to several reasons. One mechanism by which tree cover may have a positive effect on the TN concentrations is the effect of tree canopy on dry deposition capture of nitrogen (Sievering et al., 2007). Forest-covered areas capture more dry deposition than open areas, and nitrogen deposition is higher in boreal regions in the south than in the north (Lövblad et al., 1992); together, these two factors may increase TN concentrations in surface waters particularly in mature tree stands in the south. The effect of tree stand on TN concentrations may also be due to increased nitrogen-rich tree litter input into the soil (Straková et al., 2010(Straková et al., , 2012. In drained peatland forests, one mechanism by which tree cover contributes to increasing TN concentrations may also be the increasing evapotranspiration along with maturing of the tree stands , resulting in lower water levels and thicker oxidized peat layers in drained and densely forested peatlands compared with sparsely covered Fig. 1 The relationship between tree stand volume (m 3 ha −1 ), temperature sum (d.d., > 5 °C), and catchment area (ha) in the 29 catchments included in the study and TN concentration trends (Sen's slope estimates) in waters discharging from the catchments Page 7 of 9 371 or open areas. However, there was no correlation between drained area in the catchments and increasing TN concentrations in our data. Thus, the correlation between tree stand volume and increasing TN concentrations may not be specific to drained peatlands but may also apply to mineral soil forests.
Increasing TN trends primarily in the south and large-volume tree stands are worrying from the viewpoint of the future evolution of water quality in forested catchments. Future climatic conditions in the north may become similar as they are now in the south (Ruosteenoja et al., 2016), suggesting that TN concentrations in waters from forested catchments may begin to increase also in the north. The generally accepted policy in northern countries is to increase afforestation and tree growth in order to increase carbon sequestration in forest biomass and replace fossil fuels with renewable materials. Based on the results of this paper and some earlier studies (Finstad et al., 2016;Nieminen et al., 2021), this well-meaning policy may have the negative side effect of increasing carbon and nitrogen concentrations in waters discharging from forested catchments. It is thus of utmost importance to carry on monitoring water quality from these and other forested catchments in order to assess if the observed positive "forest biomass-concentration trend" relationship for TN is due to historic reasons or if increase in forest biomass will also contribute to nitrogen concentrations in the future.
It should be noted here, however, that although increasing stand volumes can result in increasing nitrogen and carbon concentrations in waters from forested catchments, water flows may reduce concurrently due to higher evapotranspiration of the larger tree stands . Thus, the overall result of the "greening effect" in high latitudes may not be increasing carbon and nitrogen exports (kg ha −1 year −1 ) to water courses, but less water with higher nitrogen and carbon concentrations in receiving lakes and streams particularly during growing season. The studies in forested areas should thus perhaps focus more on water quality issues than attempt to produce estimates on nutrient and carbon exports induced by forestry (e.g., Finér et al., 2021). Less water with poorer quality in forested lakes and streams particularly during summer seasons would significantly reduce their value for recreational purposes, and plausibly also severely decrease their aquatic biodiversity (Kritzberg et al., 2020).
In conclusion, our study indicated that there have been very variable trends in nitrogen and phosphorus concentrations in waters from forested catchments in the Nordic region over the past decades. The mechanisms behind varying phosphorus trends remained largely unexplained, but the trends in nitrogen concentrations were positively correlated with tree stand volume and temperature sum. This study thus indicates that the general increase in tree cover and biomass in forested catchments in the Nordic regions over the past decades may not only have increased carbon concentrations in forested streams and lakes (Finstad et al., 2016;Nieminen et al., 2021), but also nitrogen concentrations. A positive correlation of increasing nitrogen concentrations with temperature sum raises concerns of the future evolution of nitrogen exports under a warming climate, as does also the correlation with tree stand volume due to the generally accepted policy of tackling climatic crisis by boosting tree growth. However, future research is still needed to assess which are the actual processes related to stand volume and temperature sum that contribute to increasing nitrogen concentrations.
Funding Open access funding provided by Natural Resources Institute Finland (LUKE).
Data Availability
The data sets generated during and/or analyzed during the current study are available from the corresponding author on reasonable request.
Conflict of Interest The authors declare no competing interests.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http:// creat iveco mmons. org/ licen ses/ by/4. 0/. | 2021-08-31T13:42:09.897Z | 2021-08-31T00:00:00.000 | {
"year": 2021,
"sha1": "16e68d745c1c492a86faf6aca6fc6f9b89ee90ae",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s11270-021-05293-y.pdf",
"oa_status": "HYBRID",
"pdf_src": "Springer",
"pdf_hash": "16e68d745c1c492a86faf6aca6fc6f9b89ee90ae",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": []
} |
181504576 | pes2o/s2orc | v3-fos-license | PREVENTION OF SEXUAL ABUSE IN CHILDREN WITH MENTAL RETARDATION: A SYSTEMATIC LITERATURE R EVIEW
Children with mental retardation have a 2.5 times higher risk of child sexual abuse (CSA) than other normal children. Mentally retarded children tend to experience verbal ability barriers in describing CSA cases. This study aims to describe the prevention of CSA by parents, teachers and health professionals, including the risk of CSA in mentally retarded children, need for CSA prevention, form of CSA prevention and effectiveness of CSA prevention in mentally retarded children. Method: PRISMA was used as a guide in compiling the systematic literature review based on the inclusion criteria to determine research articles, search strategies, and research findings. Four databases used in this research included Sage Journal, Ebsco Host, Scopus and Taylor & Francis (Social Sciences & Humanities Subject). Results: mentally retarded children have a higher risk of CSA than other normal children. The need for CSA prevention are felt by parents and teachers because of the lack of knowledge and ability to do prevention. The form of CSA prevention can be done with various efforts from various parties including parents. The CSA prevention program through role-play or training to face threatening situations is effective for mentally retarded children. It is expected that these findings will become a reference for nurses in ensuring the need for CSA prevention knowledge and ability for mentally disabled children.
Introduction
Mental retardation is a condition characterized by disruption or limited intelligence development and barriers to adaptive behavior (Salvador-carulla & Reed, 2011). Children with mental retardation have IQs from 55 to 74 and lack of ability in memory capacity, language skills, imagination and ability in creativity (Kachalaki & Faghirpur, 2015).
Based on a survey in 2015, there were 6,050,725 children aged 6 to 21 years who were identified as having limitations, and 3.4% of them had intellectual limitations or called mental retardation, which were managed by IDEA Part B in 50 states in the United States (Devos, Richey, & Ryder, 2017). Mentally retarded children have internalization and externalization disorders, decreased usefulness, lack of social skills and increased feelings of loneliness or likely being quiet. This triggers an increased chance of child sexual abuse (CSA) in children and adolescents with mental disabilities 2.5 times higher than those without mental disabilities (Helton, Gochez-kerr, & Gruber, 2018).
Besides, mentally retarded children tend to be unable to recognize inappropriate relationships with others. They also may have limited verbal ability to describe CSA cases and to report violent behavior they may experience (Murphy, 2011 ;Hibbard, 2007).
CSA is a social problem that is increasingly worrisome. According to a survey in 2015 in the United States, the estimated incidence of CSA in children under 5 years (2011)(2012)(2013)(2014)(2015) reached 8.4% of the 683 000 children (U. S.Department of Health and Human Services, 2015). In fact, mentally retarded children have a higher risk of experiencing CSA than normal children, with a percentage ratio of 21%:10% (Helton, Gochezkerr, & Gruber, 2018).
Most teachers in special schools state the importance of efforts to meet early sexual education needs for children and adolescents with mental disabilities. Besides high educational needs, they also reveal that there is inadequate sexual education conducted in schools. This relates to the lack of content taught and the absence of clear guidelines and appropriate tools in teaching sexual education to mentally retarded children and adolescents (Tsuda, Hartini, Hapsari, & Takada, 2017).
Parents who have mentally retarded children have concern about the risk of CSA for their children. Parents realize that there is a risk of CSA in most mentally retarded children which comes from the closest persons of the child (Thomas, Kumar, & Deb, 2014). Effective prevention of CSA is expected to reduce parents' anxiety about the independence of mentally retarded children in protecting themselves against alleged CSA (Kucuk, Platin, & Erdem, 2017).
A plenty of studies have explained the prevention of CSA that can be done in children with interventions, from the range of early age to school age that can improve children's knowledge and ability to protect themselves from the risk of CSA (S. Kim & Kang, 2017;Zhang, Chen, Feng, & Li, 2014). A systematic review by Wals, Zwi, Woolferenden, and Shlonsky (2018) describes the CSA prevention in the form of school-based prevention program that can improve the protection of students. However, there are no studies that clearly explains the prevention of CSA specifically in children with mental retardation. Therefore, the researchers are interested to see how prevention of CSA in mentally disabled children is analyzed from related articles.
This present research aims to describe the CSA prevention in mentally retarded children which are done by parents, teachers and health professionals. The prevention being discussed includes the risk of CSA in mentally retarded children, the need for CSA prevention, form of CSA prevention and the effectiveness of CSA prevention in mentally retarded children.
Inclusion Criteria
The criteria included in the article search consisted of both quantitative and qualitative research articles describing the CSA prevention in mentally retarded children by parents, teachers and health professionals. The article publications were restricted from 2010 to 2018 written in English. Articles excluded from the criteria were that of systematic review, literature reviews, and meta-analysis.
Search Strategy
In the search of relevant articles, the researchers used a number of key words consisting of, "prevention", "child sexual abused", "disable", "child sexual abused prevention","and" ,"or", "intellectual disability", and "retardation mental children".
Results
There were 1130 articles found after entering the keywords. Of the 1130 articles, 55 articles were selected based on titles and abstracts. Then the second selection was done by reading the complete articles in accordance with the inclusion criteria. Afterwards, there were 8 articles obtained.
Mentally retarded children have a higher risk of CSA. To overcome this, various parties including nurses, genetic counselors, teachers and mothers as the closest people to children, need efforts to prevent CSA in mentally retarded children. Explanation of CSA prevention is presented through a collection of 8 related journals (6 quantitative articles and 2 qualitative articles) as follows.
Risk of CSA in Mentally Retarded Children
Based on the research Helton, Gochezkerr, & Gruber (2018), hrough a survey of 5,873 children aged 0-17.5 years from 83 states in the United States, it was found that approximately 22% of the children with mental retardation were suspected experiencing CSA. Mentally retarded children had a higher percentage of tendency to experience CSA than normal children with a percentage of 21%:10%. Further, female children with mental retardation had higher risk of CSA than the male ones. These mentally retarded children had 2.5 times higher chance of getting sexual violence compared those without intellectual disability. This is because mentally retarded children have internalization and externalization disorders, decreased usefulness, lack of social skills and increased feelings of loneliness or tend to be quiet (Helton, Gochez-kerr, & Gruber, 2018).
In another qualitative research, Gurol, Polat, & Oran (2014) on 9 mothers who had mentally retarded children, it was revealed that most mothers were anxious because they knew that mentally retarded children had a risk of CSA due to the children's impaired developmental abilities. This is in line with the research of Thomas, Kumar, & Deb (2014) that investigated 60 mothers who had children with intellectual disabilities. The findings obtained were that the mothers (53%) stated the possibility of the risk of violence at home, others (40%) stated the possibility of risk obtained in a quiet place in an environment, and most of them (86%) stated that violence could be obtained from the closest persons to their children.
The Need for CSA Prevention
Based on the analysis of this research, four out of the eight articles stated that parents especially mothers, teachers, nurses, or counselors of community social institutions stated that they needed efforts to prevent CSA in mentally retarded children who were at risk of experiencing this.
According to Tsuda, Hartini, Hapsari, & Takada (2017), investigation, a study involving 130 female and male teachers who taught mentally retarded children (4-18 years) found that there were (83.1%) teachers thought that sexual education in school was inadequate. This was due to the lack of materials regarding CSA prevention education that must be taught by teachers to mentally retarded children. Meanwhile, there were (71.5%) teachers stated that the purpose of health education in schools was to teach children the appropriate knowledge.
Research by Gurol, Polat, & Oran, (2014), found that 8 of 9 mothers as the participants who had mentally retarded children stated that they needed sexual education for their mentally retarded children. Most mothers stated that sexual education was expected to be provided by schools through seminars that mothers could also participate in. This was due to lack of information that the mothers had.
In line with the research conducted by Neill, Lima, Thomson, & Newall (2015), on 6 mothers who had mentally retarded children, they found that the mothers explained that they needed information on explaining puberty or related-sexuality information to their children by considering cognitive ability and development of the children.
Further investigation on 38 counselors of mentally retarded children found that there were some of the parents, caregivers, and patients still asked about information regarding sexual abuse prevention in puberty as well as reproduction health for mentally retarded children (Murphy, Lincoln, Meredith, Cross, & Rintell, 2016).
3 Form of CSA Prevention
Prevention can be done by parents especially mothers at home, by teachers at schools and even by health professionals or counselors. However, there are not many interventions described in detail in these 8 articles reviewed. Gurol, Polat, & Oran's (2014) study found that the mothers stated they were always worried about the condition of their children by not leaving them alone, watching over them while sleeping at night, and watching over the interaction of their mentally retarded children their other normal children. The investigation by Tsuda, Hartini, Hapsari, & Takada (2017) also found that the topics of sexual education taught by most teachers (66.9%) were how to maintain their own bodies (male and female) that can and should not be touched or seen by others as well as differences in gender.
From Y. Kim's (2016) research, the form of prevention were provided in the form of program or training administered to 3 mentally retarded children. It was a CSA prevention program that provided a training or simulation that taught self-protection skills to mentally retarded children. In that simulation, the children were being faced with particular threatening situations such as when there was someone asked them to undress, asked them to kiss her/him, and touched their sensitive parts.
The Effectiveness of CSA Prevention in Mentally Retarded Children
The implementation of CSA prevention has a positive impact in improving the knowledge and ability of mentally retarded children in preventing them from the risk of CSA. CSA prevention is also useful in providing information to parents in collaborating with schools and health professionals.
The finding from the research conducted by Kucuk, Platin, Erdem, (2017) showed the data that CSA prevention program intervention that was performed in 15 mentally retarded children with mild mental retardation (9 males, 6 females) aged 10-14 years was effective in reducing the parents' anxiety on their mentally children's ability in protecting themselves against alleged CSA. The prevention was done by increasing children's knowledge of sensitive parts of the body that need to be maintained, good and bad touches, how to verbally express the word "no" when experiencing unpleasant situations, how to protect themselves from strangers and report that unpleasant experience to the closest persons. Cooperation with parents and reinforcement at home are expected to increase the effectiveness of CSA prevention education.
Another research by Y. Kim (2016) also revealed that the prevention given in the form of program or training administered to 3 mentally retarded children in the form of a simulation or role play was effectively used in teaching self-protection skills to mentally retarded children. There was an increase in the ability of mentally retarded children to verbally agree or reject certain situation by saying "no", as well as the ability to leave or to report an unpleasant situation to the closest persons.
Discussion
In compiling this literature review, there were analyses from 8 articles relating to prevention of child sexual abuse (CSA) in mentally retarded children, which were traced to relevant articles from 2010 to 2018. The results found from these articles explained the risks, the need, the form and effectiveness of CSA prevention in mentally retarded children. Furthermore, these various articles used varied methods such as cross sectional, quasi experiment, qualitative, and A Multiple Probe across research.
There were four cross sectional methods criticized based on Joanna Briggs Institute (JBI) guidelines.
All of them described the subjects of research and background of the respondents in detail consisting of respondent characteristics of the mentally regarded children by setting the age range, the characteristics of the mentally retarded children's mothers, teachers who taught mentally retarded children and the characteristics of counselors who were included in the research. In these four articles, the inclusion criteria were explained, but the exclusion criteria in the sampling were not (Helton et al., 2018;Tsuda et al., 2017).
The analysis carried out in the articles uncovered that prevention of CSA in mentally retarded children requires the collaboration of various parties including parents as the closest persons to children, teachers, counselors and health professionals. This is in line with the findings of Karia, Polat, and Oran (2014) in evaluating the views of mothers who had mentally retarded children related to the provision of sexual education and protection for children from possible abuse.
The findings of this research are guidelines for nurses, rehabilitation centers and schools to prioritize sexual education for mentally retarded children, to fulfill lack of information by families and to increase awareness of this topic. This is consistent with the research of Zang, Chen, Feng, & Li (2014) stating that the key success of CSA prevention was in collaboration and attention from various parties closest to preschoolers including parents, teachers, social workers, policy makers and public in general.
Forms of prevention done by mothers at home were less explored in these articles. There are still very few of research exploring experiences of mothers in preventing CSA and obstacles they find at home to prevent CSA. One of the eight articles in this research, Gurol, Polat, & Oran (2014), informed that there were some mothers expressed their anxiety over their children by not leaving them alone, watching over them while sleeping at night, and keeping an eye the interaction between their mentally retarded children with their other normal children.
Of the eight articles, 2 articles found the intervention given through training or role play by positioning the mentally retarded children in particular threatening situations were effective in increasing the knowledge and ability of mentally retarded children to protect themselves from the risk of CSA. CSA prevention is effective in reducing parents' anxiety over how their children can protect themselves from alleged CSA. Collaboration with parents and reinforcement at home are expected to increase the effectiveness of CSA prevention education (Y. Kim, 2016;Kucuk, Platin, & Erdem, 2017). This is in accordance with the research of Kim and Kang (2017), informing that CSA prevention program for children is an effective method of improving the ability of children to recognize forms of CSA prevention, including knowledge and behavior of self-protection.
Strange & Limitation
In reviewing the articles related to the prevention of CSA in mentally retarded children, various results were found in terms of various efforts in increasing the independence and ability of mentally retarded children to protect themselves. However, there was also limitation of this research, namely the heterogeneity of methods, populations, areas, and the results of articles that were difficult to make comparisons among the great impacts of each of the research articles.
Conclusion
In the research development so far, there have not been many studies related to prevention of CSA specifically carried out in mentally retarded children. The conclusions of this review system are drawn into four topics related to the prevention of CSA in mentally retarded children. First, mentally retarded children are children with developmental disorders who have a high risk of CSA due to their developmental disorders, especially verbal communication in expressing the insecurity they may experience.
Second, the need for information on prevention of CSA is felt by various parties including parents especially mothers as the closest person to children, and teachers who think that the existing content of information regarding CSA prevention is still inadequate. Third, the forms of CSA prevention are varied, including supervision by mothers at home, teaching of CSA prevention conducted at schools and CSA prevention training conducted by counselors or nurses.
Fourth, the CSA prevention program carried out for mentally retarded children through role-play or training on how to face threatening situations is effective for mentally retarded children. To investigate increased chances of sexual abuse of children with LD and the related factors.
-Children with LD had lower daily functional behavior, less social skills than children without LD. -Children with LD had a 2.5 times higher chance of getting sexual abuse than children without LD -Male children with LD had lower chance than the girls -Efforts are required to prevent the risk of sexual abuse to protect children with LD -There is limited program for sexual abuse prevention program specifically designed for children with LD.
DOI:
80 Cite this as: | 2019-08-04T02:54:01.099Z | 2019-01-01T00:00:00.000 | {
"year": 2019,
"sha1": "02247f86147ed66d6d565888fa8355c3e4136cb4",
"oa_license": "CCBYNCSA",
"oa_url": "https://ijds.ub.ac.id/index.php/ijds/article/download/132/95",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "02247f86147ed66d6d565888fa8355c3e4136cb4",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Psychology"
]
} |
27672106 | pes2o/s2orc | v3-fos-license | Risk factors for alcohol use among youth and main aspects of prevention programs.
Increasing alcohol consumption becomes more relevant social and health problem among youth. There is no reason to believe that this problem will decrease or be solved in the future. In such situation, it is necessary to build on the experience and conclusions of research performed by other countries. In this article, the risk factors for alcohol consumption among youth and preventive programs, in which family, school, and community play the main role, are analyzed. Such programs may attract the attention of public health specialists and public health politicians and can be not only declared, but also really implemented.
Introduction
Despite attitudes to reduce alcohol consumption in Lithuania, indicated officially in health program of Republic of Lithuania and Alcohol Control Law, the majority of law acts and law corrections during the last decade basically only liberalized alcohol marketing, advertisement, improved availability, formed favorable public attitudes towards alcohol consumption. Following processes inevitably influenced attitudes of children and youth regarding alcohol consumption and changed habits of its consumption. The results of two international surveys of schoolchildren lifestyle ("European School Survey on Alcohol and other Drugs -ESPAD" and "Health Behavior in School-Aged Children -HBSC") demonstrated only increasing extent of alcoholic consumption, and in the nearest future, a steady decrease cannot be expected as well (1)(2)(3).
Situation can be characterized as problematic: preventive matters of alcohol use are being solved slowly without clear tactics, programs are implemented episodically, and the community involvement in dealing with problems is limited. On the other hand, no quality evaluation system for alcohol prevention programs is developed.
Above-mentioned facts suggest that it is very important to analyze the phenomenon of alcohol consumption and to strive at least for stabilization of alcoholic beverage use among young people in Lithuania. In such situation, it is actual to appeal to research conclusions and experience of other coun-tries. In the article, we aimed to review the risk factors for alcohol use among youth and preventive programs in which the main role is played by family, school, and community. The information provided in this article might attract the attention of public health specialists, public health politicians and can be useful in planning and realizing alcohol use prevention programs for youth.
The risk factors for alcohol use
In the last decade, it is more and more referred about a risk behavior syndrome in adolescence (4). This term commonly specifies behavior increasing serious danger to the health. Adolescence, being itself a unique transition period, is a risky period in person's life. The risk factors developing in this moment easily cause changes of behavior. Factors of alcohol use develop early and consistently increase a probability to use it in the future (5).
Epidemiological researches enable to indicate some factors of the universal risk of alcohol use among young people such as uncontrolled availability of alcoholic beverages, miscellaneous family problems, peer pressure, and others. Youth using alcohol commonly smoke, their sexual behavior more frequently can be characterized as unsafe sex, and suicides or other risk behavior are common among them. Based on scientific literature, the presumptions of alcohol use and risk factors can be divided into two categories. The firstlegal, social, and cultural factors that provide norma-tive presumptions of behavior. The second -the factors of the individual and its interpersonal environment.
Legal, social, and cultural factors
Laws and norms. The consequences of global alcohol consumption motivate the states to realize the principles of alcohol control policy by legally and economic restrictions. In the world practice, alcohol consumption is reduced by taxes. The associations between the use of alcoholic beverages and price are established. A 1% increase in the price of alcohol reduces beer consumption by 0.3%, wine -1%, and spirit -1.5% (6). F. A. Sloan and colleagues found that a 10% increase in the price of alcoholic beverages decreased the number of binge-drinking episodes by 8% (7). According to the analysis of statistical data, health economic specialists indicate that an increase in alcohol taxes reduces the mortality rate from liver cirrhosis (8). Legal age limits for buying alcoholic beverages, advertising restrictions are an effective means against its use among youth. It is established that the increase in the minimum legal drinking age and alcohol taxes is directly proportional to the decrease of its consumption (9). Price control in reducing the total alcohol consumption is approximately twice as effective as health education (10).
Availability. Although the availability of alcoholic beverages is determined by law and public norms, this condition is analyzed as separate factor stimulating alcohol use. Incompletely strict control of availability and marketing of alcoholic beverages determines higher consumption. E. R. Weitzman et al. found that the risk for youth drinking increased from 4.76 to 6.50 when alcohol was easily obtained (11). According to the same authors' data, strict alcohol control at school decreases its consumption.
Social and economical factors. F. J. Elgar et al. found that alcohol consumption among schoolchildren depended on social and economical possibilities of the country's population and their inequality (12). According to the data of this research, 11-and 13-year-old adolescents in the families of high income compared with contemporaries in the families of low income were 82% and 123%, respectively, more likely to drink regularly (5-6 times a week). These adolescents were more frequently intoxicated with alcohol. Such situation is associated with increasing income of parents and more pocket money given to children. Finish researches reported that drunkenness was more common among 14-year-olds who got more pocket money (13). A risk of alcoholic beverage use among youth depends on parents' occupation and profession too. M. Droomers et al. found that adolescents from the lowest occupational groups 2.5 times more frequently drank alcohol (14). Adolescents who were working more than 10 hour a week consumed alcohol more often (15).
Possibility is not rejected that there are strong associations among social factors of neighborhood and negative behavior. Researchers of Finland analyzed alcohol use among adolescents according to the aspect of socioeconomic factors (16). The relative risk for alcohol use was 1.35 times among boys living in areas with low employment status compared with boys living in areas with better employment status, whereas the relative risk for alcohol abuse was 1.47 times among girls living in areas with high education status compared with areas with low education status. The results of research, mentioned above, established the influence of social factors on alcohol use.
Factors of individual and its interpersonal environment
Physiological and genetic factors. Researchers report that an early risk of alcohol use associates with the complex of physiological processes in the organism such as alcohol metabolism, the development of endocrine and nervous system functions. Organism's sensitivity to alcohol is biochemically related to platelet monoamine oxidase activity (17). Studies of monozygotic and dizygotic twins explain addiction to alcohol use (18). Alcohol dependence in monozygotic twins is more than twice as in dizygotic. Genetic studies found that from 40% to 60% of biological parents influence children alcoholism (18). According to other similar studies, less than 30% of children of alcoholic parents became alcoholic themselves (5).
Individual features. In the scientific literature, individual features are indicated as early risk behavior factors (4). According to the date of longitudinal studies, children with a "difficult" temperament, who are characterized as reticent, emotionally unstable, slow adapting, became regular users of alcohol, drugs, and tobacco in late adolescence (19). Alcohol consumption depends on other personal characteristics: impulsivity, extraversion, sensation seeking, inaptitude to cope with psychological problems (20). L. Hechman with colleagues, after a 10-year cohort study, reported that hyperactivity in childhood predicted future alcohol use (21).
Family structure role. Scientists in various aspects analyze the associations of alcohol consumption with family and its functioning peculiarities. There are several factors which increase alcohol consumption among children: disrupted communication among the members of family, insufficient control of parents, persistent con-Gintarė Petronytė, Apolinaras Zaborskis, Aurelijus Veryga flicts, long-lasting family disorganization after of father's or mother's death, divorce (22). Adolescents living with single parents or parent married one more time tend to use alcohol more than three times. Older brother or sisters using alcohol show a strong negative example for the juniors. According to longitudinal data, parental alcohol use is an important risk factor stimulating adolescents to use it in late adolescence, especially if adolescents behave antisocially (23). The research by American scientists demonstrated that maternal drinking during pregnancy determines alcohol-related problems in offsprings at the age of 21 years (24).
Parenting style in the family. Parenting style is one of the most important characteristics of family. A. C. Fletcher and B. C. Jefferies found that alcohol use depended on parental authoritative style (25). Adequate control and the acceptation of children's autonomy displays responsibility for behavior and self-regulation that help to resist against negative peer pressure. In the families where there are strict roles and restrictions, alcohol use is decreased, but constant parental communication about the consequences of alcohol use is not effective (26). According to the data from same research, the trust of parents in the older adolescents prevents the juniors from the alcohol use. However, the more parents pay attention to alcohol problems of the younger children, the more older adolescents drink.
Academic achievements. Adolescents using alcohol are characterized by poor academic achievements, more often are absent from school, and have more academic penalties (22).
Peer pressure. Research in the last decade proved strong associations existing between alcohol use and peers (20). Adolescents under the influence of drinking peers pass from binge drinking to the problematic stage of alcohol use (27). F. Li et al. conclusions indicated that drinking adolescents had stronger influence on younger adolescents (14-15-year old) than seniors (16-18-year old) (28).
Conceptions reducing the risk factors and hindrances in their implementation
The creation of prevention programs starts with an analysis of a real situation and its evaluation. In this stage, it is essential to identify risk factors determining alcohol use. There again, it is important to establish causal risk factors related to alcohol consumption.
The concepts of vulnerability and resiliency encourage defining the individual risk level. Vulnerability denotes intention of susceptibility to risk, and resiliency -the ability to surmount or withstand against the risk (29). It is important to pay attention to the effect of risk factor interaction, which increases their general influence (5).
Strategies of alcohol prevention must be directed at eliminating the most common risk factors, distancing from the moment of their development, and establishing groups or individuals at the greatest risk (5). It may be possible to reduce the risk factors or eliminate them at all directly with the help of preventive intervention. While creating a program, a task is set to ascertain which risk factors can be controlled, moderated, or which risk factors cannot be affected at all. For example, the problem of alcoholism in family can be so complex that is impossible to solve it; then it is needed to search for means how to protect children growing in the risky environment. Therefore, the importance of protective factors becomes very great. The protective factors reduce the effect of risk factors.
J. S. Brook et al. noted two mechanisms by which protective factors reduce risk for adolescent alcohol use (5). In the first "risk/protective" mechanism, the influence of risk factors is moderated by protective factors, for example, a risk of adolescent alcohol use due to drinking peers can be straiten by strong attachment to parents or defined norms of behavior. In the second "protective/protective" mechanism, one protective factor determines another thus strengthening effect of it, for example, a strong attachment to parents strengthens discipline that effectively protects adolescents against alcohol use.
Disregarding the role of protective factors in reducing the effect of risk factors is one of the major reasons for failure of most alcohol prevention programs. Therefore, it is actually that prevention programs, directed at reducing risk factors, simultaneously would strengthen the protective factors. It is essential to understand interaction mechanisms of risk and protective factors and predict optimal actions to avoid alcohol use of youth. It is noted that scientific researches revealed many abovementioned factors which can successfully help to avoid global subsequence of alcohol use (5).
Recently special efforts of countries and competent organizations are directed towards laws and social norms (30). Restriction of alcohol consumption is effected in several ways -by increasing taxes, age limits for buying alcoholic beverages, tightening up hours and places of sale. As it is noted above, restriction of availability and increasing prices of alcohol reduce consumption frequency. Age limit and restriction of alcohol sale places have desirable results too, but are less effective than taxes (9).
Risk factors for alcohol use among youth and main aspects of prevention programs
Based on above-mentioned factors, applied strategies should give expected effect on reducing the alcohol consumption among youth. Nevertheless, vicious attitudes exist in the society that youth alcohol use is a normal and inevitable phenomenon. It is probable that changing social norms would give positive results. Therefore, it would be effective to involve the mass media which would consistently form consciousness of community. Unfortunately, weak interest of various sectors, media too, only retards the solving of alcohol-using problems. Another problem, often created by mass media, is that some conclusions of research are incorrectly interpreted, for example, that alcohol use reduces blood pressure and the risk of cardiovascular diseases (31).
Social skills training programs
Scientific studies confirmed strong associations between adolescents' alcohol use and communication with drinking peers. According to social learning theory, alcohol consumption in adolescence can be relatively explained as modeled behavior of others (32). Consequentially, friends are a risk or causal factor, so efforts directed towards them might considerably reduce alcohol use. In worldwide practice, social skills training programs are implemented that learn communication and interrelationship skills, resistance to peer pressure. Such programs motivate the formation of negative attitude towards drinking, develop self-control skills, and learn to cope with stress and to solve emergent problems positively (33"35). E. A. Smith et al. found that social skills training already after a year has given positive changes in alcohol use (36). It is noticed that aggressive, rejected by peers, or bullied children more tended to delinquent behavior, so programs designed to develop social competence can help to solve not only alcohol consumption, but the other problems too. Social skills training programs reveal real guidelines for prevention strategies. However, an object for research often is the age of children for whom interventions are most effective.
Discussions emerge about the programs that are designed for the prevention of several addictions, for example reduction in alcohol and drug use and smoking. According to P. L. Ellikson's and R. M. Bell's data, mixed programs had influence only on two addictions from three (37). Meanwhile A. Biglan et al. found no influence of smoking refusal program on alcohol and marijuana use (38). Continuing discussions once again confirm complex associations of these addictions. More research is needed which would deter-mine the interactions of various addictions and possibilities of their application in programs. Besides, the titles of programs, often having form of slogan, are important too. Recent data suggest that such formulations as "just say no" or "do not drink," which are common in Lithuania too, only encourage forbidden behavior (34,35).
School program
The majority of articles designed to analyze alcohol use among youth highlight preventive role of school. School, where thousands of children meet every day, is an ideal place for realization of alcohol prevention and intervention programs. However, not only active actions are necessary, but and comprehensive analysis of school environment, when schoolchildren are surveyed for the identification of the main problems. Only in such case, it is possible to expect that the content of prevention programs will fulfill the existing needs and their realization will be effective.
In Lithuania, it is generally accepted that universal prevention models are alcohol prevention programs directed at school-aged population. National alcohol programs at schools are often a political priority of country, but it is a matter of debate whether they are the most effective part of prevention (35,39). Researches of other countries demonstrated that not every program could have a positive effect. In the US, the largest at the country level drug and alcohol use prevention program, DARE (Drug Abuse Resistance Education), has failed (35,40).
The spread of information is one of the most common means of alcohol consumption prevention programs taking place in schools. However, only information about alcohol and its harm to health commonly do not produce expected results. Probable consequences of alcohol consumption are seen by most youth as distant and directly unrelated to their behavior. There again, providing information unprofessionally it is possible just to pique curiosity and to stimulate alcohol consumption. Consequently, prevention programs designed for alcohol use prevention and reduction of harm among youth must be based not only on realistic arguments, suggestive examples, but also effective methods.
In most countries, alcohol preventive programs are grounded on interactive teaching during which the priority is given to motivation, contemplation, and emotional education (33). Programs at universities familiarize students with "safe-drinking" norms (39). Analogous programs could be really implemented in Lithuanian universities and colleges too.
School organizational activities are related to schoolchildren behavior, academic achievements, and attendance, so its efforts and changes directly motivate young people to live healthier and to avoid addictions. Novelties of learning, opportunity to be involved in school life, wide spectrum of out-of-school activities, projects -there are only few alcohol prevention strategies.
Family programs
Little research has yet been conducted to explore the possibilities of alcohol prevention in family, and there are no data about pursued programs of sufficient size in Lithuania. Family is commonly analyzed as an integral prevention part, the functions of which confine to reunions of groups. Meanwhile in other countries, the main component of programs is family involvement. The project Northland, which lasted for 7 years in Minnesota, demonstrated how the involvement of parents strengthened communication with children on alcohol use questions (41). Classical examples of early childhood and family support programs showed that such programs helped to decrease academic failure and the problems of childhood behavior in preschool and school, to stop the progression of addictions in adolescent (42,43).
In conclusion, alcohol is a psychoactive substance, legally produced and sold, the production and marketing of which can be easily controlled and regulated by country. Therefore, country's policy when it only maintains and encourages production of alcoholic beverages but leaves the prevention of use and the elimination of consequences to the health specialists is especially dangerous. The results of such policy are obvious -the proportion of youth using alcoholic beverages is increasing. Researches demonstrate that legal measures give the greater effect than pursued programs. Moreover, law means conditionally does not cost anything, but pursued programs need big investments and very high professionalism that applied measures would not cause more harm than benefit. How-ever, it does not deny the necessity of programs and education in scholastic institutions, action with family and community. This review of literature shows that as other diseases of "civilization," alcohol prevention orientated to person must begin at early age. Realizing it, the main role takes family, school, and community. But as the experience of other countries shows, alcohol prevention does not always give expected results -not only material resources, but also objective scientific information about the risk factors for alcohol use among youth and effective means for reducing them are needed. Only correctly adjusting already applied measures, adapting them in the context of the country, and following science-based methodology it is possible to hope that alcohol consumption will stabilize and the aim of Lithuanian health program will be achieved.
Conclusions
1.Alcohol use is determined by many presumptions and risk factors, which conditionally are divided into two categories. First category includes legal, social, and cultural factors that provide normative presumption for behavior. The second -the factors of the individual and its interpersonal environment. 2.The situation on alcohol consumption among youth in the country shows that prevention is ineffective, and possibilities are not explored yet. Therefore, attention given to above-mentioned laws and alcohol prevention programs must be a priority in improving youth mental health. 3.Planning and implementing youth alcohol use prevention programs in Lithuania it is worth to appeal to research conclusions and experience of other countries, correctly adapting and adjusting them to social-cultural and economical context of the country. As compared to other countries, in Lithuania, little attention is paid to the solution of this problem, and frequently it is restricted to information exchange, morals, or prohibitions. | 2018-04-03T01:00:46.643Z | 2007-02-04T00:00:00.000 | {
"year": 2007,
"sha1": "bf05c39b5031a5b7f82f46d16a1690a5b8809f18",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1648-9144/43/2/103/pdf?version=1530183866",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "a98af371cb27d21a4f3768a2888e8f4b77fe6563",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
265485275 | pes2o/s2orc | v3-fos-license | Soluble P2X7 Receptor Plasma Levels in Obese Subjects before and after Weight Loss via Bariatric Surgery
Obesity is a systemic disease frequently associated with important complications such as type 2 diabetes and cardiovascular diseases. It has also been proven that obesity is a disease associated with chronic low-grade systemic inflammation and that weight loss improves this low-grade chronic inflammatory condition. The P2X7 purinergic receptor (P2X7R), belonging to the family of the receptors for extracellular ATP, is a main player in inflammation, activating inflammasome and pro-inflammatory cytokine production. In this study, we evaluated the plasma levels of soluble P2X7R (sP2X7R) measured in a group of obese patients before and one year after bariatric surgery. Furthermore, we evaluated the relation of sP2X7R to inflammatory marker plasma levels. We enrolled 15 obese patients who underwent laparoscopic sleeve gastrectomy, evaluating anthropometric parameters (weight, height, BMI and waist circumference) before and after surgery. Moreover, we measured the plasma levels of inflammatory markers (CRP, TNFα and IL-6) before and after weight loss via bariatric surgery. The results of our study show that one year after bariatric surgery, obese patients significantly decrease body weight with a significant decrease in CRP, TNF-alfa and IL-6 plasma levels. Similarly, after weight loss, obese subjects showed a significant reduction in sP2X7R plasma levels. Moreover, before surgery, plasma levels of sP2X7R were inversely related with those of CRP, TNF-alfa and IL-6. Given the role of P2X7R in inflammation, we hypothesized that, in obese subjects, sP2X7R could represent a possible marker of chronic low-grade inflammation, hypothesizing a possible role as a mediator of obesity complications.
Introduction
Obesity and its related complications have reached epidemic diffusion, and its prevalence is every increasing worldwide [1].It is well known that obesity is characterized by a state of low-grade chronic inflammation that plays an important role in the pathogenesis of obesity-related systemic complications such as cardiovascular diseases, hypertension, diabetes mellitus and some forms of cancer [2].In the obese subject, several inflammatory cytokines are released by inflammatory cells hosted within the adipose tissue but also directly by adipocytes themselves [3,4].These inflammatory cytokines exert direct and indirect effects on the body's metabolic processes, altering glucose uptake in insulin-sensitive tissues [5] and leading to the typical metabolic complications of obesity such as diabetes mellitus [1].
In humans, the expression of many different inflammatory cytokines is positively correlated with body mass index (BMI) and body fat mass [6].In recent years, the role of the P2X7 purinergic receptor (P2X7R) in the initiation of the inflammatory process has become increasingly important [7][8][9][10].P2X7R is a member of the P2X family of purinergic receptors expressed in many different cell types and, in particular, in the cells of the immune system, with a consolidated role in the modulation of inflammation [9].
ATP is generally known as a pro-inflammatory molecule, acting mainly via the activation of P2X7R on target cells after its release by damaged cells or via active release mechanisms [16].Thus, it is possible that P2X7R might affect different functions in adipocytes, and these alterations could be amplified during pathological conditions characterized by systemic adipose tissue expansion such as obesity.
The chronic low-grade systemic inflammation present in obesity due to the release of inflammatory cytokines seems to play an important role in the pathogenesis of obesityassociated systemic and metabolic complications [17][18][19].Recently, it has been reported that P2X7R is released and detectable in circulation, correlating with C-reactive protein (CRP) plasma levels in patients with different inflammatory conditions [20].
In the present study, we have determined the shed P2X7R (sP2X7R) plasma levels in a group of obese patients before and after body weight reduction due to laparoscopic sleeve gastrectomy, evaluating possible correlations with common systemic inflammatory markers.
Results
Table 1 reports the anthropometric characteristics of the studied patients, with basal elevated body weight, BMI and waist circumference together with basal plasma levels of CRP, IL-6 and TNFα that were elevated before bariatric surgery.As expected, after sleeve gastrectomy, we observed a rapid and progressive reduction in body weight (Table 1) together with a significant reduction in the plasma levels of the inflammatory markers (CRP, IL-6 and TNFα) (Table 1).
Similarly, sP2X7R plasma levels were also significantly reduced in obese subjects after weight loss via bariatric surgery (p < 0.001, Figure 1).Interestingly, we observed a significant inverse correlation between plasma levels of sP2X7R and those of CRP, IL-6 and TNFα in obese subjects before weight loss via bariatric surgery (Figure 2A-C, respectively).On the contrary, after bariatric surgery and significant weight loss, we did not observe any statistically significant correlation between these parameters (Figure 2D-F Interestingly, we observed a significant inverse correlation between plasma levels of sP2X7R and those of CRP, IL-6 and TNFα in obese subjects before weight loss via bariatric surgery (Figure 2A-C, respectively).On the contrary, after bariatric surgery and significant weight loss, we did not observe any statistically significant correlation between these parameters (Figure 2D-F, respectively).Interestingly, we observed a significant inverse correlation between plasma leve sP2X7R and those of CRP, IL-6 and TNFα in obese subjects before weight loss via bari surgery (Figure 2A-C, respectively).On the contrary, after bariatric surgery and sig cant weight loss, we did not observe any statistically significant correlation between t parameters (Figure 2D-F, respectively).In order to determine if the decrease in sP2X7R plasma levels after bariatric surgery was related to weight loss, we analyzed the relation between sP2X7R plasma levels and BMI before bariatric surgery, without observing any significant correlation (Figure 3A).Furthermore, we performed a correlation analysis between the difference in sP2X7R and BMI values before and after surgery (∆sP2X7R and ∆BMI, respectively) without observing any significant relation (Figure 3B).
In order to determine if the decrease in sP2X7R plasma levels after bariatric surgery was related to weight loss, we analyzed the relation between sP2X7R plasma levels and BMI before bariatric surgery, without observing any significant correlation (Figure 3A).Furthermore, we performed a correlation analysis between the difference in sP2X7R and BMI values before and after surgery (∆sP2X7R and ∆BMI, respectively) without observing any significant relation (Figure 3B).
Discussion
The worldwide incidence of obesity is ever increasing, together with its associated systemic complications, such as type 2 diabetes mellitus, cardiovascular diseases, dyslipidemias and some cancers, thus representing a huge threat for people's health [21].
Obesity is considered a systemic disease associated with low-grade chronic inflammation [22], and a huge wealth of studies have demonstrated the primary role of chronic inflammation in the pathophysiological mechanisms leading to the development of chronic complications of obesity [23].
The main culprits of this low-grade chronic inflammatory state of obesity are abundant adipose tissue, through the production of pro-inflammatory adipokines, and immune cells, primarily macrophages and lymphocytes, which are resident in or migrate within the adipose tissue attracted by chemotactic stimuli directly secreted by the adipocytes [24].
To confirm the close association of obesity with inflammation, it is well known that obese subjects almost always show moderately elevated plasma levels of inflammatory markers such as CRP and IL-6 [25].Furthermore, the loss of fat mass, induced by lifestyle modifications, pharmacological therapy and/or bariatric surgical treatment, determines a reduction in inflammatory cytokine levels and an improvement in the inflammatory state that characterizes obesity, confirming the primary role of adipose tissue in the pathogenesis of this condition [26].
It has been demonstrated that the purinergic receptor P2X7R plays a primary role in many different inflammatory processes underlying numerous pathologies (ocular diseases, systemic autoimmune diseases, liver diseases, COVID-19-related ARDS) through activating many different signaling pathways, ultimately determining the activation of inflammatory cells and/or the direct synthesis of cytokines [8,9,27].
P2X7R, a plasma membrane receptor activated by extracellular ATP, derives from the assembly of three identical subunits that can be shed via proteolytic cleavage [28] or associated with plasma membrane-derived microvesicles and microparticles in different cell types [29].Alternatively, it is possible that the full trimeric P2X7R is shed from cells as for the single subunits, as recently demonstrated by Giuliani et al. [20].In their study, Giuliani et al. showed that, in addition to its expression in many different tissues, P2X7R is released
Discussion
The worldwide incidence of obesity is ever increasing, together with its associated systemic complications, such as type 2 diabetes mellitus, cardiovascular diseases, dyslipidemias and some cancers, thus representing a huge threat for people's health [21].
Obesity is considered a systemic disease associated with low-grade chronic inflammation [22], and a huge wealth of studies have demonstrated the primary role of chronic inflammation in the pathophysiological mechanisms leading to the development of chronic complications of obesity [23].
The main culprits of this low-grade chronic inflammatory state of obesity are abundant adipose tissue, through the production of pro-inflammatory adipokines, and immune cells, primarily macrophages and lymphocytes, which are resident in or migrate within the adipose tissue attracted by chemotactic stimuli directly secreted by the adipocytes [24].
To confirm the close association of obesity with inflammation, it is well known that obese subjects almost always show moderately elevated plasma levels of inflammatory markers such as CRP and IL-6 [25].Furthermore, the loss of fat mass, induced by lifestyle modifications, pharmacological therapy and/or bariatric surgical treatment, determines a reduction in inflammatory cytokine levels and an improvement in the inflammatory state that characterizes obesity, confirming the primary role of adipose tissue in the pathogenesis of this condition [26].
It has been demonstrated that the purinergic receptor P2X7R plays a primary role in many different inflammatory processes underlying numerous pathologies (ocular diseases, systemic autoimmune diseases, liver diseases, COVID-19-related ARDS) through activating many different signaling pathways, ultimately determining the activation of inflammatory cells and/or the direct synthesis of pro-inflammatory cytokines [8,9,27].
P2X7R, a plasma membrane receptor activated by extracellular ATP, derives from the assembly of three identical subunits that can be shed via proteolytic cleavage [28] or associated with plasma membrane-derived microvesicles and microparticles in different cell types [29].Alternatively, it is possible that the full trimeric P2X7R is shed from cells as for the single subunits, as recently demonstrated by Giuliani et al. [20].In their study, Giuliani et al. showed that, in addition to its expression in many different tissues, P2X7R is released within the bloodstream and is associated with microvesicles and/or microparticles of cellular origin [20].The same study demonstrated the presence of a direct linear relationship between serum concentrations of sP2X7R and CRP in specific inflammatory conditions [20].Recently, we have shown that human adipocytes express P2X7R and that extracellular ATP induces the production and secretion of IL-6 [4].On the other hand, it has been previously shown that P2X7R is expressed also in CD4+ T-cells deriving from peripheral blood and adipose tissue, with increased expression in subjects with an elevated BMI [30].
In the present study, using a recently commercialized ELISA kit directed to the full molecule of sP2X7R, we have shown that in subjects with obesity, weight loss after bariatric surgery induced a significant decrease in blood sP2X7R concentrations compared to the pre-intervention concentrations.
We have also analyzed the relationships between plasma concentrations of sP2X7R and the most common plasmatic inflammatory markers associated with the chronic low-grade inflammation present in obesity, such as CRP, TNFα and IL-6.All these inflammatory parameters were significantly decreased after weight loss, as for sP2X7R levels.Nonetheless, serum concentrations of sP2X7R were inversely correlated with those of CRP, TNFα and IL-6.
The inverse relationships between the serum concentrations of sP2X7R and those of CRP, TNFα and IL-6 are currently unexplained but could be linked to the small number of samples or to a specific effect of the particular inflammatory state that characterizes obesity, with different effects on the production of each specific inflammatory marker.To this regard, there are few published studies regarding the shedding of P2X7R within the plasma in humans [4,31].In their elegant study, Giuliani et al. compared sP2X7R plasma levels in patients suffering from diseases characterized by different inflammatory states, such as infectious disease, brain/heart ischemia and cancer.Interestingly, the authors observed a positive linear correlation between serum sP2X7R and CRP, a serum marker of acute inflammation, in patients with acute inflammatory conditions such as infection/sepsis.On the contrary, the authors did not observe any relationship between sP2X7R and CRP plasma levels in patients with cancer, a condition generally characterized by low-grade chronic inflammation.Similarly, Garcia-Villalba et al. observed a direct correlation between sP2X7R and CRP plasma levels during the acute phase in COVID-19 patients [31].
The quite interesting observation of our study is represented by the fact that after significant weight reduction induced via bariatric surgery, obese patients showed a significant reduction in sP2X7R plasma levels, along with a reduction in well-known markers of systemic inflammation such as CRP and TNFα, as usually expected after weight loss, to signify a reduction in the (still unknown) inflammatory stimuli produced in obesity.
The pathophysiological significance of the elevation of serum sP2X7R in a low-grade chronic inflammatory state such as obesity is unclear.To this regard, it is known that plasma membrane receptors for cytokines can be shed into circulation through proteolytic mechanisms or released in association with plasma membrane-derived microvesicles or microparticles influencing the signaling by the cognate agonist [32][33][34].Furthermore, the measurement of soluble cytokine receptors is gaining interest for the differential diagnosis of selected immune-mediated diseases [20].
It is not possible at the moment to establish which cells release sP2X7R within circulation.Previous studies have demonstrated that dendritic cells are able to release sP2X7R after stimulation with ATP [29].ATP is present in the peri-cellular space in tissues affected by inflammation under conditions allowing for the activation of P2X7R, and, thus, there are conditions allowing for the release of microvesicles containing P2X7R itself or its subunits [13].It is highly probable that, in addition to dendritic cells, many other cell types, mainly immune cells, are able to shed P2X7R within the plasma.Human adipocytes, which express P2X7R and functionally respond to stimulation with extracellular ATP [4], could be able to release this receptor within circulation [13].The lack of any significant relation between the degree of the reduction in sP2X7R plasma levels (∆sP2X7R) and BMI (∆BMI) induced via bariatric surgery seem to exclude a role for adipose tissue as the main determinant of sP2X7R release, although the reduced number of patients considered in the present study could have hampered this result.Nonetheless, the observation that obese subjects show a significant decrease in serum sP2X7R after important weight loss could suggest the use of plasma sP2X7R concentration as a non-specific inflammatory marker in obesity that can be monitored during the clinical management of this disease, possibly being a diagnostic and/or a prognostic marker of this disease.
Finally, the increased release of sP2X7R in obese subjects, regardless of the tissue responsible for the release, could transfer P2X7R to other cells distant from the site of secretion.Considering the role of P2X7R in inflammation, the shedding of P2X7R within the circulation might represent a potential "tranferrable" pro-inflammatory stimulus.
Further studies will be necessary to clarify the cell(s) responsible for sP2X7R release in subjects with obesity and if this "transferrable" sP2X7R has a role in obesity and, in particular, in the pathogenesis of the well-known important systemic complications of this disease.
Patients
We retrospectively analyzed data from fifteen obese patients (6 males and 9 females), with a mean age of 46.1 ± 7.6 years, previously treated with laparoscopic sleeve gastrectomy (LSG).All subjects had their clinical history recorded and underwent and a physical examination, routine blood tests, the measurement of body weight, the measurement of BMI and abdominal circumference and an evaluation of biochemical status.Three weeks before surgery, all patients were put on a balanced very-low-calorie diet (800 kcal/day).Following surgery, all patients were regularly scheduled for post-bariatric follow-up with clinical and biochemical assessments after 1, 3, 6 and 12 months, to monitor any potential complications and, in particular, nutritional deficiencies.In the early stages of recovery after surgery, a liquid diet was recommended before gradually incorporating semi-liquid foods approximately one month after surgery before the transition to a balanced diet.Additionally, all patients were prescribed multivitamins and mineral supplements for the post-surgical period.LSG was performed by the same surgical team in adult obese patients (>18 and <60 years old), with a BMI greater than 35 kg/m 2 in the presence of complications or with a BMI greater than 40 kg/m 2 with or without complications, according with the NIH consensus criteria for bariatric surgery [35].Further exclusion criteria regarded patients with previous bariatric or gastric surgery and active gastric ulcer disease.Absolute exclusion criteria included alcohol addiction and severe psychiatric disorders.Clinical and anthropometrical parameters and plasma levels of sP2X7R, CRP, IL-6 and TNFα in obese patients were evaluated before and one year after bariatric surgery.
Blood Sample Collection
Fasting venous blood samples were obtained in all participants pre-operatively in the morning between 8 a.m. and 9 a.m., after 8 h fasting.Plasma collected in K3-EDTA tubes was separated after centrifuging at 3000 rpm for 15 min and then transferred and stored in sterile Eppendorf tubes at −80 • C for the subsequent analyses.Sample collection and processing were performed in a similar manner in each obese subject at a one-year follow-up visit after LSG.
Plasma Measurements
The plasma concentration of sP2X7R was determined utilizing the P2X7R ELISA kit (CUSABIO, Houston, TX, USA), following manufacturer's instructions.Optical density was measured spectrophotometrically at 450 nm.
Tumor necrosis factor alpha (TNFα), interleukin-6 (IL-6) and hsCRP plasma levels were determined at the Central Laboratory of the University Hospital of Padova using commercial kits.Each determination was run in duplicate.
Statistical Analysis
Mean values with standard deviations (SD) were calculated to describe the data obtained before and after weight loss.All the variables were tested for normality with a graphic method (histograms) using Jamovi (version 2.3.21).All continuous variables were analyzed using a Wilcoxon or paired t-test according to their distribution.Simple linear regression analysis and Spearman's correlation coefficients were calculated for the association between sP2X7R and CRP, IL-6 and TNFα plasma levels before and after weight reduction via bariatric surgery.A p value < 0.05 was considered statistically significant.The statistical analysis was carried out using GraphPad PRISM software (version 9.5.1.GraphPad Software Inc., La Jolla, CA, USA).
Conclusions
In this study we demonstrated that obese subjects present detectable plasma levels of sP2X7R that are significantly reduced after weight loss via bariatric surgery.The possible role of the release of sP2X7R within the plasma in the pathogenesis of the different complications of obesity requires further experimental and clinical studies.
Informed Consent Statement:
The aim of the study was illustrated to the patients and written informed consent was obtained from all subjects involved in the study.
Figure 1 .
Figure 1.Plasma levels of sP2X7R in obese subjects before (PRE) and after (POST) body weight reduction via laparoscopic sleeve gastrectomy (LSG).Data analysis was performed with Wilcoxon matched pairs test.A p-value < 0.05 was considered as statistically significant.
Figure 1 .
Figure 1.Plasma levels of sP2X7R in obese subjects before (PRE) and after (POST) body weight reduction via laparoscopic sleeve gastrectomy (LSG).Data analysis was performed with Wilcoxon matched pairs test.A p-value < 0.05 was considered as statistically significant.
Figure 1 .
Figure 1.Plasma levels of sP2X7R in obese subjects before (PRE) and after (POST) body weigh duction via laparoscopic sleeve gastrectomy (LSG).Data analysis was performed with Wilc matched pairs test.A p-value < 0.05 was considered as statistically significant.
Figure 3 .
Figure 3. (A) Spearman's correlations between plasma levels of shed P2X7 receptor (sP2X7R) and BMI before bariatric surgery.(B) Spearman's correlations between the difference in sP2X7R and BMI values before and after surgery (∆sP2X7R and ∆BMI, respectively).A p value < 0.05 was considered statistically significant.
Figure 3 .
Figure 3. (A) Spearman's correlations between plasma levels of shed P2X7 receptor (sP2X7R) and BMI before bariatric surgery.(B) Spearman's correlations between the difference in sP2X7R and BMI values before and after surgery (∆sP2X7R and ∆BMI, respectively).A p value < 0.05 was considered statistically significant.
Author Contributions: M.R. designed the experiments; M.F. and L.P. performed the laparoscopic sleeve gastrectomy; F.C., M.G. and M.C. performed serum measurement and collected the data; M.R., A.D.V. and F.C. analyzed the data; M.R., M.C., A.D.V., A.G. and R.V. prepared the figures and wrote the manuscript.All authors have read and agreed to the published version of the manuscript.Funding: This study was funded by PRIN-Research Projects of National Relevance-by the Italian Minister of University, project #20178YTNWC_004, and by a donation to the University of Padova (grant no.ROSS_PRIV20_01) from SAFAS Group SpA, Altavilla, Vicenza, Italy.Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and approved by the Ethics Committee of the University Hospital of Padova (approval no.RF-2016-02363566, approval date: 6 June 2016).
Table 1 .
Body weight, body mass index (BMI), waist circumference (WC), plasma levels of Creactive protein (CRP), tumor necrosis factor alpha (TNFα) and interleukin-6 (IL-6), measured before (PRE-LSG) and after body weight reduction via sleeve gastrectomy s(POST-LSG).Data have been reported as medians.Interquartile range values have been reported in brackets.* Data analyzed with paired t-test; # Data analyzed with Wilcoxon matched pairs test.A p-value < 0.05 was considered statistically significant. | 2023-11-29T16:15:13.532Z | 2023-11-25T00:00:00.000 | {
"year": 2023,
"sha1": "2682da1f5712c3d181d3c9f6bad610aaf9268dd3",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/1422-0067/24/23/16741/pdf?version=1700886749",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1e92f068644651b9b282facf4980d44db3d10f5f",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
} |
218953794 | pes2o/s2orc | v3-fos-license | The Attitudes of Emergency Physicians in Turkey towards the Snakebites
Melih Yüksel1, Veysi Eryiğit2, Mehmet Emre Erimşah3, Ulas Karaaslan2, Tunç Büyükyılmaz4, Eylem Ersan5 1Sağlık Bilimleri Üniversitesi Bursa Yüksek İhtisas Eğitim Ve Araştırma Hastanesi,Acil Tıp Kliniği, Bursa, Türkiye 2Balıkesir Devlet Hastanesi, Acil Servisi, Balıkesir, Türkiye 3Bolu İzzet Baysal Devlet Hastanesi, Acil Servisi, Bolu, Türkiye 4Balıkesir Atatürk Şehir Hastanesi, Acil Servisi, Balıkesir, Türkiye 5Balıkesir Üniversitesi Tıp Fakültesi, Acil Tıp Anabilim Dalı, Balıkesir, Türkiye
INTRODUCTION
Throughout the world, about 30% of the 3000 snakes are venomous and accepted as dangerous for humans (1). It is known that at least 421,000 envenomation cases are encountered and 20,000 deaths occur throughout the world each year. Snake bite is one of the major public health problems encountered especially in rural tropical areas (2). Most of the poisonous snakes in the world are seen in South America, Africa and East Asia. The most poisonous species are grouped as Elapidae, Viperidae, Hydrophiida, Antractaspidida and Colubridae (3). These snakes cause neurotoxic, myotoxic and cardiotoxic effects. In our country, 41 snake species are known. 13 of those snakes are poisonous. From those poisonous species, 10 are Viperidae (Vipers), 2 are Colubridae and one species is Elapidae. These snakes are mostly seen in Eastern and Southeastern Anatolia, Eastern Black Sea and in northwestern Thrace. Viperidae, the most abundant species in our country, causes more haemotoxic and local tissue poisonings (3)(4)(5)(6). Emergency departments(ED) are the first contact units for patients subjected to snake bites. Accurate and effective interventions in the EDs are lifesaving. The aim of this study is to contribute the literature by investigating emergency physicians' knowledge and skills regarding snakebites, the problems encountered in emergencies and whether the patients follow current guidelines. In the literature, studies are mainly on clinical and laboratory findings of the patients exposed to snake bites.
MATERIALS and METHODS
The participants of this study were the physicians working in EDs in Turkey. This study includes a questionnaire aiming to investigate emergency physicians' information and experience and experience regarding snakebites as well as demographic characteristics. It also aims to identify the causes of shortcomings in the management of EDs.In this survey, the participants were queried on their age, gender, work period, job descriptions and institutions; and whether they had witnessed patients with snake bites, whether they used antivenom, the way they used them, and whether they encountered any difficulties during their use in the emergency service, and finally whether they used tetanus or antibiotics. Data were gathered by answering the questionnaire form traditionally, or sending the link of questionnaire installed on Google Drive through mail or WhatsApp between December 2015 and June 2016. The study has been approved by the Ethics Committee of Balıkesir University School of Medicine.
Statistical Analysis:
For the statistical analysis, SPS 21.0 is used. We have checked whether the data fits the Gaussian distribution through the Kolmogorov-Smirnov and Shapiro-Wilk tests. The demographic properties and the investigations of the general answers are determined through identification tests and reported as percentages. The categorical variables have been analyzed through the Chi-Square and Fisher exactness tests. The continuous variables, in the case of Gaussian type distributions, have been identified with the mean and the standard deviation; and in the case of non Gaussian type distributions, have been identified with the median and IQR, and p < 0.05 is assumed to be statistically meaningful.
RESULTS
A total of 611 physicians participated in the study. 63.8% (n = 390) were 34 years and under. In addition, 71.7% of the participants (n = 438) were male. As for title, 34.9% (n = 213) of the physicians were emergency medicine residents (EMR) 34.5% (n = 211) were emergency medicine specialists (EMS) and 30.6% (n = 187) were general practitioners (GPs). 40.4% of participants (n = 247) were working in state hospitals, and 50.6% (n = 309) were working in the emergency department for less than 5 years. The highest participation rate of the questionnaire was in the Marmara region (27.5% (n = 168)), whereas the lowest participation rate was in the Eastern Anatolia region (8.0% (n = 49)) ( Table 1). 19.0% of the physicians (n = 116) were regularly checking the snake antivenoms in the emergency room and the snake antivenoms were being mostly checked regularly by EMSs (55.2%) (p <0.001). 71.4% of the physicians (n = 436) intervened to a snake bite patient before.
DISCUSSION
Snake antivenom is the primary treatment for poisoning (7). Antivenoms are mainly used for some of the systemic and local complications (3). Mortality rates decreased under 1% in well-treated patients after snake antivenom were applied even if they were 5% to 25% before the use of snake antivenom (5), Antivenoms commonly used in our country are mostly derived from horse serum, and are effective against the viper, which is the most common type of snakes in Turkey. Two of the most popular antivenoms that are used in Turkey are produced in Egypt and Croatia, and one other is made in Turkey. Although antivenoms are mandatory to be held in emergencies according to the regulation of health ministry, the rate of "regular control of antivenoms" and "knowing the commercial name of the antivenom" was low. We think this situation may be due to the fact that snake bites are not common. Complications which may occur during the use of antivenoms are divided into two as early and late reactions. Early reactions can be classified as anaphylactoid reactions with urticaria, bronchospasm, and hypotension as well as simple febrile reactions during the application of antivenom resulting from pyrogens in poorly produced antivenoms (8). Up to 40% of patients who have early reactions also develop systemic anaphylaxis (9). Adrenaline, antihistamines and corticosteroids should be available for allergic reactions / anaphylaxis depending on the use of antivenom (6). Late reactions, which are commonly related to serum sickness, include lymphadenopathy, proteinuria, fever, itching and urticaria, and arthralgia. They develop one to two weeks after treatment. After the treatment with some antivenoms, the frequency may be as high as 75% (10). In this study, 82.3% of the participants stated that necessary measures should be taken to prevent possible complications before using antivenoms. This rate was the highest in EMSs, while it was the lowest in GPs, which was found to be statistically significant. (p = 0.001). We think that this situation may be related to emergency medical training and clinical experience. In the user manual of antivenoms used in our country, it was stated that antivenoms can be used as IV and HWE / HI. However, in the literature, the use of antivenoms as IV is recommended ((8, 11)8, 23). IV administration is a more effective method (12,13). Additionally, IV use is advantageous in controlling the infusion rate and enables easier cessation of antivenom administration (14). Subcutaneous or IM injection is not suggested (15) as IM use causes delayed and incomplete neutralization of venom components, lower bioavailability, and a longer time to reach peak concentration (16,17). Also, as well as local injections' slower neutralization of the poison, used antivenoms impair circulation by increasing pressure as the bitten regions are mostly feet or hands in which pressure is already high (18). We think that one of the most important results of this study was the answer to "How should antivenoms be used?" question. Only 48.9% of the participants expressed that antivenoms should be used as IV, while other participants stated the need for various methods of application. According to the gender of the participants, no statistically significant difference was found. However, when age, length of work, titles, institutions and geographical regions were considered, statistically significant results were obtained. (Table 2). No data or suggestions regarding IM use of antivenoms were encountered in either the user guide of the manufacturer or the literature. High rates of this application are quite thought-provoking. We believe that this application might be confused with applications such as rabies and tetanus immune globulin application. In addition, IM suggestions of the manufacturers may also mislead the physicians. HWE / HI application increases with the age. Additionally, HWE / HI use is higher in GPs in terms of the title, state hospitals in terms of institution, and in Marmara in terms of region. We believe that those results may be related that GPs and the older physicians do not follow current guidelines. In addition, the reason why HWE / HI use was high may be that GPs commonly work in state hospitals and snake bites are rare in the Marmara region because of the increasing industrialization and urbanization. Another result of this study is that most of the physicians (65.0%) hesitated to start antivenom. Additionally, 41.1% of the physicians need consultations to start antivenoms. This situation can be explained by the rarity of a snake bites and clinical inexperience. No consensus has been reached regarding which department should intervene the medical condition in emergency services (Burns, tendons, blood vessels and nerve injuries, etc.). That situation may cause a conflict between physicians in emergency services and other related physicians. This has a negative impact on patient care and emergency operations. To solve this problem, local solutions are usually adopted on the basis of hospitals. As snake bites require a multidisciplinary approach, problems are experienced in the management of after emergency. This study also confirmed this fact. 48.8% of the participants stated that problems were encountered in the hospitalization of those patients. Most of the admission problems were encountered in university hospitals (62.2%) which was statistically significant (p <0.001). We believe that the limited number of beds, and the time consuming consultation process are some of the factors which cause problems in University Hospitals. 40.6% of physicians stated that such patients were hospitalized in ARICU in their institutions. That rate was found to be quite lower in University Hospitals compared to other health institutions (28.7%). Additionally, these patients were mostly followed in emergency services in University Hospitals (40.2%), which was statistically significant (p <0.001). Which department should follow the patients is not clear. This situation can be considered as one of the reasons for the high rate in university hospitals in addition to bed and consultation problems mentioned before. To handle the situations which require hospitalization, intensive care units have been established in emergency departments of some universities and teaching and research hospitals. Routine use of tetanus are recommended for the treatment of snake bites (5). 94.4% of the participants of this study stated that tetanus prophylaxis should be questioned. According to the title, this rate was found as the highest in EMSs and the lowest in GPs, which was statistically significant. The routine use of antibiotics in the treatment of snakebite is controversial. Some sources suggest routine use of antibiotics in patients initiated antivenom (5), while some sources advice antibiotics to the patient to undergo surgical procedure (19). 78.2% of the participants suggested the use of antibiotics. The rate of antibiotic use was the highest in EMPAs, while it was the lowest in EMSs, which was statistically significant (P <0.001). A clinical staging has been established for the treatment of poisonous snake bites in emergency rooms (20). However, there is an uncertainty regarding the care of those patients after emergency services as snake bites require a multidisciplinary approach. So, we believe that a clinical algorithm should be established by the Ministry of Health and other specialty associations.
LIMITATIONS
The most important limitation of this study was the number of participants. The reason of this situation might be related to the reluctance of physicians to fill out a questionnaire on this issue and the misbelief "questioning the knowledge ". In addition, that the physician distribution is not homogeneous and the participation rates across regions are not balanced, and does not cover all of the geographic regions, and the absence of the Cronbach's alpha calculation of the survey, may be viewed as further limitations and deficiencies.
CONCLUSION
As a result, physicians working in emergency services are inadequate in the diagnosis and treatment of the patients who are exposed to snakebites and they experience various problems in the management after emergencies. Antivenom use is the most important method in treating those patients. However, the wide misuse of antivenoms is highly thought-provoking. Thus, these issues should be re-examined and addressed in detail in undergraduate and post graduate trainings. | 2020-05-07T09:10:28.110Z | 2020-01-01T00:00:00.000 | {
"year": 2020,
"sha1": "8719c03c8e6f4246c9c096c02e13e3becb5162ae",
"oa_license": null,
"oa_url": "https://doi.org/10.5505/ktd.2020.03764",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "bbf3c99e462f124e09fe179973de6d17ec9f1a25",
"s2fieldsofstudy": [],
"extfieldsofstudy": [
"Medicine"
]
} |
15195500 | pes2o/s2orc | v3-fos-license | Interference Alignment with Asymmetric Complex Signaling - Settling the Host-Madsen-Nosratinia Conjecture
It has been conjectured by Host-Madsen and Nosratinia that complex Gaussian interference channels with constant channel coefficients have only one degree-of-freedom regardless of the number of users. While several examples are known of constant channels that achieve more than 1 degree of freedom, these special cases only span a subset of measure zero. In other words, for almost all channel coefficient values, it is not known if more than 1 degree-of-freedom is achievable. In this paper, we settle the Host-Madsen-Nosratinia conjecture in the negative. We show that at least 1.2 degrees-of-freedom are achievable for all values of complex channel coefficients except for a subset of measure zero. For the class of linear beamforming and interference alignment schemes considered in this paper, it is also shown that 1.2 is the maximum number of degrees of freedom achievable on the complex Gaussian 3 user interference channel with constant channel coefficients, for almost all values of channel coefficients. To establish the achievability of 1.2 degrees of freedom we introduce the novel idea of asymmetric complex signaling - i.e., the inputs are chosen to be complex but not circularly symmetric. It is shown that unlike Gaussian point-to-point, multiple-access and broadcast channels where circularly symmetric complex Gaussian inputs are optimal, for interference channels optimal inputs are in general asymmetric. With asymmetric complex signaling, we also show that the 2 user complex Gaussian X channel with constant channel coefficients achieves the outer bound of 4/3 degrees-of-freedom, i.e., the assumption of time-variations/frequency-selectivity used in prior work to establish the same result, is not needed.
Introduction
The notion of degrees-of-freedom of a communication network, also known as the capacity prelog/multiplexing-gain/effective-bandwidth etc., is a fundamental concept in communication and information theory. Intuitively, it measures the number of independent signaling dimensions that are accessible in the network. Degrees-of-freedom characterizations are well known for Gaussian point-to-point, multiple-access, and broadcast channels (with no common messages), with or without multiple-antenna nodes. Much less is known about the degrees of freedom of interference networks, where the distributed nature of the network precludes joint processing of transmitted or received signals. With recent focus on capacity approximations for interference networks, the degrees of freedom characterizations have become especially important. This is because the degrees of freedom characterization also provides a first order capacity approximation, whose accuracy approaches 100% as the signal-to-noise power ratio (SNR) approaches infinity. The high SNR regime is where all desired and interfering signals are much stronger than the local noise at each receiver. The regime is of interest because it directly addresses the problem of interference, believed to be the principal limitation to the capacity of wireless networks.
The study of degrees of freedom of interference networks was pioneered by Høst-Madsen and Nosratinia [1], who showed that the two user interference channel has only one degree of freedom, even with cooperation between transmitters and/or cooperation between receivers, provided this cooperation takes place over Gaussian channels as well. For K user interference channels, Høst-Madsen and Nosratinia showed that it is not possible to achieve more than K/2 degrees of freedom. However, it was also conjectured in [1] that the outer bound is loose in general and interference networks have only 1 degree of freedom, regardless of the number of users. Intuitively, the conjecture supports the optimality of orthogonal medium access schemes (e.g. TDMA/FDMA) where each user is assigned a fraction of the channel's degrees of freedom (signaling dimensions) so that the sum of these fractions is equal to one.
Reference [2] showed that the intuition behind the Høst-Madsen-Nosratinia conjecture does not apply to complex (or real) Gaussian K user interference channels with time-varying/frequencyselective channel coefficients. It was shown that for these channels the total number of degrees of freedom is almost surely K/2. The key to this surprising result was the new idea of interference alignment, introduced for the X channel in [3,4] and for the interference channel in [2]. In particular, [2] introduced an explicit interference alignment scheme for K-user time-varying/frequencyselective interference channel, which comes arbitrarily close to the outer bound of K/2 degrees of freedom by coding over sufficiently long symbol extensions. However, since the Høst-Madsen-Nosratinia conjecture was made in the context of interference channels with constant, complex channel coefficients, the conjecture remained open. Evidently, the difficulty lay in determining the feasibility of interference alignment over constant channels.
Interference alignment, as defined in [3], refers to the construction of signals in such a manner that interfering signals cast overlapping shadows at each receiver while the desired signals remain distinct. The key to this construction is the relativity of alignment, i.e. signals align differently at each receiver. Because each receiver sees a different picture, it is possible for the same set of signals to align at one receiver where they are not desired and remain distinct at another receiver where they are desired. The interference alignment schemes proposed in literature can be broadly classified into two categories.
1. Signal Vector Space Alignment Schemes -Linear transmitter precoding and receiver combining operations are used to transform the interference channel into multiple non-interfering Gaussian channels. The relativity of alignment exploited here is the distinct linear transformation (channel matrix) between each transmitter-receiver pair, which makes sure that signal vectors are rotated differently on each link. The strength of this approach is that these schemes work for all values of channel coefficients with the exception of a subset of measure 0. The limitation here is the need for the assumption that each receiver sees different relative rotations of the input signal vectors. With multiple antennas, the different channel matrices provide these distinct rotations [3,5]. If multiple antennas are not present, the distinct rotations come from the diagonal channel matrices resulting from multiple channel uses over time-varying/frequency-selective fading channels [2,6]. However, if the channels are constant (i.e., not time-varying or frequency-selective) then the effective channel matrix resulting from multiple channel uses is simply a scaled identity matrix, which does not rotate the signal vectors at all. Since the signal vectors align the same way at each receiver, interference alignment is not possible without aligning the desired signal with the interference as well. Therefore these schemes have not been effective for interference channels with constant channel coefficients.
Signal Level Alignment
Schemes -This approach relies on structured coding, e.g., multilevel or lattice codes, to align interference in the signal "level" space. The relativity of alignment exploited here comes from the distinct scaling of signals between each transmitter-receiver pair. Due to the different scaling factors, signal levels align differently at each receiver. Examples of this approach include [7,8,9,10], all of which address constant interference channels. The strength of this approach is its ability to achieve alignment for some constant channels. An apparent weakness may be that since these approaches are derived from the deterministic channel models of [11], they inherit some of the limitations of the deterministic models as well. In particular, deterministic channel models have proven very useful for studying channels with, essentially, real channel coefficients. However, for channels where channel phase and vector alignments play an important part, deterministic models have not been as useful.
Another limitation of interference alignment over signal levels is that so far these approaches have been shown to have degrees-of-freedom benefits only for channel coefficients over a subset of measure 0, i.e., only for special cases.
We summarize here the key degrees-of-freedom results for signal level alignment schemes. A multilevel coding based interference alignment scheme was proposed in [9]. The scheme was shown to achieve more than 1 degree of freedom for interference channels where desired channel coefficients are of the form Q e and interfering channel-coefficients are of the form Q o , where e, o are any even, odd integers, respectively, and Q is a large number (relative to the number of users K). However, the special structure assumed on the channel coefficients meant that this scheme was only restricted to channel coefficient values that constitute a subset of measure 0. Taking the idea of interference alignment in signal-level further, Etkin and Ordentlich [10] proposed a sophisticated lattice alignment scheme for the class of interference channels where the direct channel coefficients are algebraic irrationals and the cross-channel coefficients take rational values. Using results from diophantine approximation theory they showed that a lattice scaled by an algebraic irrational factor "stands out" from a lattice scaled by a rational factor allowing a separation of signal and interference. This scheme proved the achievability of the full K/2 degrees of freedom for a dense set of channel coefficients. However, the assumptions on the channel coefficient values (e.g. rationals and algebraic irrationals) restricted its scope to, once again, a subset of measure zero, and the validity of Høst-Madsen-Nosratinia conjecture remained unknown for almost all channel coefficient values.
While much of the interference alignment work for constant channels has focused on achieving more than 1 degree-of-freedom, i.e., there are at least two results that provide a counterpoint by identifying conditions under which the degrees of freedom of a K user interference channel may be limited to values less than K/2. Reference [12] provides conditions under which a fully connected 3 user complex Gaussian interference channel with constant coefficients can have only 1 degree of freedom. The conditions are re-stated in this paper in Theorem 3 to provide a comparison with analogous conditions that emerge out of this work. Another limiting result, shown in [10], is that for real Gaussian interference channels where all channel coefficients are rational, K/2 degrees of freedom are not achievable. Like the alignment schemes, these results are also limited to channel coefficient values over a subset of measure 0.
With one exception, the importance of channel phase is ignored in nearly all of the interference alignment schemes presented so far. The exception comes from [2] where the following example is presented to introduce the concept of interference alignment.
Phase Alignment Example [2]: Consider a symmetric interference channel where all direct channel-coefficients are equal to 1 and all cross-channel coefficients are equal to j = √ −1. If the additive white Gaussian noise power at each receiver is normalized to unity and the transmitted signal power at each transmitter is limited to SNR, then the exact sum-capacity of this interference channel is shown to be K 2 log(1 + 2SNR) bits/channel-use. The phase-alignment example described above, is the starting point for a new direction that we pursue in this paper. First, we note that wireless channels are invariably modeled with complex (instead of real) channel coefficients, inputs and noise to capture both the in-phase and quadraturephase signaling dimensions. Moreover, the Høst-Madsen-Nosratinia conjecture is made in the context of complex channels and therefore must be proved or disproved in the same setting. Further, the complex model offers a richer signal space and therefore may not suffer from some of the degrees-of-freedom limitations associated with real models. For example, while [10] shows that K/2 degrees of freedom are not achievable with rational coefficients in the real Gaussian interference channel, the phase-alignment example shows that the complex Gaussian interference channel, even with coefficients that have rational (in fact, integer) real and imaginary parts, can still achieve the full K/2 degrees of freedom. Unfortunately, like all other examples described earlier, the phase alignment example of [2] is also restricted to a special choice of channel coefficients and leaves the Høst-Madsen-Nosratinia conjecture open for almost all channel coefficients.
A New Idea -Asymmetric Complex Signaling
In this section we summarize an important new idea that emerges from this work -the need for asymmetric complex signaling.
Consider the input optimization problem for interference networks. Suppose we restrict the achievable schemes to Gaussian inputs. Because we do not have multiple antennas, there is no input covariance matrix to optimize. The input optimization therefore appears to be limited to only a power optimization. Now consider the following two questions.
• Symbol Extensions -Can we do better by transforming the complex scalar input optimization problem to a complex vector input optimization problem by considering multiple, say M , channel uses as one M -dimensional complex super-symbol?
Input optimization in this case becomes the problem of optimizing the M ×M input covariance matrix of the M dimensional complex input Gaussian vector.
• Asymmetric Complex Signaling -Can we do better by transforming the M dimensional complex system to a 2M dimensional real system and optimizing inputs over the 2M real dimensions?
Input optimization in this case becomes the problem of optimizing the 2M × 2M input covariance matrix of the 2M dimensional real input Gaussian vector.
Since the channels are constant across channel uses, the intuitive answer here may be that symbol extensions are not going to be useful. Indeed we do not see any benefits of symbol extensions in point to point, MAC or BC channels, even in the MIMO setting. Interestingly symbol extensions do help, even with constant channel coefficients, in the MIMO compound broadcast channel [13], the MIMO X channel [3] and the MIMO interference channel [2]. However, in our case since we do not have multiple antennas it is not immediately obvious if symbol extensions will be useful.
The second possibility, of asymmetric complex signaling, goes against the generic intuition that favors circularly symmetric Gaussians. In wireless communication theory we typically come across only circularly symmetric complex Gaussian random variables. The additive noise is invariably modeled as circularly symmetric complex Gaussian. The most commonly studied channel fading model, Rayleigh fading, refers to circularly symmetric complex Gaussian channel coefficient values. More importantly, since our interest is in optimal (capacity-achieving) input distributions, circularly symmetric complex Gaussians are omnipresent there as well. For complex Gaussian point-to-point, multiple-access and broadcast channels with constant channel coefficients, with or without multiple antennas, capacity-achieving input distributions are circularly symmetric complex Gaussian. Intuitively, the reason is that circularly symmetric complex Gaussian random variables maximize entropy for a given second moment [14]. We are not aware of any works on capacity/rate optimization for complex Gaussian wireless networks where asymmetric Gaussian inputs outperform circularly symmetric Gaussian inputs -with one important exception, and that brings us back to the phase alignment example of [2].
The capacity achieving scheme for the phase-alignment example requires each transmitter to use only real valued Gaussian inputs, as opposed to circularly symmetric complex Gaussian inputs. This choice of input signals ensures that interference at each receiver aligns in the imaginary dimension while the desired signal is received free from interference in the real dimension of the complex received signal space. However, since the phase-alignment example assumes very specific values of channel coefficients, it is also not obvious if it extends to arbitrary values of channel coefficients.
Aside from settling the Høst-Madsen-Nosratinia conjecture, the main contribution of this work is to establish the need for asymmetric complex signaling (and symbol extensions), not only for some special cases, but for almost all values of complex channel coefficients. The achievable scheme proposed in paper relies on both channel extensions and asymmetric complex signaling, and is shown to achieve at least 1.2 degrees of freedom for almost all complex Gaussian interference channels with 3 or more users. Notably, circularly symmetric Gaussian inputs can only achieve 1 degree of freedom on this channel. Further, because our achievable scheme uses only simple beamforming with every receiver treating interference as noise, it shows that asymmetric complex signaling and symbol extensions are important not only for capacity characterizations but also for practically motivated rate optimization problems where the receivers do not have multi-user detection capabilities, e.g. [15].
Channel Model
For the complex Gaussian interference channel with K users, the signal received at receiver r during the n th channel use, is expressed as Z r (n) represents independent identically distributed (i.i.d.) zero mean unit variance circularly symmetric complex Gaussian noise terms. X t (n) is the signal sent from transmitter t. H rt = |H rt |e jφrt is the complex channel coefficient between transmitter t and receiver r, whose value is held constant across channel uses. All nodes have only a single antenna each, so that the signals, channel coefficients and noise are complex scalars. The transmit power constraint is represented as: As usual, in the K user interference channel, transmitter k has message W k for receiver k, k ∈ {1, 2, · · · , K}. All messages are independent. The probability of error P e , achievable rates R 1 , R 2 , · · · , R K and sum-capacity C Σ (SNR) of the interference channel are defined in the standard Shannon sense. The number of degrees of freedom d is defined as: We also use an alternative representation for equation (1) in terms of only real quantities as: Thus, bold font is reserved for complex quantities while the real representations of the same variables use normal font. Note that while the complex quantities, e.g., Y r (n), H rt , are scalars, the real counterparts, Y r (n), H rt , etc., are matrices. U (φ) is a rotation matrix with the properties: To avoid cumbersome notation, we will drop the channel-use index "n" unless necessary to avoid ambiguity.
With the exception of Corollary 1 (which is a trivial generalization of Theorem 2 to more than 3 users) in this paper we focus primarily on the K = 3 user interference channel.
Phase Alignment
For the 3 user constant MIMO interference channel where each node is equipped with M = 2 antennas, an explicit interference alignment solution is found in [2] that achieves the outer bound of 3/2 degrees of freedom. For our case, we have only single antenna nodes. However, viewing complex numbers as two-dimensional vectors, the channel input-output equations (4) are analogous to an interference channel where each node is equipped with two antennas. A natural question then is to determine if the MIMO interference alignment solution can directly translate into a generalized phase alignment scheme for all channel coefficients over a subset of non-zero measure. In this section, we answer this question in the negative.
In order to achieve a total of 3/2 degrees of freedom, each user in the 3 user interference channel must achieve 1 degree of freedom over a 2 dimensional space. Since the equations (4) already represent a 2 dimensional space, we only need to design the signal vectors along which each user can achieve 1 degree of freedom. In other words, we need to design the real vectors V 1 , V 2 , V 3 , each of dimension 2 × 1 such that: Here, V 1 , V 2 , V 3 are the precoding vectors, optimized for the channel coefficient values, but independent of the messages, while x 1 , x 2 , x 3 represent the real-scalar codewords which carry the messages. Now consider receiver 1. The desired signal is received along the vector H 11 V 1 while interference arrives along the vectors H 12 V 2 and H 13 V 3 . In a 2 dimensional signal space, in order to leave one interference-free dimension for the desired signal, the two interfering signals must span only a one-dimensional space. This means: Similarly, at receiver 2, interference from transmitters 1 and 3 must align, and at receiver 3, interference from transmitters 1 and 2 must align, (15) is obtained by substituting (10) into (14). (16) is obtained by substituting (12) into (15). The solution is formalized in the following theorem.
Theorem 1 The 3-user complex Gaussian interference channel with constant channel coefficients has 3/2 degrees of freedom if where a = 0 mod(π) if and only if a is an integer multiple of π.
Proof: Based on our channel model (4) the vector V 1 must have real elements. (16) requires that the real vector V 1 is an eigenvector of a rotation matrix U (φ). The rotation matrix U (φ) has real eigenvectors only if φ is an integer multiple of π. This gives us condition (18). The remaining conditions are easily verified to be necessary to make sure that the desired signal vector is linearly independent of the interference vector at each receiver. Remark: Because of the constraint (18), the solution is once again restricted to a subset of channel coefficient values that has measure 0 and the validity of the Høst-Madsen-Nosratinia conjecture is not determined.
Achievability of 1.2 Degrees of Freedom
Consider the 5 symbol extension of the 3 user complex Gaussian interference channel with constant channel coefficients.
Thus, we have a 5 dimensional complex signal space, or equivalently, a 10 dimensional real signal space.
where ⊗ indicates the Kronecker product operation. U (φ) is a block-diagonal matrix with the 2 × 2 block U (φ) repeated along the main diagonal. Clearly, U (φ) also satisfies the properties: Within this 10 dimensional real signal space, each transmitter sends 4 separately encoded real streams along 4 linearly independent real vectors.
1 are the four signaling vectors used by transmitter 1 to send 4 separately encoded scalar real codeword symbols x 1 1 (n), x 2 1 (n), x 3 1 (n), x 4 1 (n). The signal vectors for each transmitter are similarly defined. Since each transmitter sends 4 streams, the total number of streams sent is 12. Sending 12 real streams over 10 real symbols, or equivalently 6 complex streams over 5 complex symbols, means that if these streams can be separated from the interference and from each other, then a total of 6/5 = 1.2 degrees of freedom are achieved per channel use.
Interference alignment is needed to accomplish this objective. Consider receiver 1. Out of the 10 real dimensions available to the receiver, 4 are needed for his desired signals, leaving no more than 6 dimensions for interference. Since there are 8 interfering signals we need two alignments at each receiver to make sure that interference spans only 6 real dimensions. For receiver 1 we choose the following.
For receiver 2 we choose the following alignments.
Similarly, for receiver 3 we choose the following alignments.
Equations (25)-(30) ensure that at each receiver, interference cannot span more than 6 dimensions.
Suppose at each transmitter t = 1, 2, 3, we choose the first two 10 × 1 signaling vectors V 1 t , and V 2 t randomly according to some continuous distribution. The remaining two signaling vectors at each transmitter V What remains to be shown is that at each receiver the desired signal vectors are linearly independent among themselves and also from the interference. Without loss of generality we show this for receiver 1. The same argument applies at each receiver due to the symmetry of the signaling scheme.
Suppose the arguments of the sin(·) functions are not-integer multiples of π. Then all a i must be zero. This proves that the real signal vectors at receiver 1 are linearly independent among themselves and from the interference-subspace. Thus, each desired signal vector can be projected into the null space of the rest of the desired and interfering signal vectors to achieve one-degree of freedom per desired signal vector. By symmetry, the same arguments can be applied to each receiver. Overall, we are able to achieve 12/10 degrees of freedom. Thus we have shown the main result of this paper, as stated in the following theorem. 56) have interesting similarties to the following singularity conditions identified in [12] and re-stated here in our context.
It is interesting to note that the same phase expressions appear in theorems 2 and 3. Consider, for example, the interference channel where all channel coefficients have magnitude 1, i.e. h rt = 1, ∀r, t ∈ {1, 2, 3}. Then, theorems 2 and 3 can be used to identify all channels (i.e. the channels that have only 1 degree of freedom), except those cases where at least one of the phase expressions is an odd multiple of π and none of the phase expressions is an even multiple of π. One such scenario is the 3 user interference channel with H rt = 1 if r = t and H rt = −1 if r = t, ∀r, t ∈ {1, 2, 3}.
Upper bound
The best known degrees-of-freedom outer bound for the fully connected 3-user complex Gaussian interference channel with constant coefficients is 3 2 . Stronger outer bounds are only known for special cases, such as Theorem 3 and reference [10] where it is shown that 3/2 degrees of freedom are not achievable when all channel coefficients are real and rational. In this section we show that the class of linear interference alignment schemes considered in this work cannot achieve more than 1.2 degrees of freedom for almost all complex Gaussian 3user interference channels with constant channel coefficients. Note that this does not preclude the existence of other schemes that may surpass 1.2 degrees of freedom, and even achieve the outer bound of 3/2 degrees of freedom. In fact the existence of such schemes is already shown in [9,10] as well as in Theorem 1 in this paper. However, all these cases constitute a subset of measure 0 over the set of all possible values of complex channel coefficients.
Lemma 1 For any given complex vector V, and for any given angles φ, β, such that, there exist real constants (c 1 , c 2 ) ∈ R 2 such that Proof: It suffices to show that there exist real constants (c 1 , c 2 ) such that Writing the real and imaginary parts separately, we have the following equations.
is invertible, i.e., has a non-zero determinant. But the determinant of this matrix is sin(β − α), which is guaranteed to be non-zero by (63).
Limitations of the Linear Interference Alignment Scheme
Consider a generalization of the interference alignment scheme used in Section 3. Instead of a 5 symbol extension, suppose we take an S symbol extension, so that the total number of signaling dimensions available at each transmitter or receiver is equal to S complex dimensions or, equivalently, 2S real dimensions. Instead of every user sending 4 real, independently encoded streams along 4 linearly independent real signal vectors, suppose users 1, 2, 3 send d 1 , d 2 , d 3 real, independently encoded streams along d 1 , d 2 , d 3 linearly independent real signal vectors, respectively. As in Section 3, in order to achieve a total of d 1 +d 2 +d 3
2S
degrees of freedom, the received signal vectors for the desired signals must be linearly independent of the received signal vectors carrying interference. The following lemma states a limitation of the alignment scheme.
Also, suppose Then V 1 1 cannot be linearly independent of the interference at receiver 1. and In other words, any given signal vector cannot align with the interference at more than one undesired receivers without becoming aligned within the interference-space at its own desired receiver. Note that if the signal vector becomes aligned within the interference-space at its own desired receiver, then it is useless from a degrees of freedom perspective, i.e., it cannot provide an interference-free signaling dimension. Proof: Given (69) and (70), we wish to show (78). We can express (69) and (70), equivalently, as From Lemma 1 we know that there exist real constants (c 1 , c 2 ) such that because the condition is satisfied by assumption. Substituting from (79), (80) into (81) we have, with a s = c 1 a s , s ∈ {1, 2, · · · , d 3 } which proves the statement of Lemma 2 Lemma 2 highlights a key limitation of the type of linear alignment schemes described in this section. This limitation is formalized in the following theorem.
Theorem 4 With the class of linear interference alignment schemes described in this section, the 3 user complex Gaussian interference channel with complex channel coefficients, cannot achieve more than 1.2 degrees of freedom except over a subset of channel coefficient values of measure 0.
Intuitively, the significance of the number 1.2 can be understood as follows. Consider any signal vector that delivers a coded data stream with one degree of freedom to its desired receiver. It occupies one dimension at its desired receiver. It can share a dimension with an interference vector at one of the receivers where it is undesired, i.e. it can align with interference at one undesired receiver. However, as shown by Lemma 2, it cannot align with interference at the remaining undesired receiver. Thus, it occupies one dimension each at two receivers and half a dimension at the third receiver. The average number of dimensions needed to deliver one degree of freedom is, therefore, (1+1+0.5)/3 = 2.5/3. Conversely, the maximum number of degrees-of-freedom delivered per dimension is 3/2.5 = 1.2. A detailed proof is presented next. Proof: Consider the generalized linear interference alignment scheme, where users 1, 2, 3 send d 1 , d 2 , d 3 real, independently encoded streams along d 1 , d 2 , d 3 real signal vectors in a 2S (real) dimensional vector space created by an S symbol extension of the complex Gaussian interference channel with constant channel coefficients. Because of Lemma 2, we can divide each users' signal space into three disjoint sets. Consider user i. The d i dimensional (real) signal space occupied by transmitter i's signals is represented by the span of the d i linearly independent columns, This vector space can be divided into three disjoint subspaces, spanned by the columns of ∀i, j ∈ {1, 2, 3}. The partition of the signaling spaces is based on how they align with interference at their unintended receivers. Thus, V 12 is the part of the signal space from transmitter 1 that aligns with the interference from transmitter 2 at receiver 3, V 13 is the signal vector subspace from transmitter 1 that aligns with the interference from transmitter 3 at receiver 2, and the remaining subspace V 11 does not align with interference from any other transmitter at any receiver. The partitioning of signal spaces for transmitter 2 and 3 follows the same interpretation.
Since the signal vectors sent by each transmitter are linear independent among themselves and the channel matrices are invertible, it is easily seen that the following must be true.
Note that the partitions of signal spaces outlined above are disjoint. Thus, e.g., there is no subspace of user 1's transmitted signal space that aligns with transmitter 2's interference at receiver 3 and also aligns with transmitter 3's interference at receiver 2. This is because Lemma 2 states that such vectors will not be separable from the interference at the desired receiver 1. Since these vectors do not provide interference-free signaling dimensions for user 1, they do not contribute to the degrees of freedom and can be eliminated, as done in the formulation presented above. Now consider receiver 1. Let us count the total number dimensions spanned by the received signals. The desired signal must be linearly independent of the interference, so it occupies d 1 dimensions. The interfering signals from transmitters 2 and 3 have an overlap of d 23 = d 32 dimensions, so together they occupy d 2 + d 3 − d 23 dimensions. Since the total number of dimensions is 2S, we must have d 1 + d 2 + d 3 − d 23 ≤ 2S. Following similar arguments for receivers 2 and 3 we have the following conditions. (98) Adding these constraints we obtain Using (97) we bound the second term as follows.
Substituting (103) into (101), we obtain Thus, the total number of degrees of freedom achieved for the K = 3 user complex Gaussian interference channel with constant coefficients is no more than 1.2 except over a subset of channel coefficient values of measure 0.
Asymmetric Complex Signaling -Applications
While we introduce the asymmetric complex signaling scheme in this paper with the primary goal of settling the Høst-Madsen-Nosratinia conjecture, the new signaling scheme has broad applications beyond this immediate objective. In this section, we provide a few examples.
Rate Region with Interference as Noise
There is some interest in characterizing the rate region of the interference channel that is achievable by treating interference as noise. For example, [15] characterized the Pareto boundary of the MISO interference channel rate region under this assumption. Using our notation, the basic model for the interference channel used in [15] can be represented as following.
where Y k is the received complex signal vector, H kk is the matrix of complex channel coefficients, V k is a beamforming vector, Z k is the circularly symmetric additive white Gaussian noise vector, and x k is the circularly symmetric complex Gaussian codeword symbol. The achievable rates are then described as: The model described above does not allow the following possibilities.
Interference Alignment
As shown in this paper, all of these factors have a significant impact on the achievable rates of interference channels, even with every receiver treating all interference as noise. Since the single antenna interference channel model studied in this paper can be seen as a special case of the MISO interference channel, and the signaling scheme used in this work also treats interference as noise, it is clear that the rates (107) are suboptimal for the interference channel with single user receivers. In other words, the rate region of interference channels achievable while treating interference as noise is strictly larger than previous characterizations. Interference alignment, channel extensions and most importantly, asymmetric complex signaling will play an important role in solving this problem. Another related issue is the design of iterative schemes to optimize achievable rates for interference channels, often with the same assumption -treating interference as noise. Even for iterative schemes that do not ignore the possibility of interference alignment, such as the algorithms presented in [16] factoring asymmetric complex signaling into the iterative algorithm may provide higher rates, and as shown in this paper, possibly even higher degrees of freedom.
The 2 User X Channel
The 2 user X channel [17] is the same physical channel as the 2 user interference channel. However, in the X channel there are four independent messages, with a message from each transmitter to each receiver. The input-output relationship of the X channel follows from (1) as for r = 1, 2. Like the interference channel, the X channel can be equivalently represented using real inputs and outputs as The message from transmitter i to receiver j is indicated as W ji . The rates, capacity and the degrees of freedom of the X channel are defined in a manner similar to the interference channel.
The study of the degrees of freedom of complex Gaussian X channel with constant channel coefficients was pioneered by [4] who showed that if each node is equipped with M antennas then a total of 4M 3 degrees of freedom are almost surely achievable. This was a surprising result because the interference channel with the same number of antennas has only M degrees of freedom [18]. The additional degrees of freedom were attributed to an implicit overlap of interference spaces achieved by iterative optimization of transmitters and receivers in [4]. This observation lead to the first explicit interference alignment scheme, introduced in [3]. [3] showed that the constant X channel achieves (almost surely) 4M 3 degrees of freedom when the nodes have M antennas each. The improvement from 4M 3 to 4M 3 comes not only from the explicit interference alignment scheme but also from a novel idea of channel extensions that was introduced in [13] for the compound broadcast channel and in [3] for the X channel. For the case where each node has only a single antenna, M = 1, [3] introduced the idea of channel extensions over time-varying/frequency-selective channels to achieve the outer bound of 4/3 degrees of freedom -this idea was taken further in [2,6] to establish the degrees of freedom of interference and X networks. However, even with channel extensions the achievability of 4/3 degrees of freedom could not be shown for the constant X channel where each node has only a single antenna. The problem, as we show next, was that the achievable scheme was restricted to circularly symmetric signaling. The following theorem shows that with asymmetric complex signaling, the outer bound of 4/3 degrees of freedom is indeed achievable for the 2 user complex Gaussian X channel with constant channel coefficients, for almost all values of channel coefficients.
Theorem 5
The X channel has 4/3 degrees of freedom if Proof: The converse is proved in [3]. For achievability, we consider a 3 (complex) symbol extension of the channel. This channel may be expressed as where X t (n), Y r (n), Z r (n) are 6 × 1 vectors representing the input, output and additive Gaussian noise respectively over the extended channel. U (φ rt ) represents the block-diagonal channel matrix determined by φ rt -the phase of the channel gain between transmitter t and receiver r. Over this extended channel, 2 interference free streams are achieved for each of the 4 messages using beamforming. Now, let V ij be the 6 × 2 matrix whose columns are used by transmitter j as beamforming directions for message W ij . The achievable scheme mimcs the scheme provided for time-varying channels in [3], i.e., the vectors are chosen so that, all vectors meant for receiver 1 align at receiver 2 and vice-versa. Specifically, matrices V 11 , V 21 are chosen randomly from any continuous distribution. Then V 12 , V 22 are chosen to satisfy the following alignment conditions The above equations ensure that the 4 interfering vectors at receiver i ∈ {1, 2} align into the 2 dimensional space represented by U (φ i1 )V j1 where j = i. Thus a receiver can resolve the 4 dimensions corresponding to the desired streams, provided that the desired signal space is linearly independent of the interference signal space. Therefore, at receiver 1, we need to ensure that the following 6 vectors are linearly independent.
where V 1 ij and V 2 ij are 6 × 1 column vectors representing the two columns of V ij . To show that the above set of equations are linearly independent, assume the contrary, i.e., assume that there exist real constants a 1 , a 2 , . . . , a 6 not all equal to zero, so that Now, using (111) above and simplifying, we get Since φ 11 + φ 22 = φ 12 + φ 21 mod (π), noting that a i , i = 1, 2, . . . , 6 are real and taking the imaginary part of the above equation, we get 1 11 , V 2 11 are 6 × 1 vectors chosen randomly, they are linearly independent almost surely. Therefore, we get a 3 = a 4 = 0. Using this in (112), we get Thus we have a i = 0, i = 1, 2, . . . , 6. This implies that the desired signal dimensions are linearly independent of the interfering dimensions almost surely at receiver 1. Further, by symmetry of construction, we can claim that the 4 desired signal streams are linearly independent of the 2 interfering directions at receiver 2 as well. This ensures that 8 interference free streams are achievable over 6 real dimensions over the extended X channel. Thus the number of degrees of freedom achieved per channel use is 4/3.
Cognitive X Channel
Without loss of generality, an X channel with a cognitive receiver (transmitter) is one where, e.g., a genie provides receiver (transmitter) 2 with the message W 11 . It is shown in [3] that for the complex Gaussian X channel with a cognitive receiver (transmitter) and constant channel coefficients, if each node has M > 1 antennas, then the number of degrees of freedom is 3/2. The question is left open for M = 1 in [3] if the channel coefficients are constant. However, it is easily seen that using asymmetric complex signaling, 3/2 degrees of freedom are achieved even for M = 1, for both cognitive transmitter, and cognitive receiver X channel models. The achievable scheme is essentially identical to the one proposed in [3] for M > 1, except the multiple signaling dimensions come not from multiple antennas but are inherent in the complex symbols, and the alignment happens in channel phase. We summarize the asymmetric complex signaling based alignment scheme for the cognitive receiver case as follows. Set the rate for message W 12 to zero, and view a complex symbol as a two dimensional vector space of real vectors. Now, transmitters 1 and 2 send messages W 21 , W 22 encoded with real Gaussian codebooks, and phase rotated so that they are aligned at receiver 1. Because of the relativity of alignment, these vectors are almost surely separable at receiver 2. Transmitter 1 also separately encodes and sends W 11 so that it is received orthogonal to the aligned interference vectors at receiver 1. Receiver 2 is able to eliminate the interference from the codeword for W 11 because it knows W 11 . Thus, a total of 3 real signaling dimensions are created over one complex symbol, i.e. over two real symbols, i.e., 3/2 degrees of freedom are achieved. The cognitive transmitter case follows similarly by a combination of asymmetric complex signaling and the achievable scheme proposed in [3].
Cellular Application -Interfering Uplinks
Consider two interfering 2-user multiple access channels (See Fig. 3). This channel can be used to model two interfering cells in a cellular network [19]. The channel has 4 transmitters and 2 receivers with input-output relations as below.
Transmitters 1, 2 each have a message for receiver 1 and transmitters 3, 4 have a message for receiver 2. Let W i represent the message present at tramsnmitter i. The power constraint, rate-region, degrees of freedom of this channel are in the same manner as the interference channel. We indicate the rate and the degrees of freedom of message W i by R i and d i respectively, for i = 1, 2, 3, 4. We characterize the degrees of freedom of the interfering multiple access channels below.
Theorem 6
If Then the interfering uplinks model described above has 4 3 degrees of freedom. We show the following relations Summing all the relations above, we can bound the total number of degrees of freedom of the channel by 4/3. We only show the first inequality here. The remaining 3 inequalities follow from symmetry. Now, to show the inequality, set W 2 = φ. Note that setting certain messages to null does not decrease the degrees of freedom achieved by other messages [3]. Now, let a genie provide W 3 , W 4 to receiver 1. Receiver 1 can now cancel the signals from transmitters 3, 4 to obtain Y 1 (n) = h 11 U (φ 11 )X 1 (n) + Z 1 (n) Note that receiver 1 can decode W 1 using Y 1 . Now, using any achievable scheme, receiver 2 can decode W 3 , W 4 and therefore cancel the effect of X 3 , X 4 to obtain Since all noise variables are circularly symmetric, by reducing the noise variance of Z 2 sufficiently, we can ensure that Y 1 is a degraded version of Y 2 . Note that reducing the noise variance does not reduce the degrees of freedom region. Receiver 2 can now decode W 1 as well, so that the rates of the messages W 1 , W 3 , W 4 lie in a multiple access channel formed at an enhanced receiver 2. Since the multiple access channel has 1 degree of freedom, we can write This completes the proof of the converse
Proof of Achievability:
Consider a 3 symbol extension of the channel. This channel may be expressed as where X t , Y r , Z r are 6 × 1 vectors representing the input, output and additive Gaussian noise respectively over the extended channel. U (φ rt ) represents the block-diagonal channel matrix determined by φ rt -the phase of the channel gain between transmitter t and receiver r. Over this extended channel, 2 interference free streams are achieved for each of the 4 messages using beamforming. Let V j be the 6 × 2 matrix whose columns are used by transmitter j as beamforming directions for message W j . Like the X channel vectors are chosen so that, all vectors meant for receiver 1 align at receiver 2. Specifically, let V 1 , V 3 be two 6 × 2 real matrices chosen randomly from any continuous distribution. Then V i are chosen to satisfy the following alignment conditions Note that the above equations ensure that all 4 vectors at receiver i ∈ {1, 2}, align in a 2 dimensional space. Now, all we need to ensure is that at receiver i, the desired signalling directions are linearly independent of the interfering directions. Now, consider the vectors received at receiver 1.
U (φ 13 )V 3 , U (φ 11 )V 1 , U (φ 12 )V 2 Note that the above vectors can be equivalently represented as We need to show that all the 6 vectors are linearly independent of each other. The proof is similar to the X channel. In other words, consider real constants a i , i = 1, 2, . . . 6 such that a 1 U (φ 13 )V i are the two column vectors of V i . Now, the argument that a i = 0, i = 1, 2, 3 . . . 6 is almost identical to the argument presented for the X channel and we will only present the outline here for brevity. Now, we consider 2 cases. Case 1 : sin(φ 13 − φ 11 ) = 0 Then, mutliplying (118) by U (−φ 11 ) and equating the imaginary part to 0, we get 6 equations in a 1 , a 2 , a 5 , a 6 since sin(φ 11 + φ 22 − φ 21 − φ 12 ) = 0. Since vectors are chosen randomly, we can show that a 1 = a 2 = a 4 = a 6 = 0. Then, using this in (118) again, we can show that a 1 = a 2 = 0. Case 2 : sin(φ 13 − φ 11 ) = 0 Note that this means that cos(φ 13 − φ 11 ) = 0. Then, equating the imaginary part of the left-handside of (118) to 0, we can show that a 5 = a 6 = 0. Plugging this back into (118) and taking its real part, we can show that a 1 = a 2 = a 3 = a 4 = 0. This shows that the desired signals are linearly independent of the interference at receiver 1. By symmetry of construction, we can show that the desired vectors are linearly independent of the interference at receiver 2 as well.
We settle the Høst-Madsen-Nosratinia conjecture in the negative, by establishing that complex Gaussian interference networks with more than 2 users and constant channel coefficient coefficients have at least 1.2 degrees of freedom for almost all values of channel coefficients. The achievability of 1.2 degrees of freedom is based on interference alignment with only 3 simultaneously active users employing asymmetric complex signaling over supersymbols consisting of 5 complex channel symbols. The main limitation of this alignment scheme, for the 3-user case, is that each signal vector can only align with interference at no more than one undesired receiver, which translates into the maximum of 1.2 degrees of freedom for the 3 user interference channel. The scheme is shown to achieve the degrees-of-freedom outer bound for the 2 user complex Gaussian X channel with constant coefficients, thereby improving upon previous results which relied on time-varying/frequency-selective channel coefficients. An interesting feature of this alignment scheme is that it is concerned only with the phase and not at all with the magnitudes of the channel coefficients. Remarkably, this is the opposite of all previously considered signal level based alignment schemes for constant channels, which are concerned primarily with the magnitudes of the channel coefficients and are essentially restricted to real channel coefficients.
The degrees-of-freedom of real Gaussian interference channels with constant channel coefficients remain open for almost all channel coefficient values. However, because wireless channels are invariably modeled as complex Gaussian, the more interesting question may be to determine if more than 1.2 degrees of freedom can be achieved for non-negligible subset of complex Gaussian constant channels. For K = 3 users, it may require smart ways of combining signal level alignment schemes (that exploit the variety of channel magnitudes) and signal vector space alignment schemes (that exploit the variety of channel phases) . For more than 3 users, it will be interesting to determine the limitations of interference alignment with asymmetric complex signaling over constant channels. Using asymmetric complex signaling to improve existing interference alignment schemes [2] as well as to design more efficient iterative algorithms [16] are also promising directions for future work.
Beyond the degrees of freedom problem, the key new insight to emerge from this work is the idea of asymmetric complex signaling. We expect this fundamental idea may have a variety of applications and point out some examples in this paper. In conclusion, along with interference alignment [2], need for channel extensions [3], and inseparability of parallel interference channels [12], the need for asymmetric complex signaling can be added as yet another essential piece of the puzzle that is the capacity of interference networks. | 2009-04-01T23:11:04.000Z | 2009-04-01T00:00:00.000 | {
"year": 2009,
"sha1": "2f79b3a477ad81641c7c962d2506787515412d5f",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/0904.0274",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "8374295b5104e1dbc25497b6d4a7d458ac5f8d74",
"s2fieldsofstudy": [
"Computer Science"
],
"extfieldsofstudy": [
"Computer Science",
"Mathematics"
]
} |
18046419 | pes2o/s2orc | v3-fos-license | Comparative Genomic Analysis of the GRF Genes in Chinese Pear (Pyrus bretschneideri Rehd), Poplar (Populous), Grape (Vitis vinifera), Arabidopsis and Rice (Oryza sativa)
Growth-regulating factors (GRFs) are plant-specific transcription factors that have important functions in regulating plant growth and development. Previous studies on GRF family members focused either on a single or a small set of genes. Here, a comparative genomic analysis of the GRF gene family was performed in poplar (a model tree species), Arabidopsis (a model plant for annual herbaceous dicots), grape (one model plant for perennial dicots), rice (a model plant for monocots) and Chinese pear (one of the economical fruit crops). In total, 58 GRF genes were identified, 12 genes in rice (Oryza sativa), 8 genes in grape (Vitis vinifera), 9 genes in Arabidopsis thaliana, 19 genes in poplar (Populus trichocarpa) and 10 genes in Chinese pear (Pyrus bretschneideri). The GRF genes were divided into five subfamilies based on the phylogenetic analysis, which was supported by their structural analysis. Furthermore, microsynteny analysis indicated that highly conserved regions of microsynteny were identified in all of the five species tested. And Ka/Ks analysis revealed that purifying selection plays an important role in the maintenance of GRF genes. Our results provide basic information on GRF genes in five plant species and lay the foundation for future research on the functions of these genes.
INTRODUCTION
Growth-regulating factors (GRFs) are plant-specific proteins. The first identified GRF, was rice OsGRF1 (van der Knaap et al., 2000). Subsequent studies found that GRF genes played a critical role in the regulation of plant growth and development (Kim et al., 2003;Horiguchi et al., 2005;Kuijt et al., 2014;Liang et al., 2014;Vercruyssen et al., 2015). In recent years, with the sequencing of tens of plant genomes, many GRF genes have been isolated and identified. The N-terminal of Arabidopsis GRF9 protein and Chinese cabbage GRF12 contain two WRC (Trp, Arg, Cys) structure domains (Kim et al., 2003;Wang et al., 2014), whereas the N-terminal of GRF proteins have one WRC and one QLQ (Gln, Leu, Gln) structure domain in the species studied (van der Knaap et al., 2000;Kim et al., 2003;Choi et al., 2004). The QLQ structure domain is similar to the N-terminal of SWI2/SNF2 in yeast, which could combine with SNF11 to form the chromatin remodeling complex (Treich et al., 1995). In addition, this structure domain could interact with the SNH structure domain in GIF (GRF-interacting factor) to form a functional complex to perform a transcriptional activation function . The WRC domain consists of one functional nuclear localization signal and one DNA binding motif (zinc finger structure), which is mainly involved in DNA binding. Moreover, the C-terminals of some GRF proteins also include the TQL (Thr, Gln, Leu), GGPL (Gly, Gly, Pro, Leu) and FFD structure domains (Kim et al., 2003;Choi et al., 2004;Zhang et al., 2008;Wang et al., 2014). The transcriptional expression of the GRF gene was found to be regulated by GA. For instance, after celery cabbage leaves were treated with GA3, the transcriptional expression levels of the BrGRF5, BrGRF8,BrGRF9,BrGRF11,BrGRF12,BrGRF13,BrGRF15, and BrGRF16 genes were increased by more than fivefolds and those of BrGRF2, BrGRF4, BrGRF7 were increased by more than 2-5 folds compared with controls . Moreover, miR396 is also involved in the regulation of GRF gene expression. The ath-miR396a gene of Arabidopsis thaliana was over-expressed and caused decreased GRF gene transcription levels (FG137771, FG165999, FG194560) in tobacco. Moreover, the petal, stamen and carpel of transgenic plants increased and fertility was reduced (Yang et al., 2009).
Although the GRF gene family has been reported in plants such as Chinese cabbage , Cucurbitaceae (Baloglu, 2014), Brachypodium distachyon (Filiz et al., 2014), and Zea mays (Zhang et al., 2008), both the mechanism of GRF gene expansion and specific evolutionary relationships remain elusive. Comparative genomic studies in plants would clarify the genome evolution by microsynteny analysis across different species. In our study, we analyzed the GRF genes from five flowering plant species including pear (Pyrus bretschneideri), poplar (Populus trichocarpa), Arabidopsis thaliana, grape (Vitis vinifera) and rice (Oryza sativa). By analysis of the phylogenetic relationship, intraspecies and interspecies differences in five plant species, gene duplication, origins and evolution were revealed. These results may contribute to the extrapolation of GRF gene function from one lineage to another.
Database Searches for Highly Conserved GRF Genes
In our study, the genomic data of pear, Arabidopsis, and Oryza sativa, were downloaded from the GigaDB database 1 , TAIR 2 , and the Rice Annotation Project 3 , respectively. And the genomic data of both poplar and grape were downloaded from the Phytozome database 4 . The WRC (PF08879.8) and QLQ (PF08880.9) domains were downloaded from the PFAM database 5 (Punta et al., 2011) and were separately blasted against the corresponding plant genomes based on the hidden Markov model (HMM) using DNATools software. Subsequently, SMART software (Letunic et al., 2012) and the PFAM database (Punta et al., 2011) were used to identify the sequences that contain the WRC (Kim et al., 2003) and QLQ domains (Kim et al., 2003).
Phylogenetic Analysis of GRF Genes
To construct the phylogeny of the GRF genes from the five flowering plant species, multiple sequence alignments for all amino acid sequences of the full-length GRF proteins were conducted using ClustalX software with the default settings. A phylogenetic tree was generated using the neighbor-joining (NJ) method using MEGA 7.0 software (Kumar et al., 2016) with the following parameters: pairwise deletion mode, Poisson correction, and bootstrapping (1000 replicates).
Gene Structure Analysis and Motif Detection
The exon-intron structures of the GRF family were drawn using the GSDS website (Guo et al., 2007) (Gene Structure Display Server 6 ) by comparing the coding sequences with their corresponding genomic sequences.
The conserved protein motifs were analyzed using the MEME online tool (Bailey et al., 2015) (Multiple Expectation Maximization for Motif Elicitation 7 with the following parameters: maximum number of motifs of 20 and the optimum width from 6 to 200 residues. Additionally, we used the Pfam (Punta et al., 2011) and SMART databases (Letunic et al., 2012) to annotate the structural motif. All of the GRF gene functional annotations were obtained from Gene Ontology (GO 8 ) using Blast2GO software (Conesa et al., 2005).
Microsynteny Analysis
Microsynteny analysis of chromosome segments containing GRF genes can identify and classify the expansion pattern of the GRF gene family. The physical locations of all GRF members on the chromosomes from pear, Populus, grape, Arabidopsis and rice were determined. If more than one gene family member was located in the same or neighboring regions of the genome, they were thought to be tandem duplications. If the two GRF genes were located on duplication blocks and their flanking proteincoding genes were highly similar at the amino acid level (Maher et al., 2006), they were thought to be large-scale duplication events. First, all of the GRF genes were located in the genome as the initial anchor point. Then, the protein-coding sequences 100 kb upstream and downstream of each anchor point were analyzed using the BLASTP program (Deleu et al., 2007) to identify whether the duplicated genes existed between the two independent regions. Then, the number of protein-coding genes exhibiting the highest non-self match (E-value < 10-10) (Sato et al., 2008) between the two flanking sequences of the anchor points was calculated. When four or more gene pairs with the synteny relationship were detected between the two regions, then FIGURE 1 | Chromosomal distribution and percentage share of GRF genes in pear, Populus, Arabidopsis, grape and Oryza sativa. The outermost ring represents chromosomes of pear, followed by Populus, grape, and Arabidopsis, and the innermost ring represents Oryza sativa. these two regions were thought to originate from large-scale duplication events.
Environmental Selection Pressure and Duplication Event Dating Analysis
The Ks and the Ka/Ks ratios of gene pairs in the duplication blocks were calculated. The protein sequences of gene pairs were compared using MUSCLE software (Edgar, 2004). And results from protein sequence alignment were used to guide the comparison of nucleic acid sequence codons using the PAL2NAL program (Suyama et al., 2006). The comparison results of the codons were imported into DnaSP software (Librado and Rozas, 2009). Then, the Ka/Ks and Ks ratios were calculated. In addition, the parameters used in the sliding window analysis were as follows: window size 150 bp and step size 9 bp.
Functional Divergence Analysis
The functional divergence (I type and II type) between each GRF subfamily was calculated using V3.0B1 DIVERGE (Gu et al., 2013) software in accordance with the constructed phylogenetic tree. Type I functional divergence occurred after gene duplication, which usually led to a selectivity change in a specific amino acid, i.e., change in the evolution rate. The coefficient θ I fluctuates from 0 to 1, reflecting weak to strong functional divergence between gene categories. Type II functional divergence occurred after gene duplication but only resulted in a change in the physical and chemical properties of amino acids (Gu, 1999(Gu, , 2006. The evolutionary rate difference coefficient θ I of each GRF subfamily and amino acid physico-chemical properties of the divergence coefficient θ II and the corresponding posterior probability (Qk) were obtained. If Qk > 0.9, it was inferred that the amino acid site should have a functional differentiation after gene duplication.
Identification and Chromosomal Location of GRF Genes in Five Genomes
A total of 58 GRF genes were identified from the five species studied, with 10 in pear (PbGRF01 to PbGRF10), 19 in poplar (PtGRF01 to PtGRF19), 9 in Arabidopsis (AtGRF01 to AtGRF09), FIGURE 2 | Phylogenetic analysis of GRF genes in pear, Populus, Arabidopsis, grape and Oryza sativa. The species background for each GRF protein is represented by different colors. Based on the bootstrap values and evolutionary distances, the tree was clustered into five subfamilies. Gene names are listed in Supplementary Table S1. The scale bar represents 0.1 amino acid changes per site. 8 in grape (VvGRF01 to VvGRF08) and 12 in Oryza sativa (OsGRF01 to OsGRF12), respectively. In addition, we determined the physical location of GRF genes on the chromosomes according to the overall search in the complete genome sequences of the five plant species (Supplementary Table S1). The results showed that the distribution of the 58 GRF genes among the chromosomes of the five species was not even (Figure 1). In the Arabidopsis and Oryza sativa genomes, the GRF genes were mainly distributed on chromosome 2, chromosome 3 (2) and chromosome 4 (2). For grape, GRF genes were found on chromosomes 2, 8, 9, 11, 16, and 18. In poplar, GRF genes were distributed on chromosomes 1, 2, 3, 6, 7, 12, 13, 14, 15, 18, and 19. In pear, we found GRF genes on chromosomes 2, 6, 7, 9, and 15. Additionally, three GRF genes (VvGRF08, PbGRF08, and PbGRF09) could not be mapped to any chromosome in the grape or pear genomes (Supplementary Table S1).
Evolutionary Analysis of GRF Genes in Rice, Grape, Arabidopsis, Populus, and Pear Using the well-described GRF proteins in representative plant species, including the monocots Brachypodium distachyon, Oryza sativa, Setaria italic, Zea mays, Sorghum bicolor and the dicots Arabidopsis thaliana, Populus trichocarpa, Glycine max, Citrus sinensis, Vitis vinifera, Cucumis sativus, Brassica rapa and Chinese pear, the evolutionary relationships between members of the GRF families proteins were evaluated through phylogenetic analysis. According to the nodes of the phylogenetic tree, the NJ tree could be divided into five subfamilies, designated as I, II, III, IV, and V, respectively (Supplementary Figure S1). Subsequently, to further understand the similarity and evolutionary history of the GRF genes in rice, grape, Arabidopsis, Populus and Chinese pear, we built an unrooted phylogenetic tree using the NJ method in the MEGA7 software (Kumar et al., 2016). The NJ tree showed that 58 GRF proteins were divided into five subfamilies (Figure 2), which was consistent with the result from phylogenetic analysis (Supplementary Figure S1). The topology of these two phylogenetic trees and the distribution of GRF gene in each subfamily were basically the same (Figure 2; Supplementary Figure S1). Therefore, we focused our research on the evolution of the GRF family members in rice, grape, Arabidopsis, Populus and pear. As shown in Figure 2, subfamily III contained the minimal GRF numbers (2), and subfamily I has the maximal GRF numbers (21), followed by subfamily V (16) and subfamily IV (12). Each of the five species (rice, grape, Arabidopsis, Populus and pear) contributed at least one GRF gene to subfamily I, subfamily II and subfamily V, whereas, the members of subfamily III and subfamily IV included one, two or three species. Subfamily III consisted of only rice (monocots) and subfamily IV consisted of grape, Arabidopsis, Populus, and pear (dicots). Therefore, we deduced that this phenomenon may correspond to a special gene expansion event (lost or obtained) during the evolutionary process (Supplementary Figure S1; Figure 2). In addition, according to the phylogenetic tree (Figure 2), we identified pairs of orthologous genes among the GRF genes: PbGRF01 and PtGRF16, and PbGRF06 and VvGRF06, and PbGRF04 and PtGRF01.
Structural Analysis of GRF Genes from Pear, Poplar, Grape, Arabidopsis and Rice
To gain more insights into the structural diversity of GRF genes, exon-intron pattern maps of the individual GRF genes were generated. As shown in Figure 3A, the 58 GRF genes contained different numbers of exons, varying from 1 to 6. We found that 27 GRF genes had four introns and 25 had three exons, three genes had six introns and one had five exons, one gene had two exons, and vGRF06 had only one exon. These results implied that both of exon gain and loss occurred during the evolution of GRF genes, which may help to explain the functional diversity of closely related GRF genes. Exon-intron structures of the paralogous and orthologous GRF gene pairs were further analyzed. Among these gene pairs, the exon number of five gene pairs exhibits exon-intron gain or loss variations, including PtGRF04/PtGRF06, OsGRF02/OsGRF07, PbGRF06/VvGRF06, PtGRF01/PbGRF04, and PtGRF09/VvGRF02 ( Figure 3A). In comparing the five gene pairs, PtGRF06, OsGRF02, PbGRF04 and PtGRF09 lost one exon whereas PtGRF04, OsGRF07, PtGRF01 and VvGRF02 gained one exon during the long evolutionary period. We speculated that these differences are possibly due to a single intron gain or loss event during the long evolutionary period.
Due to no high similarity among the 58 GRF genes, thus we used MEME software to identify the conserved motifs in the 58 GRF proteins. Twenty motifs were found on the GRF proteins ( Figure 3B; Supplementary Table S2). Subsequently, the Pfam and SMART databases were used to annotate each of the putative motifs. Motif 1 and motif 17 were identified to encode the WRC domain (Figure 4), and motif 2 was found to encode the QLQ domain (Figure 4), while the remaining motifs did not have function annotation. As shown in Figure 3B, the most closely related members in each subfamily exhibit common motif compositions (e.g., PtGRF10 and PtGRF18, VvGRF05 and PtGRF13), implying functional similarities among GRF proteins. Both motif 1 and motif 2 were present in all 58 GRF proteins and thought as the most conserved motifs. In addition, some subfamily specific motifs, such as motif 18 and motif 20 in the subfamily V, were also found, implying that they might be important for the functions of GRF proteins in this subfamily. To further understand the function of different GRF genes, we searched the GO Database using Blast2GO software. The results show that the 58 GRF genes contain common functions such as regulation of metabolic process, biological process, organic cyclic compound binding, molecular function, intracellular organelles, and cellular components (Supplementary Table S3).
Conserved Microsyntenies Were Found in Five Plant Species
In previous studies, microsynteny analysis of several plant species was performed to identify the location of homologous genes Lin et al., 2014;Wang et al., 2015;Cao et al., 2016b). In our research, microsynteny was investigated to infer the relationship of the GRF genes between eudicots (pear, Populus, grape and Arabidopsis) and monocots (rice). Additionally, since apple (Velasco et al., 2010) and pear (Wu et al., 2013) belong to the Rosaceae species, apple was also considered in the following analysis. The members of the GRF family of the six plant species (pear, apple, Populus, grape, Arabidopsis and rice) were used as anchor genes to clarify the molecular history of the surrounding chromosomal regions. Through pairwise comparisons and comparison of all of the proteins in the GRF gene flanking areas, the conserved microsyntenies were found in the six plant species (Figure 5).
Firstly, we identified intraspecies relationships of the GRF genes. A total of 14 collinear gene pairs in Populus, six in pear, four in apple, four in rice, two in Arabidopsis, and one in grape were found, respectively (Figure 5; Supplementary Table S4). Additionally, 15 GRF genes were not distributed in any microsynteny, such as PbGRF05 in pear and VvGRF01 in grape. These results revealed that there was not only a whole-genome duplication event but independent duplication events as well.
Subsequently, we analyzed the corresponding interspecies microsynteny in the six plant species. Twenty-five GRF genes were not present in the interspecies microsynteny analysis, including seven OsGRFs, six VvGRFs, five AtGRFs, three PtGRFs, two PbGRFs and two MdGRFs. By microsynteny analysis, 91 conserved syntenic segments were found (Figure 5; Supplementary Table S4). Among them, fourteen orthologous gene pairs were identified from pear and apple, 8 orthologous gene pairs were identified from pear and Arabidopsis, 7 orthologous gene pairs were identified from pear and grape, and 3 orthologous gene pairs were identified from pear and Populus. However, we did not find any orthologous gene pairs between pear and rice. These results may reflect that the relationship between pear and apple was closer than that between pear and grape/Arabidopsis/Populus/rice. Remarkably, some collinear gene pairs detected between pear and Arabidopsis/grape were not identified between pear and Populus, such as PbGRF07/VvGRF03, PbGRF08/AtGRF02, and PbGRF07/AtGRF05 (Figure 5; Supplementary Table S4), suggesting that these orthologous pairs were generated after Populus diverged from the common ancestor of pear and grape/Arabidopsis. In addition, we observed that two or more GRF genes from apple, Populus, Arabidopsis and grape matched one pear GRF gene (Supplementary Table S4), such as VvGRF03 and VvGRF04 orthologous to PbGRF07 and AtGRF02 and AtGRF08 orthologous to PbGRF08 (Supplementary Table S4), implying that these genes are probably paralogous gene pairs.
Gene Duplication of GRF Genes
The GRF gene family may have experienced many duplication processes, including whole-genome duplication, segmental duplication and tandem duplication, during evolution (Moore and Purugganan, 2003). To further understand the evolution of GRF genes, the gene duplication events of the GRF family were identified in five plant species (pear, Populus, grape, Arabidopsis and rice). The similarity of GRF flanking sequences was searched. If four or more genes in the upstream and downstream 100 kb of the two corresponding GRF genes obtained the best non-self match using the BLASTP program (E-value < 10-10), then these two regions were thought as the result of a large-scale duplication event. To avoid the GRF gene pairs that were located in the duplication region with the larger genetic difference, a set of relaxed criteria for gene gathering was defined according to a pair of flanking sequences of the GRF gene containing two or three conserved genes.
The pear genome contained nine GRF genes, eight of which (approximately 88.9%) were found in the duplication region of the genome (Figure 6E). In these gene pairs, six conserved genes were found in the flanking sequences of PbGRF02/PbGRF08; thus, this pair of genes was thought to have evolved from a largescale duplication event. Nineteen GRF genes were included in the Populus genome, and 17 of these genes (approximately 89.5%) were found to be distributed on the duplicated segments of chromosomes ( Figure 6A). As these genes (PtGRF03/PtGRF07, PtGRF05/PtGRF14, PtGRF05/PtGRF15, PtGRF04/PtGRF06, PtGRF11/PtGRF12, PtGRF11/PtGRF14, PtGRF12/PtGRF17, PtGRF13/PtGRF19, PtGRF14/PtGRF15, PtGRF08/PtGRF16) were located in high synteny regions (Figure 6A), their pairs were speculated to evolve from large-scale duplication events. In addition, the gene pair of PtGRF14 and PtGRF15 were located in the adjacent positions of chromosome 14 (Figure 6A), and therefore might be produced by tandem duplication. The Arabidopsis genome contained 9 GRF genes, four of which were found in the duplication regions of the genome. The flanking sequences of two gene pairs (AtGRF02/AtGRF08, AtGRF03/AtGRF06) had remarkable synteny ( Figure 6C) and were inferred to have evolved from large-scale duplication events. Moreover, eight of 12 GRF genes were found in the duplication regions of the rice genome ( Figure 6D). Conserved gene sequences were found in adjacent sequences of two gene pairs (OsGRF01/OsGRF06 and OsGRF02/OsGRF07), indicating that they were evolved from large-scale duplication events. In contrast, as OsGRF04 and OsGRF05 were located in adjacent positions on the same chromosome, the gene pair should be produced by tandem duplication. Only two of 8 GRF genes were found in the duplication regions of the grape genome. The synteny of the gene pair VvGRF03/VvGRF04 was weak in the duplicated region of the genome, and only two conserved genes were found on the flanking sequences ( Figure 6B).
Strong Purifying Selection for GRF Gene Pairs in Pear
To understand how gene duplications evolved into distinct GRF genes with different functions, we investigated the nonsynonymous (Ka) and synonymous (Ks) substitutions and the ratio of Ka/Ks during the evolution of the GRF gene family in five plant species. In general, Ka/Ks < 1 indicates negative or purifying selection with functional constraint, Ka/Ks = 1 indicates neutral selection, and Ka/Ks > 1 indicates positive selection.
In our research, the Ka/Ks ratios of all pear GRF paralogous pairs were less than 0.2 (Supplementary Table S5), indicating that the GRF gene family evolved under strong purifying selection. Thus, we concluded that the GRF genes were slowly evolved at the protein level in pear. Remarkably, 27 GRF gene pairs appeared to be under purifying selection (Figure 7), because of their Ka/Ks ratios less than 1. Subsequently, we performed a sliding-window analysis of Ka/Ks between each pair of GRF paralogous and further clarified the selection pressures in pear. As shown in Figure 8, most Ka/Ks across coding regions were much less than one, with exception for one or several distinct peaks (Ka/Ks > 1). Compared with the other regions (peaks), the WRC and QLQ domains generally had lower Ka/Ks ratios (valleys), consistent with functional constraint being dominant in these domains. Together with the sliding window and Ka/Ks analysis (Figure 8), we deduced that strong purifying selections might have played a key role in the evolution of GRF genes, especially for the WRC and QLQ domains in pear.
Functional Divergence in the GRF Gene Family
To understand whether amino acid substitutions in the GRF gene family caused adaptive functional diversification, DIVERGE 3.0B1 software (Gaucher et al., 2002;Gu et al., 2013) was used to estimate type I and type II functional divergence between gene subfamilies in the GRF gene family based on posterior analysis. In addition, because subfamily III contains only two sequences, which is less than 4 sequences required by the DIVERGE software analysis (Gaucher et al., 2002;Gu et al., 2013), we did not analyze subfamily III. Type I functional divergence usually results in a selective change in the specific amino acid, that is, the rate of evolution changes. The results revealed that θ I was not gained between subfamily V and subfamily I or subfamily IV. This may be caused by an ML of less than 0 ( Table 1). A card test (x 2) was performed for groups with a θ I . The P-values of groups I and II, I and IV, II and IV, II and V were less than 0.05, reaching a FIGURE 7 | Scatter plots of the Ka/Ks ratios of duplicated GRF genes in pear, Populus, grape, Arabidopsis and rice. The X-and Y -axes denote the synonymous distance and Ka/Ks for each pair, respectively. FIGURE 8 | Sliding window plots of duplicated GRF genes in pear. The gray blocks, from light to dark, represent the positions of the WRC and QLQ domains, respectively. The window size is 150 bp, and the step size is 9 bp. significant level. To avoid the occurrence of false positives, we determined the sites of posterior probability (Qk) > 0.9 to be key amino acid sites leading to functional divergence according to previous research methods (Yin et al., 2013). The results showed there were significant type I functional differences in the 295th amino acid between I and II, in the 231st and 326th amino acid sites between I and IV, and in the 295th amino acid site between II and IV.
Type II functional divergence occurred after gene duplication, which only resulted in the change in the physical and chemical properties of amino acids. As shown in Table 2, the type II functional divergence coefficients between any two subfamilies are relatively small, some even negative (groups for which θ II is negative were not included in the detailed analysis). Subfamilies II and V had three key amino acid sites (207, 327, and 331), and subfamilies I and IV had a critical amino acid site (231) at a locus that is a key site in the type I functional divergence analysis, suggesting that this locus may have a very close relationship with the change in GRF function ( Table 2). Three key amino acid sites (207, 327, and 331) were detected in both subfamilies II and V. And another key amino acid site (231) in the type I functional divergence analysis, was detected in both subfamilies I and IV, suggesting that this site may have a very close relationship with the change in GRF gene function.
DISCUSSION
Growth-regulating factors are plant-specific transcription factors that play key roles in plant growth and development. In our research, by searching local genome databases, 19, 12, 10, 9, and 8 GRF genes were identified in Populus, rice, pear, Arabidopsis, and grape, respectively. The GRF genes were divided into five classes, and orthologous pairs of pear and grape GRF proteins were more common according to the topology of phylogenetic tree, which revealed that some ancestor GRF genes existed before the divergence of pear and grape during evolution.
There exist functional differences of GRFs between the five plant species, which might be related to the diversity both of GRF genes' exon-intron structures and motif components. In our study, the 58 GRF genes contain different numbers of introns/exons, implying that there is diversity in the GRF genes of the five plant species. For example, the GRF gene AtGRF07 contains five exons, while the VvGRF06 has only one exon. Nevertheless, the most closely related GRF genes shared similar exon-intron structure and motif composition in the same subfamily, either in their exon lengths or intron numbers. Furthermore, different conserved protein motifs were present in individual GRF proteins based on the MEME analysis. The differences in these features among the subfamilies revealed that the GRF members were functionally diversified. Interestingly, all known GRF proteins have motif 1 and motif 2, which encode a conserved WRC domain (containing a Trp-Arg-Cys structure) (Kim et al., 2003) and QLQ domain (containing a Gln, Leu, Gln structure) (Kim et al., 2003), respectively. Among these domains, the WRC domain is known as the zinc-finger structure (Rushton et al., 1995). As shown in Figure 4 and Supplementary Figure S2, zinc-finger structures are tightly connected in WRC motifs, implying that this domain functions in DNA binding.
Based on the comparative genome analyses, although the chromosome numbers and genome sizes of different plant species were diverse, gene orders among the related species were still highly conserved in the process of million years of evolution (Devos and Gale, 2000). Comparisons among the GRF genes across the five plant species' genomic sequences implied the presence of one or more large-scale genome duplications during early evolution. Strong microsynteny was detected in the five dicot (pear, apple, Populus, Arabidopsis, and grape) genomes. In contrast, little or no microsynteny was detected between the five dicots (pear, apple, Populus, Arabidopsis, and grape) and one monocot (rice). For example, the low microsynteny (two pairs) of GRF genes from five dicots (pear, apple, Populus, grape, and Arabidopsis) and a monocot (rice) was probably because these plants are not closely related. Remarkably, the synteny blocks (14) in the Populus genome were much higher than the synteny blocks of the monocot (rice) and four other dicot (pear, apple, grape, and Arabidopsis) genomes, revealing that Populus GRF genes might have undergone large-scale duplication events during evolution, as shown in Figure 6A. Interestingly, we did not observe microsynteny relationships among OsGRF01-05, OsGRF07-12, PtGRF03, PtGRF07, PbGRF09, PbGRF10, MdGRF03, VvGRF03 and VvGRF04 and other 18 GRF gene members in these genomes studied, implying that these genes were either formed through complete transposition and loss of their primogenitors or ancient ones without detectable linkage to other GRF genes. Gene duplications are of the major driving forces for generating novel genes, which would help organisms adapt to complex environments. Both events of tandem duplication and large-scale duplication are the main patterns of gene family expansion in plants, such as the MYB gene family in pear (Cao et al., 2016a), CHS gene family in maize (Han et al., 2016), or MYB gene family in Setaria italica (Muthamilarasan et al., 2014). In the present study, a high frequency of GRF genes was distributed in duplicated blocks, implying that largescale duplications (whole-genome or segmental duplication) contributed to the expansion of the GRF gene family in plants.
The Ka/Ks of the 27 paralogous gene pairs suggest that purifying selection may be largely responsible for maintaining the functions of GRF proteins from the four dicots (pear, Populus, Arabidopsis, and grape) and one monocot (rice). Furthermore, the Ka/Ks of pear GRF paralogous gene pairs were less than 0.2, suggesting that these genes underwent slow evolutionary non-diversification following duplication. In addition, we detected strong positive selection in coding regions in several pear GRF gene pairs, implying functional differentiation.
We used DIVERGE software to analyze subfamily I and subfamily IV. In the GRF sequence analysis, we detected 231 key functional divergences in sites, and the analysis of type I and type II functional divergence assayed important amino acid sites that may lead to functional differentiation of GRF decisive sites; thus, our study provides a reference for subsequent researchers exploring GRF functional divergence.
CONCLUSION
In the present study, 58 GRF gene members were analyzed, including their physical location, phylogenetic relationship, conserved microsynteny, gene duplication and Ka/Ks. By phylogenetic analysis, these GRF genes were divided into five subfamilies. In each subfamily, we found that gene structure and motif distribution features were relatively conserved. Based on genome sequences of the five species (pear, Populus, grape, Arabidopsis, and rice), a comprehensive analysis of GRF genes was performed and the results showed a wide range of synteny and the presence of one or more large-scale genome duplications during early evolution. Our results suggest that large-scale gene duplication was the major pattern of expansion for the vast majority of GRF genes. These genes were under strongly purifying selection and maintained their functional stability. The systematic analysis might contribute to the extrapolation of GRF gene function from one lineage to another.
AUTHOR CONTRIBUTIONS
YuC conceived of and designed the experiments; YuC and YH performed the experiments; YuC analyzed the data; YuC, YH, QJ, YL, and YoC contributed reagents/materials/analysis tools; YuC and YaH wrote the paper.
ACKNOWLEDGMENTS
This work was supported by the Natural Science Foundation of China (grant 30771483, 31171944 and 31640068). We extend our thanks to the reviewers for their careful reading of and helpful comments on this manuscript.
SUPPLEMENTARY MATERIAL
The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fpls.2016.01750/ full#supplementary-material FIGURE S1 | Phylogenetic tree of the GRF proteins in monocots Brachypodium distachyon, Oryza sativa, Setaria italic, Zea mays, Sorghum bicolor and the dicots Arabidopsis thaliana, Populus trichocarpa, Glycine max, Citrus sinensis, Vitis vinifera, Cucumis sativus, Brassica rapa and Chinese pear. For the thirteen-species GRF gene tree, the GRF proteins of Brachypodium distachyon, Setaria italic, Zea mays, Sorghum bicolor, Glycine max, Citrus sinensis, Cucumis sativus and Brassica rapa were obtained from PLAZA 3.0 database, The neighbor joining tree was constructed using MEGA7 software. S4 | Synteny data in pear, apple, Populus, grape, Arabidopsis, and rice. The apple GRF gene family was obtained from The Apple Gene Function and Gene Family Database . | 2017-05-05T05:31:54.903Z | 2016-11-24T00:00:00.000 | {
"year": 2016,
"sha1": "1980dc0cc98f3703da0221d049ca1988eda1a7bc",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fpls.2016.01750/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "1980dc0cc98f3703da0221d049ca1988eda1a7bc",
"s2fieldsofstudy": [
"Biology",
"Environmental Science"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
85563714 | pes2o/s2orc | v3-fos-license | A Semi-Classical View on Epsilon-Near-Zero Resonant Tunneling Modes in Metal/Insulator/Metal Nanocavities
Metal/Insulator/Metal nanocavities (MIMs) are highly versatile systems for nanometric light confinement and waveguiding, and their optical properties are mostly interpreted in terms of surface plasmon polaritons. Although classic electromagnetic theory accurately describes their behavior, it often lacks physical insight, letting some fundamental aspects of light interaction with these structures unexplored. In this work, we elaborate a quantum mechanical description of the MIM cavity as a double barrier quantum well. We identify the square of the imaginary part of the refractive index of the metal as the optical potential, and find that MIM cavity resonances are suppressed if the ratio n/\k{appa} exceeds a certain limit, which shows that low n and high \k{appa} are desired for strong and sharp cavity resonances. Interestingly, the spectral regions of cavity mode suppression correspond to the interband transitions of the metals, where the optical processes are intrinsically non-Hermitian. The quantum treatment allows to describe the tunnel effect for photons, and reveals that the MIM cavity resonances can be excited by resonant tunneling via illumination through the metal, without the need of momentum matching techniques such as prisms or grating couplers. By combining this analysis with spectroscopic ellipsometry on experimental MIM structures, and by developing a simple harmonic oscillator model of the MIM for the calculation of its effective permittivity, we show that the cavity eigenmodes coincide with low-loss zeros of the effective permittivity.
experimentally demonstrated that MIM nanocavities manifest ENZ response at their resonant modes, 35 which has been exploited to enhance the photophysical properties of fluorophores interacting with the MIM in a weak coupling regime. 35,36 Despite the experimental observations, the physical nature of the epsilon-near-zero response in MIMs remains still obscure.
In this work, we analyze the properties of MIM cavities with two novel and complementary approaches. First, we show that the MIM can be seen as a quantum mechanical potential well, with even and odd eigenmodes that correspond to tunneling maxima. Then we describe the MIM as a classical harmonic oscillator and derive a simple and useful equation for its effective dielectric permittivity, relating the eigenmodes and tunneling maxima with the ENZ frequencies.
Rationalizing the MIM cavity resonances as ENZ and resonant tunneling modes straightforwardly elucidates that they can be excited without the need of a grating coupler or prism for momentum matching, as it would be required for coupling to surface plasmon polaritons. 37,38 The homogenization of the MIM's dielectric permittivity allows us to treat it as an effective potential barrier, reducing the complex problem of the propagation of the photon through a real double potential barrier to tunneling through a single barrier whose height is given by the square of the effective refractive index. The vanishing wavevector at the ENZ wavelength reduces the wavefunction of the photon within the barrier to a constant, enabling resonant tunneling of photons, in analogy to an electron impinging on a potential barrier with energy equal to its height.
The models are applied to physically fabricated samples with silver (Ag) as metal and alumina (Al2O3) as dielectric, and compared with classical electromagnetic theory. We obtain a perfect match between all approaches and experiments that corroborates the quantum mechanical and harmonic oscillator interpretations. We put the classical-quantum analogy for metal/dielectric systems on solid ground by identifying the square of the imaginary part of the refractive index of the metal as the optical potential, and by discussing the necessary conditions to obtain a Hermitian system that is essential for quantum mechanical treatment.
Results and Discussion
The wave nature of the propagation of light in classical optics, and of electrons in quantum mechanics can lead to several analogies between these two fields that can provide additional insights. A classic textbook example is the propagation of light in a medium with refractive index n (typically glass) under total internal reflection conditions across a thin air gap that leads to frustrated internal reflection. 39 This behavior is to some extent analogous to the tunneling effect in quantum mechanics, and we can derive a similarity between the time-independent Schrödinger equation, expressed as (1) and the Helmholtz equation by choosing an optical potential whose associated wavevector is a function of the complex refractive index of the j th material in which the propagation takes place (see Supporting Information (SI) section 1). 40,41 The wavevector k of the photons travelling in the material perpendicular to the interface can be expressed as = 4 5 7 ( ) − 7 ( ):, where 4 = )< = is the wave vector in the vacuum, > 7 = 7 ( ) − 7 ( ) is the complex refractive index of the j th material composed by a real part 7 ( ) and an imaginary part 7 ( ). In eq (1) m is the electron mass, U the potential, and E the energy.
Such a similarity has been fruitfully used, for example, in describing the properties of a particular class of non-Hermitian systems constituted by the so-called parity-time symmetric 6 potentials. [42][43][44][45][46][47] The study of these systems unveiled the role of the square of the refractive index as the "optical potential" acting on photons. The further expansion of the quantum analogy to metal-insulator systems encounters the problem of hermiticity of the Hamiltonian, that is required to obtain physical solutions, but which is not necessarily satisfied for the given material system.
In particular, the Hamiltonian associated to eq 2 is non-Hermitian. However, if either the real or the imaginary part of the refractive index are negligible, then the second term in eq 3 becomes a real number, and hermiticity is fulfilled. Concerning metals and dielectrics, the imaginary part of the refractive index of most of the dielectrics is negligible, while for plasmonic metals the imaginary part dominates. 48 In dielectrics, the wavefunction is a propagating wave ! ( ) = Ae Dik G H I # , while in metals it is essentially an exponentially decaying wave * ( ) = Ae DJ G K I # . Consequently, the wave vector is ! = 4 ! for a dielectric and * = ik 4 * for a metal. This reciprocal behavior ensures hermiticity in eq (2), and justifies the quantum mechanical analogy for many MIM systems.
However, not all the metals have a refractive index that ensures hermiticity, as we demonstrate in Figure 1. For metals, where the ratio between the real and imaginary parts of the refractive index exceeds a certain value, hermiticity is strongly violated and the corresponding eigenvalues acquire significant complex parts. From Scattering Matrix Method (SMM) 49-51 modeling we find that the critical ratio is at 0.2 (+/-0.05); above this value, the resonant cavity modes cannot be sustained.
This restricts the spectral range in which resonances can be engineered, as shown in Figure 1a for different metals. For Ag (black curve), the ratio n/ is always smaller than 0.2 in the visible range, which makes this material an ideal candidate for the study of ENZ resonances. In the case of Au (green curve), the ratio n/ is very large in the inter-band transition range, while for Al (red curve), the non-Hermitian band lies within its well-known plasmon absorbance band, at 675 -1000 nm.
Interestingly, Mg (blue curve) is Hermitian throughout the visible range. In the following we will only consider MIM structures with Ag as a metal and Al2O3 as a dielectric.
8 MIM nanoresonators are the nanometric equivalent of the dielectric Fabry-Perot cavities. 32,34,52,53 As such, they can be treated as optical cavities with discrete resonant modes, albeit with a very short optical path. Within the quantum mechanical analogy, we can study a MIM system as a square potential well, where the metals constitute the barriers and the dielectric the well. A special case is a MIM cavity whose metallic layers that are much thicker than the skin depth, which can be treated as a finite square potential well, as depicted in the sketch of Figure 2a.
9
The height of the barrier is given by * ) . The wavefunction of the cavity mode exponentially decays to zero in the thick metals, while a standing wave is formed inside the cavity. The solutions of this system are well known, 40,41 and can be expressed in terms of the refractive index of the materials. For the symmetric modes we obtain: and for the antisymmetric ones: When the thickness of the metallic layer is decreased towards its skin depth, the tunneling probability of the photons through the barriers becomes significant. Therefore, the associated wavefunction accumulates a phase delay equal to the tunneling probability of the photon through one single metal barrier that can be approximated by the expression (D)J G K Q O Q ) (see SI section 2). 40 The analytical dispersion of such a "leaky" MIM can be found by adding the phase delay to the expressions for the finite square quantum well with infinite thickness of the barriers: Equations 6 and 7 are derived in section 2 in the SI. Figure 2c shows that the wavelength of the The double barrier system is schematically illustrated in Figure 3a, where the eigenmodes are depicted by solid lines, and the tunneling is sketched by the dashed lines. Tunneling through a double potential barrier with a probability approaching unity is known as resonant tunneling. [54][55][56][57] In real systems, however, the transmission is reduced due to losses and manifests a maximum at the resonant tunneling wavelength. [58][59][60][61][62] The wavelengths of the eigenmodes (from eqs 6 and 7) and the tunneling maxima as a function of the dielectric layer thickness are plotted in Figure 3b (see SI section 1 for details), together with the results for the transmittance and absorbance peaks from classical SMM calculations (as illustrated in Figure 3c), and show perfect matching. This agreement corroborates our quantum modeling, and furthermore demonstrates that the tunneling maxima correspond to the eigenmodes of the leaky quantum well, and transmittance maxima coincide with absorbance maxima, which at first seems counterintuitive. Finally, the odd modes are quenched for thin dielectric layers, when tan( 4 ! O P ) ) < [ is then the restoring force, and W * is the effective mass of the electron in the cavity.
The time dependence of the displacement [ can be readily obtained: The total dipole moment of the electrons inside the MIM cavity can then be expressed as: Where q is the electronic charge, and N is the density of free carriers. The electric displacement vector induced in the MIM cavity can then be seen as sustained by an effective permittivity εeff, MIM: c c⃗ MIM = 4 c⃗ ( ) + c ⃗ MIM = 4 eff,MIM c⃗ ( ); (11) and with eqs (10) and (11) we obtain a useful expression for eff,MIM : , and we rewrite eq (12): The response of the MIM is strongly related to the properties of the constituent metal, and eq 13 can then be integrated with the well-known Drude model, 63 leading to the effective permittivity of a MIM system with Ag as metal: 5k " Dk G,lml " nop lml k: ; here we used the classic parameters for Ag: γAg=0.021 eV and ωP=2200 THz (corresponding to 9.1 eV). 63 16 With the approach of an effective permittivity, the MIM structure can be treated as an artificial, homogenized layer that can have either metallic ( eff,MIM < 0) or dielectric ( eff,MIM > 0) properties, or which acts as an ENZ layer when eff,MIM = 0. This behavior, seen as photons impinging on the effective MIM has insightful quantum mechanical analogies. By comparing eqs 1&2, we see that the potential energy U(x) is the equivalent of −( 7 − 7 ) ) . By fixing ( ) = 4 we can express the optical potential in terms of wavevector as )
)
⁄ , and three scenarios can be compared, as illustrated in Figure 5. The photonic case of a MIM as an effective metal corresponds to electron tunneling through a potential barrier (Fig. 5a,b). Here k is purely imaginary in both cases and the wavefunctions are evanescent waves. The MIM as an effective dielectric is analogous to electrons scattered on a potential well (Fig. 5c,d), the wavevector is positive and the wavefunctions are real propagating waves. The case of corresponds to electrons with an energy equal to the barrier height ( Fig. 5e,f), consequently the wavefunction is constant inside the barrier, which enables resonant tunneling.
In this case, it is straightforward to conclude that the wavevector for an electron is zero, since And for a low imaginary part (which is always the case for the considered resonances): We demonstrated in Figure 3 that the resonant tunneling wavelengths coincide with the quasibound modes of the leaky cavity. Moreover, in correspondence of these modes a near-zero permittivity has been experimentally measured. A special case of an asymmetric MIM structure with a very thin top, and very thick bottom metal layer acts as a superabsorber. In the following, we will show that the quantum treatment is also capable of describing this, from an application point highly relevant, structure. Figure 6a shows the experimental reflectance of such a structure, where the top Ag layer has a thickness of 40 nm, the dielectric of 160 nm, and the bottom Ag layer acting as backreflector is 200 nm thick. Two pronounced minima in reflectance can be identified. We note that the minimum at small wavelengths corresponds to the odd mode that appears only for cavities with a thickness of the dielectric layer that exceeds a threshold value. We will show in the following that these minima in The red arrows are a guide to the eye, indicating the radiation direction perpendicular to the surface plane. Red (blue) color corresponds to high (low) electric field.
20
Although the symmetry in the superabsorber MIM structure is broken, we can find equations for the quasi-bound eigenmodes and apply our quantum mechanical model as follows: by considering an infinitely thick barrier for the backreflector in our model, as illustrated in Figure 6b, we can allocate the entire phase shift introduced by the tunneling to the thin barrier. Since the photon tunnels twice (in and out), the phase shift is twice as large as in the one in eqs 6 and 7 that were derived for symmetric barriers. With this consideration, we obtain the analytic dispersions for the symmetric and anti-symmetric modes as: The resonance wavelength of a superabsorber can be tuned via the thickness of the dielectric layer. Figure 6c plots the ellipsometrically measured and SMM simulated absorbance for dielectric layers with 85 nm and 162 nm thickness. The experimentally detected absorbance is above 95% for all the resonances, even and odd. The Q-factor of the even mode at long wavelength is 48, which is a very high value for plasmonic and ENZ resonances. [66][67][68] Figure 6d shows the electric field distribution at resonance under excitation with a monochromatic wave at λ = 690 nm incident at an angle of 40° as depicted by the white dashed arrow. Interestingly, we observe a lateral (Xdirection) distribution of the electric field for several microns, which indicates that the photons at the ENZ mode propagate in the cavity, which might be related to slow light trapping in ENZ materials, as reported by Ciattoni et al. 65 Following their analysis, we take the dispersion relation for transverse plane waves, ( ) = ( ⁄ • ( ) , and calculate the group velocity ~ ( ) = ⁄ . With the relation for the dielectric permittivity of the effective MIM at resonance in eq 14, we can express the group velocity as: where ( ) = [69][70][71][72][73] The response in Figure 7b is slightly non-linear, with a higher sensitivity for higher refractive index, which results in different working regions: for 1<ns<2.2, typical of the most common oxides, the sensitivity is around 25 nm/RIU, for refractive index higher than 3 it is around 45 nm/RIU. The sensitivity depends also on the top metal layer thickness, and can be tuned to more linear behavior with a slightly reduced sensitivity with an increased metal thickness, of for example 20 nm. We note that the performance of the superabsorber refractive index sensor cannot compete with highly sophisticated metamaterial sensors tailored for biomolecule detection as reported in literature. 74,75 However, our system can find an application range as sensors in thin film technologies, where larger changes in refractive index should be detected, or the effective refractive index of a 23 composite material should be measured, for example of a nanocrystal solid, or where the layer thickness of a dielectric with a known refractive index is of interest.
Conclusions
We analyzed the MIM system with a quantum mechanical approach that reveals the cavity modes as even and odd eigenmodes of a quantum well system. This treatment enables analytical solutions for the resonance frequencies in two practically highly relevant cases: symmetric MIMs,
Fabrication and characterization of the MIM structures.
The MIM structures have been fabricated by a multistep process that consists of deposition of (i) the metal (Ag), and (ii) the dielectric (Al2O3) layers. For (i), electron-beam induced thermal evaporation (Kurt J. Lesker PVD 75) of Ag on a glass substrate was employed to obtain a Ag layer with desired thickness, then followed by deposition of 10 nm Al2O3 inside the same system to prevent Ag from oxidation. For (ii), the Al2O3 was deposited in an atomic layer deposition (ALD) system (FlexAl from Oxford Instruments) using a thermal deposition process with a stage 25 temperature of 110 °C, resulting in an alumina deposition rate of 0.09 nm/cycle. Trimethylaluminate (TMA) and H2O were used as precursors. A heating step of 300 s was performed before starting the ALD cycles. Each ALD cycle consisted of a H2O/purge/TMA/purge sequence with a pulse durations of 0.075/6/0.033/2 seconds, respectively.
The characterization of the optical properties of all the fabricated multilayer structures has been performed by spectroscopic Ellipsometry with a Vertical Vase ellipsometer by Woollam, in the range from 300-900 nm. Spectroscopic analysis has recorded at three different angles (50°, 60° and 70°) with a step of 3 nm. P-Polarized reflectance and transmittance measurements have been measured via ellipsometry as well, in a broad range of angles, in which falls the case of 40° discussed in this work. The resolution of recorded spectra is 3 nm, and all spectra have been normalized to the intensity of the Xe lamp.
Modeling and Simulations.
Finite | 2019-03-30T13:04:45.394Z | 2019-03-28T00:00:00.000 | {
"year": 2020,
"sha1": "d84018bbbaa9447911ca402bfd05d35ea71a8eb4",
"oa_license": "CCBYNCND",
"oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acs.nanolett.9b00564",
"oa_status": "HYBRID",
"pdf_src": "Arxiv",
"pdf_hash": "64ccd2790c20bd93b00df969617b0a56525bcec8",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics",
"Medicine"
]
} |
118994561 | pes2o/s2orc | v3-fos-license | High-order density-matrix perturbation theory
We present a simple formalism for the calculation of the derivatives of the electronic density matrix at any order, within density functional theory. Our approach, contrary to previous ones, is not based on the perturbative expansion of the Kohn-Sham wavefunctions. It has the following advantages: (i) it allows a simple derivation for the expression for the high order derivatives of the density matrix; (ii) in extended insulators, the treatment of uniform-electric-field perturbations and of the polarization derivatives is straightforward.
I. INTRODUCTION
Linear response methods, 1,2 within the density functional theory approach (DFT), 3 have been successfully applied to compute a wide range of properties in real materials such as phonon dispersions, dielectric constants, effective charges, 1,2 and NMR spectra. 4 Beyond linear response, perturbation theory applied to the Kohn-Sham (KS) orbitals allows the calculation of the derivatives of the energy at any order. 5 This kind of approach has two disadvantages: (i) although the final result is gauge invariant, i.e. invariant with respect to an arbitrary unitary rotation in the space of the occupied KS-orbitals, 5 the formulation of the theory depends on the chosen gauge. This becomes apparent in the application of the KS-orbitals orthonormality constraints at high order. (ii) In the case of periodic systems, the treatment of a perturbation due to a uniform electric field is not trivial, because the position operator, necessary to describe such a perturbation, is ill-defined in periodic boundary conditions. Much effort has been devoted throughout the years to overcome this last problem. Early treatments of the electric field perturbation for the calculation of the second and third order susceptibilities are particularly complex. 6 A simpler formalism for the calculation of the second order susceptibility was obtained in Ref. 7, taking advantage of the 2n + 1 theorem and of a Wannier representation of the orbitals. Only very recently Nuñes and Gonze 8 were able to give an expression for the derivatives of the DFT energy, with respect to uniform electric fields, at any order, by introducing in the Hamiltonian an additional term depending on the polarization Berry phase. 9 We remark that, although a perturbation due to a macroscopic uniform electric field is ill-defined on an individual Bloch states, such a perturbation is well defined on individual Wannier states, 7,10 which can be obtained by a different choice of gauge. This consideration suggest that the two problems, mentioned in the previous paragraph, are related, and that both problems might possibly disappear using a perturbative approach which is not based on the perturbative series of the single KS orbitals, but is solely based on the properties of the electronic density matrix ρ, which is a gauge independent operator.
In a recent paper 11 we gave an expression for the second order derivative of ρ which allowed the efficient computation of Raman spectra. 11 In the present paper we derive a general expression for the n-th order derivative of ρ, using the two relations ρ 2 = ρ, and [ρ, H] = 0, being [, ] a commutator and H the KS Hamiltonian.
To fix our notation we define the electronic density matrix as where, throughout the paper, v or v ′ is an index running on the occupied valence states, and |ψ v are normalized KS-eigenstates, i.e. H|ψ v = ǫ v |ψ v . Given a perturbation associated with a small parameter λ, for a generic quantity F , we consider the perturbation series: The generalization to the case of different perturbations λ 1 , . . . λ n is straightforward. ρ and H stand for ρ(λ) and H(λ). We call P V , and P C , respectively, the projectors on valence and conduction band states, i.e. P V = ρ (0) , P C = 1 − ρ (0) . Given an Hermitean operator A we define The work is organized as follows. In Sec. II, we use the relation ρ 2 = ρ to express ρ (n) as a function of the operators ρ CV that can be easily computed using standard linear response techniques. In Sec. IV, we show that, within our formalism, the perturbations due to a uniform electric field are well defined in extended insulators. In Sec. V, we derive a simple expression for the derivatives of the polarization.
We decompose ρ in ρ CC + ρ V V + ρ CV + ρ V C , and we consider these four terms separately. The idempotency condition, ρ 2 = ρ, implies that P C ρP C = P C ρρP C = P C ρ(P C + P V )ρP C , or When all the eigenvalues of ρ CC are lower than 1/2, i.e. for λ sufficiently small, this relation between the two operators ρ CC and ρ CV ρ V C can be inverted to obtain: where the right-hand side denotes the operator obtained substituting ρ CV ρ V C in the Taylor series In a similar way, defining When all the eigenvalues of ∆ρ V V are larger than −1/2, i.e. for λ sufficiently small, ∆ρ V V can be expressed as a function of ρ V C ρ CV : Finally, ρ (n) can be expressed as a function of the {ρ and taking the n-th order variation of Eq. (2) and Eq. (4) through Eq. (3). As examples, observing that ρ Note that each ρ CV is a gauge independent operator. In Eqs.
V C plus a commutator, for n ≤ 4. This property is used in Sec. V to compute the derivatives of the polarization. It holds at any order n. Indeed, as we show in the appendix: where n ≥ 2, and In order to compute ρ (n) CV we introduce the wavefunction |η being an unperturbed KS eigenvector. We have: Equating to zero the n-th order term of the perturbation series of [H, ρ] = 0, we find: Multiplying this relation on the left by P C and applying to |ψ to the right, we derive: Solving the linear system of Eq. (13) one can obtain |η (n) v and, thus, ρ (n) . Since the right-hand side of Eq. (13) depends on H (n) , that in turn depends on ρ (n) , the system is to be solved self-consistently, e.g. by using an iterative procedure. Eq. (13) needs to be solved only for a finite number of |η (n) v functions, running the index v on the sole valence states. The linear system of Eq. (13) is analogous to the one that is to be solved in the standard density functional perturbation theory (DFPT), 1,2 thus Eqs. (10,12,13) give an efficient algorithm that can be easily implemented in available DFPT codes (as the PWSCF 12 or the ABINIT 13 code), to compute the derivatives of ρ at any order.
Alternatively, Eq. (13) can be written as whereG c ) is the unperturbed Green function operator projected on the conduction band, and the sum c is restricted to the empty conduction-band states. From Eq. (14) one can recognize that |η wavefunctions, for the three lowest order: We already used Eq. (16) in Ref. 11, to compute the Raman tensor.
IV. TREATMENT OF THE ELECTRIC FIELDS
Thanks to the commutators in Eq. (13), all the quantities needed to compute ρ (n) are well defined in an extended insulator, even if the perturbation λ is the component E α of a uniform electric field, i.e., if H (1) = −er α + ∂V Hxc /∂E α , 14 being r α the α th Cartesian component of the position operator r, e the electron charge, and V Hxc the Hatree and exchange-correlation potential. In particular, in an insulator, the commutator [r, ρ (n−1) ], which appears in Eq. (13), is a well-defined bounded operator, since the variation of the density matrix is localized ( r ′′ |ρ (n−1) |r ′ goes to zero exponentially for |r ′′ − r ′ | → ∞).
To prove the localized nature of ρ (n−1) in a periodic system, we notice that ρ (n−1) can be written (see Eq. (16)) as a sum of operators of the type where |α kv and |β kv are Bloch wavefunction, i.e. |α kv = e ik·r |α kv / √ N and |β kv = e ik·r |β kv / √ N , being N the number of unit cells, |α kv and |β kv wavefunctions periodic in the lattice, normalized on the unit cell. In an insulator, the operators are analytic in k and periodic in the reciprocal space. Cloizeaux has shown in Ref. 16 that an operator having the properties of D is exponentially localized.
The representation of ρ (n−1) in terms of D is also useful to obtain a practical expression for the calculation of the [r, ρ (n−1) ] commutator. In the limit of a converged kpoints grid, since the integral over its period of the derivative of a periodic analytic function is zero. Ω c is the unit-cell volume. From this it can be easily demonstrated that The terms required in Eq. (13), when the perturbation is a uniform electric field, can thus be computed using kv / √ N , and the bra-ket products on the right-hand side are performed on the unit cell. In practical implementation, the derivative with respect to k α in the right-hand side of Eq. (19) can be computed numerically by finite-differentiation, using an expression independent from the arbitrary wavefunction-phase, as in Refs. 8,15.
V. DERIVATIVES OF THE POLARIZATION
Finally, with the present formalism, the computation of the n-th order variation of the polarization density P (n) , becomes natural. The components of P (n) can be written as: where the factor two accounts for the spin degeneracy and T r{A} is the trace of the operator A. We substitute ρ (n) from Eq. (10) where n ≥ 2, Im(z) is the imaginary part of the complex number z, and we have written the operators O kv .
VI. CONCLUSIONS
Concluding, we presented a formalism for the calculation of the derivatives of the electronic density matrix at any order, within the density functional theory approach. Beside being simple, this formalism allows the treatment of extended systems in the presence of an external uniform electric fields in a natural way, without introducing in the Hamiltonian an additional term depending on the polarization Berry-phase.
APPENDIX
The operators defined in Eq. (11) are well defined for λ sufficiently small, since the series Eq. (10) of the text easily follows.
Writing O (n) V C as a function of ρ (i) CV , at the lowest orders we have: where * is a sum on positive integers. These equations allows to compute ρ (n) , with n ≤ 6. | 2019-04-14T02:03:12.211Z | 2003-07-24T00:00:00.000 | {
"year": 2003,
"sha1": "92cc43a7b5d2512810070f0731b483fd286bb2ab",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/cond-mat/0307603",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "92cc43a7b5d2512810070f0731b483fd286bb2ab",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
} |
54966180 | pes2o/s2orc | v3-fos-license | Responses of Marsilea minuta L. to Cadmium Stress and Assessment of Some Oxidative Biomarkers
In a hydroponic based experiment, the Cd toxicity is monitored with some cellular responses of Marsilea plant. Initially, plants were grown under varying concentrations (0, 50, 100 and 200 μM of Cd) of cadmium (Cd) with supplementation of 2 mM spermidine (Spd). The oxidative stress developed by Cd overaccumulation was measured with fall in Relative Growth Rate (RGR) by 27.11% to 59.83% growth reduction over control under varying Cd treatments. The retrieval of RGR was recovered by 1.59 folds as compared to the highest concentration of Cd (200 μM) when plants were fed with Spd. A concomitant degradation of chlorophyll was recorded in dose-dependant manner, however, the retrieval was not much pronounced with Spd. On the contrary, the nonoxidant thiol had borne more clarity with ongoing Cd concentration and appeared to be 40.51% increase maximally for GSH: GSSG at the highest concentration of Cd. Spd has minimized the ratio by 27.4%. The recovery of osmotic turgidity was indexed with a sharp rise in glycine betaine by 3.86 folds maximum at the highest concentration of Cd over control which declined by 30.9% with Spd. Another cellular response of treated plants was more evident from their isozymic profiles with regard to superoxide dismutase (SOD), catalase (CAT), guaiacol peroxidase (GPX). The intensity of protein expression was significantly variable but not in band numbers as evident from Cd treated plants. In vitro enzyme assay of catalase showed as declining trend within the limit of 33.13% to 43.22% which was reported by 1.45 folds when Spd was applied. Therefore, from the present study, the cellular responses of Marsilea plant which showed compatibility for their expression with Cd toxicity could be hypothesized as a case of bioindication. Malay Kumar Adak and Kingsuk Das contributed equally. Corresponding author.
Introduction
Cadmium (Cd), a divalent cation is widely accepted as an effective pollutant with its higher uptake, translocation and thus its bioavailability in plants.Cd with its wider availability in soil, particularly, in the industrial zones frequently poses a threat for the sustainability of plant species.Cd is not behaved as a typical redox metal, still, after absorption by the roots following translocation through the aerial parts, it affects the whole cellular metabolisms.Through the retardation of ion homoeostasis in roots, Cd becomes more vulnerable at the leafy shoots for inhibition of photosynthesis, non-functioning of stomatal behavior, inhibition of respiratory flux, inadequate photoperiodic responses and fall in growth that results in less yield [1]. Apparently, a number of physiological symptoms have been obvious for Cd toxicity in plants including chlorosis, mortling of veins, browning of the root tips, drying of shoot apex, immature defoliation etc.At the cellular level, otherwise, Cd is more offered as a pro-oxidant, since a series of reactive oxygen species (ROS) are accumulated in excess in affected tissues.Inhibitions of electron transport between photosystems and other electron transport pathways, Cd interferes in proper oxidation of oxygen into water and results reduction of oxygen into various ROS.Still, a number of plant species are frequently reported to withstand the Cd toxicity despite excess accumulation in the tissue level without altering the growth significantly.Commonly those species are regarded as Cd hyperaccumulator and are listed predominantly from angiosperm species [2].Since oxidative stress is a confirmatory index for Cd toxicity, higher plants which are quite common for biomonitoring and ecotoxicological determination for Cd toxicity.Therefore, monitoring of Cd phytotoxicity could employ the anti-oxidation pathways induced in concerned plant species.Cd with its fairly solubility in water is also regarded as relevant pollutant in water bodies and thus hydrophytes are more prone to susceptible of Cd toxicity.Most of the cases, food chain contamination and its magnification by Cd start from such hydrophytic species, particularly in the industrial zone [3].
In comparison to higher plants, non-angiospermic species are less explored in this regard where accumulation of heavy metals is tolerated with improved physiological traits for their sustenance.Non-vascular plants mostly pteridophytic species are very often referred to have their wider tolerance under heavy metal stress and thereby suggested as hyperaccumulator species.As example, Pteris is quite common to hyperaccumulate a few heavy metals like arsenic (As), nickel (Ni), cadmium (Cd), lead (Pb), zinc (Zn) etc. [4].With this fern species, the other aquatic fern species have also been attempted to exploit their hyperaccumulation property if any.In earlier communication, Salvinia natans L., a commonly occurring aquatic fern recorded significant accumulation of metal in its biomass with wider range.The distribution of metal in intercellular spaces had shown its strategy to avoid contamination.Thus, in continuation we report the potential of Marsilea minuta L., another aquatic fern species which has been chosen to decipher the impact of Cd mediated oxidative stress under Cd toxicity.The cellular responses were taken in the present experiment in terms of compatible solutes like glycine-betaine, antioxidative enzymes including guaiacol peroxidase, catalase and superoxide dismutase which were employed to assess the potentials of Marsilea plant.
It is quite obvious that cellular responses are modulated under metal toxicity or any other abiotic stresses in compulsion with any signals due to some elicitors.A number of chemical compounds either endogenous in origin or applied exogenously are reported to be inductive in metal tolerance in plants.Out of those polyamine might be an effective moiety in consideration of mitigation of oxidative stress.Oxidative stress with regards to polyamine metabolism has been dealt in new aspect though not fully understood the regulation of ROS generation and its concommiant mitigation by polyamine application has defolded in many aspects of oxidative stress sensitivity in many plant species [5].Marsilea minuta L., a tropical fern species has extended their adaptability in contaminated water bodies where Cd appears to be one of the serious pollutants.Whatever the possibility, the amplification of cellular responses adhered to oxidative sensitivity out of metal stress in any hyperaccumulating species has been wider opportunity for phytoremediatory aspects also.
Therefore, here in, we briefly summarize the impact of spermidine application in interaction of Cd mediated oxidative stress in Marsilea plants.In the present communication, we, perhaps the first time to report the Mar-silea plant, a fern species reacted with polyamine (spermidine: a triamine) under Cd stress with the illustration of some cellular responses.Polyamines play a significant role in modulating different types of abiotic stresses [6].We assume that the Marsilea plant and the exercised physiological parameters could be defolding the aspects of pointing biomarker for Cd rich soil.Therefore, a plant species, particularly, from non-angiospermic fern groups would be a new material for biomonitoring purposes in Cd affected soil.Moreover, Marsilea species which is abundantly and widely grown vegetation in tropical atmosphere could also be hypothesized to be potential material for Cd hyeraccumulation.
Plant Growth and Treatment
Marsilea minuta L., the experimental material was collected from the industrial belts of Kalyani, Nadia, W.B., India.The plants were thoroughly washed and were grown for 7 days in Hoagland's solution.Thereafter, plants were supplemented with varying concentrations (0 μM, 50 μM and 100 μM and 200 μM) of cadmium chloride dissolved in the same solution in different sets.Moreover, in another one set, 2 mM spermidine was supplemented with 200 µM of CdCl 2 solution.The plants were then grown for 7 days in growth chamber (37˚C ± 1˚C), 85% relative humidity and 14 h light (irradiance 72 -80 µM/m 2 /s) and 10 h in dark.The nutrient solution was changed at every two day.On completion of the incubation period, the plants were sampled and frozen in liquid nitrogen and stored at −80˚C for further biochemical assays.
Determination of Relative Growth Rate
For the analysis of relative growth rate, the plants were recovered from Cd doses, washed thoroughly with deionized water and made completely dry in hot air oven under 80˚C for 5 days for constant weights.On completion, the dried plants were recorded weights and computed relative growth rate (RGR) following [7].
Determination of GSH and GSSG Ratio
Estimation of reduced glutathione was done according to [8].The freezed plant tissue (1.0 g) was homogenized in Trichloroacetic acid (TCA) solution under cold condition.The extract was centrifuged at 15,000 × g for 10 min at 4˚C.The supernatant was taken in 0.1 M phosphate buffer (pH 8.0).The pH was adjusted by adding 5 M NaOH and 5 M EDTA.The final pH was recorded at 8.0.The assay mixture contained an aliquot of diluted supernatant, 0.1 M phosphate buffer (pH 8.0), and 0.1% (w/v) O-pthaldehyde.The assay mixture was incubated at room temperature for an hour.The fluorescence intensity was monitored at 420 nm (excitation) and 350 nm (emission).
For determination of oxidized form of glutathione (GSSG), the extract was diluted with 0.1 M NaOH.The diluted supernatant was incubated with 0.4 M N-ethylmaleimide (NEM) for 30 mins.The mixture was diluted by 0.1 N NaOH and adjusted to pH 12.0.An aliquot of the mixture was taken and reacted with same buffer as taken for GSH except 0.1 M NaOH.The fluorescence intensity was monitored at 420 nm (excitation) and 350 nm (emission).
The GSH and GSSG ratio were calculated from above data.
Assay of Catalase
Assay of CAT (EC 1.11.1.6) of the sample was done according to [9].1.0 g of tissue was homogenized in 0.5 mM potassium phosphate buffer (pH 7.0) and and centrifuged at 17,000 × g for 25 min at 4˚C.For in vitro assay of CAT, the reaction mixture containing 100 mM phosphate buffer (pH 7.0), 10 mM H 2 O 2 and equivalent amount of protein from enzyme source was added.The activity was determined by reading the decreasing absorbance at 240 nm (extinction coefficient 0.036/mM/cm).The activity of enzyme expressed as µmol of H 2 O 2 oxidised/min./mgprotein.
Estimation of Chlorophyll
The chlorophyll content was estimated according to [10].The metal treated samples was crushed thoroughly with 80% acetone and were centrifuged at 3000 g for 10 min at 4˚C (Hermle, Model No. Z323K).Supernatant was taken as the source of chlorophyll and was estimated by reading the absorbance at 645 and 663 nm with a UV-V Spectrophotometer (Cecil, Model No. CE7200).
Estimation of Glycine Betaine
Glycine betaine estimation was performed according to [11]. 1 g of plant sample was finely ground, dried and mechanically shaken with 20 ml of deionized water for 24 hour at 25˚C.The sample was filtered and diluted with 2 N H 2 SO4 and the filtrate is stored in freezer until analysis.The extract was cooled in ice (1 h) and reacted with potassium iodide reagent (0.2 ml) followed by vortexing.The sample was kept in 4˚C for 16 hours followed by centrifugation at 15,000 ×g for 15 min.the supernatant was taken, dissolved in 1,2-dichloroethane and incubated for 2 hours.The absorbance was read at 365nm with UV-visible spectrophotometer and expressed in mg/g dry mass.
In Gel Analysis of GPX, CAT and SOD
For in gel studies of isozymes of GPX, 80 μg of protein was run in a non-denaturing 10% polyacrylamide gel at 10 V/lane under cold condition [12].The detection of specific band of polypeptide was resolved in an incubation mixture 50 mM Potassium phosphate buffer, 0.5 mM O-dianisidine and 0.5% H 2 O 2 .
For in gel studies of CAT [13], 80 µg protein was loaded in non-denaturing 10% polyacrylamide gel at 10V/lane under cold condition.Then the gel was incubated in 0.05% H 2 O 2 and the bands were developed in solution containing 1% (w/v) potassium ferricyanide and 1% (w/v) ferric chloride sequentially.
For in gel staining of isozymes SOD [14], 80 μg of protein was loaded in a 10% native PAGE which was incubated in two successive buffers: 50 mM sodium phosphate (pH 7.5) with 2.45 mM NBT, then transferred to 50 mM sodium phosphate (pH 7.5) buffer with 26.5 mM TEMED and 26.5 μM NBT in dark for 40 min, after that the gel was exposed to fluorescent light for the development of bands.
Statistical Analysis
All the observations were recorded with three replications (n = 3) and the statistical analysis was performed by one-way ANOVA analysis, taking P ≤ 0.05 as significant.The data in the figures were presented as mean value ±SE.
Result and Discussion
From the present experiment, it is found that Marsilea plants are affected with Cd toxicity according to the doses of Cd as appeared from phenotypic responses.From the observation, a distinct decolorization of leaves was recorded under various Cd concentrations.The plants were achlorophyllous maximum at 200 µM of CdCl 2 as compared to control (0 µM) (Figure 3(b)).The ranges of chlorophyll loss were 21.48% to 41.18% through various concentrations of Cd against control and the loss of chlorophyll, however, was resumed by 1.0505 folds with 2 mM spermidine when compared to highest concentration of Cd (i.e.200 µM).
From the ongoing results, Marsilea plant showed a distinct loss of its growth and development under ongoing concentration of cadmium (Cd) (Figure 1).Within the stipulated period of experiment, plants, however, recorded to be subdued in some physiological performances.Still, the cellular responses attributing Cd sensitivity or even its tolerance are supposedly to be a bioindication.Therefore, the traits adhered to Cd toxicity for Marsilea needs to be clarified from the viewpoint of hyperaccumulation of heavy metals in such non-vascular plants.Initially, the Marsilea plants responded well to Cd toxicity with its foliage decolorization.Thus, the loss of chlorophyll could be indexed preliminary for Cd sensitivity.
The loss of chlorophyll under heavy metal has already been clarified in relation to both its synthesis as well as turnover.Regardless of plant species the inhibition of the biosynthesis of chlorophyll is related to rate-limiting steps by enzymes under Cd induction [15].It is evident that the retardation of chlorophyll could be experienced by the non-availability of magnesium (Mg) which is an essential metal for chlorophyll biosynthesis.As already documented in our earlier findings, Marsilea plants may undergo a serious wilting at acute Cd toxicity [16].Thus, in the present experiment, we studied one of the concomitant effects of Cd toxicity in relation to water potential of plants.Contextually, glycine-betaine, a frequently occurring osmolyte has been found to be overexpressed significantly in Marsilea plant (Figure 3(c)).The elevation of glycine-betaine in a dose-depen- dent manner of Cd as well as with the moderation of Cd undoubtedly is suggestive to involve itself as reliever of osmotic status.The increase in glycine-betaine was in the order of 1.8986 folds, 2.6149 folds and 3.8649 folds under 50, 100 and 200 µM of Cd as compared to control (0 µM).It is interesting to note that the elevation of glycine-betaine content was significantly moderated with Spd application by 30.94% folds against 200 µM of Cd.The retardation of chlorophyll accumulation is based on inhibition of enzymes (δ-aminolevalinic acid dehydratase, ALA dehydratase, protochlorophyllide reductase), non availability of adequate Mg ++ and Fe ++ , replacement of Mg ++ from tetrapyrorrole ring by Cd interference etc. [17].The loss of chlorophyll accumulation in leaves, correlated with RGR has strikingly drawn the attention for Marsilea plants as suitable of bioindication to Cd. Marsilea plants with its habit in aquatic environment has undergone a serious deficit of water stress that is indirectly evident from glycine-betaine accumulation.
The spermidine in the present experiment has proven its efficacy to stabilize the osmotic imbalances as that of proline and other osmolytes.The interaction of polyamine with other quarternary amines has synergistically operated in the water relation paths of plant and thereby the spermidine may compensate itself for recovery of water stress behaving as osmolyte.In few communications, it is reported that polyamines have emerged as a substitutions of osmolytes or otherwise behaving as any elicitors that reduces the osmotic shock by protecting the cell membrane [18].Use of polyamine can also alter the cellular permeability for access of more hydration under metal stresses in many crop species.Cadmium being a prooxidant in nature has been encountered in many angiospermic species for its efficacy of ROS generation, induced mechanism of antioxidation as well as evocation of some cellular traits adhered to such oxidative responses.However, to our best of knowledge, the findings of Cd toxicity with reference to analysis of antioxidation pathway, particularly, under interaction with polyamine would be a new citation with reference to Marsilea plant (an aquatic pteridophyte).On account of antioxidation pathways, we have already evaluated the different reactive oxygen intermediates or species to damage the Marsilea plant under stress.In our earlier communication, we have demonstrated the generation of various ROS in Marsilea plant as well as its concomitant mitigation with its application of spermidine.In the present communication, plants had exercised its antioxidation pathway with defolding of its genetic plasticity by differential gene expression.The latter includes the few antioxidation enzymes like superoxide dismutase (SOD), peroxidase (GPX), catalase (CAT).The activity of anti-oxidative enzymes is regarded as most suitable measures to justify the plant's potential under oxidative stress.Cd is a heavy metal undoubtedly proven its potential to induce oxidative stress in plants [19].Therefore, in the present experiment, we observed a significant variation in GPX and SOD activity in contrast to CAT.
The lysis of superoxide is the first enzymatic antioxidation which initially prevents the oxidative damages of membrane by superoxide.In our earlier communication, a rise of superoxide dismutase activity in vivo was studied in the Marsilea plant under cadmium (Cd) stress.This has been more proliferated with the identification of distinct variation in SOD gene expression according to Cd concentration.SOD being a multigene family have been reported to be expressed and separated by native polyacrylamide gel which resolves three distinct bands.Those bands are commonly featured in many crops are adhered to identified sensitivity of copper (Cu), zinc (Zn), magnesium (Mg), and iron (Fe) isozyme [20].In the present experiment also a distinct increase in Fe and Mn-SOD, particularly at highest concentration of cadmium (Cd) (200 µM) may clearly be indicative of superoxide sensitivity in Marsilea plant (Figure 4(c)).Any of isoforms of SOD so revealed from the present experiment has also been a selective trait for biomarker study from the cellular responses under cadmium (Cd), more so, when a dose-dependent relationship is obtained other heavy metal induction.Therefore, detoxification of superoxide and its induced efficiencies in Marsilea plants appears to confirm a functional relationship of cadmium toxicity in aquatic species.
Peroxidase (GPX) which is required to lysis the peroxide predominantly H 2 O 2 is essentially based on electrophilic reactions with the aid of some phenolic residues as guaiacol, in the present case.Thus, in response to Cd mediated oxidative stress, plants display its cellular expression of GPX in regulated manner.From the figures (Figure 4(a)) it appears a quite consistent increase in gene expression for GPX as compared to control under Cd stress as indicative of the fact for Cd detoxification with reference to H 2 O 2 .Unlike angiospermic species GPX appears with only a single band to show its expression when partial purified protein was run on native gel may be scored a possible bioindication for excess of H 2 O 2 accumulation in the tissues of Marsilea plant and thereby the activity is overexpressed.However, no such significant variation was noted according to concentration gradient of cadmium (Cd) applied to Marsilea.The establishment of GPX activity with a number of isozymic band in crop species might be linked to subcellularly expression of these genes according to plant's genotypic configuration.
To compensate the depleted redox, more towards the oxidised state of the tissues, plants develop some non-enzymatic antioxidants small molecule in nature.Glutathione is the tripeptide that is composed of glutamic acid-cysteine-glycine.This compound with its two alternative forms reduced (GSH) and oxidised (GSSG) is actively involved in scavenging of free radicals.In fact, glutathione is an integral constituent to replenish another major antioxidant, ascorbate in ascorbate-glutathione pathway.In the present experiment, the increase in GSH content (as depicted in GSH:GSSG in Figure 2) is clearly indicative of the fact of plant's oxidative stress under cadmium(Cd).Thus the ranges of glutathione ratio (GSH:GSSG) varied from 1.13 to 1.45 folds as compared to control against cadmium (Cd) concentration.The overaccumulation of ROS, particularly, the H 2 O 2 is lysed by ascorbate mediated peroxidation by ascorbate peroxidase (APX).In ascorbate-glutathione pathway, the reduced form of ascorbate is replenished by donation of electron from GSH, so that the former could be efficiently acting on H 2 O 2 [21].In a steady state pool of GSH and GSSG, interconversion is facilitated by glutathione reductase, the enzyme present with multiple forms however, in subcellular specific way.In a similar way, the activity profile of GR has already been mentioned in earlier communication where Marsilea plant had significantly overexpressed the GR isoforms under metal stress.Therefore, precisely, the decrease in GSH content may be suggestive for bioindication of oxidative stress either by directly scavenging of free radicals and improving the antioxidation pathway through regeneration of ascorbate.In more insight, the behaviour of glutathione in plant system with its decrease under metal stress may be suggestive for phytochelatin biosynthesis, the non-enzymatic protein with high chelating efficiency metal ion.It is more interesting to see that Marsilea plant has responded well with regards to polyamine metabolism to increase the glutathione (GSH) content.This is somehow related to sustain the GSH activity [22].
Catalase, the enzyme which has a similar activity pattern like, peroxidase, however, it does not involve any phenolic derivatives as electron donor.The Marsilea plants are characterised with a clear expression of CAT isozymic profiles as a function of Cd concentration compared to control.Though there recorded no such variation in band numbers but plants could vividly tune the genetic makeup to antagonize the Cd sensitivity with CAT expression.The downregulation of CAT activity as reported in many crop species under metal stress could be explained either impairment of de novo synthesis of the enzyme or the overload of cellular H 2 O 2 , if any, that creates toxicity to denature the proteins [23].In conext to bioindication the apparent no change of CAT expression to Cd toxicity could thus find hardly any possibility in Marsilea plants.However, the application with spermidine there is some fold increase in CAT expression in the present experiment (Figure 3(a) and Figure 4(b)).This is interesting to support the earlier postulation that deactivation of CAT by overaccumulation of H 2 O 2 and that could be erased by polyamine application.In an alternative approach to polyamine encountering tenance according to Cd concentration.The Relative Growth Rate (RGR) value ranges from 21.77% to 59.83% decrease as compared to control.However, a significant retrieval of RGR value may be indicative of the fact for sustenance of photosynthetic carbon assimilation and thereby allocations of carbon in different plant parts.Similarly, effect of metal toxicity and thereby its concomitant effect on dry matter accumulation has also been reported in other aquatic macrophytes like Pistia, Lemna etc. [24].The decline in photosynthetic carbon accumulation out of many related attributing factors like chlorophyll fluorescence, energy depletion and carbon reduction metabolism has been commonly featured in such plants.It reflects plant's non-sustenance under metal even of those studied parameters related to photosynthetic activities somehow could be no less for bioindication of the concerned metal [25].Still, spermidine may be granted as protecting the cellular membrane stabilizing photosynthetic mechineries and thereby sustaining the carbon acquisition which finally may be supportive for dry matter accumulation.In tropical vegetation, particularly in submerged aquatic condition where illumination becomes a limiting factor for photosynthetic ill performances may be aggravated with different toxic material in aquatic system [26].
Figure 1 .
Figure 1.Relaive growth rate (RGR).Effects of varying Cd concentrations (0, 50, 100, 200 μM of Cd salt) and 200 μM of Cd salt with 2 mM Spermidine (200 μM + 2 mM Spd) on growth of Marsilea plant.The data represented as mean value of observations (n = 3) ±SE and put by the vertical bars.Different letters indicate significant differences and similar letters indicate insignificant differences among mean values within each treatment (Student t-test, P ≤ 0.05).
Figure 3 .
Figure 3. (a) Changes in activity of Catalase (b) chlorophyll content (c) and glycine betaine content in Marsilea plant grown under varying concentrations (0, 50, 100, 200 μM) of Cd and 200 μM of Cd supplemented with 2 mM Spermidine (200 μM +2 mM Spd).The data represented as mean value of observations (n = 3) ±SE and put by the vertical bars in each bar.Different letters indicate significant differences and similar letters indicate insignificant differences among mean values within each treatment (Student t-test, P ≤ 0.05).theoxidative stress with a threshold concentration of H 2 O 2 that may not have any impart on CAT denaturation.From the distribution pattern of dry matter Marsilea plant under interference of Cd toxicity, the allometric analysis of growth was depicted in Figure1.It clearly shows that plant failed to maintain a steady growth sus- | 2018-12-07T20:38:55.531Z | 2014-05-06T00:00:00.000 | {
"year": 2014,
"sha1": "52bc49459a27199c9ea47aa88b30bf9773b17816",
"oa_license": "CCBY",
"oa_url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=45743",
"oa_status": "GOLD",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "52bc49459a27199c9ea47aa88b30bf9773b17816",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Biology"
]
} |
249822816 | pes2o/s2orc | v3-fos-license | Support for mechanical advantage hypothesis of grasping cannot be explained only by task mechanics
Successful object interaction during daily living involves maintaining the grasped object in static equilibrium by properly arranging the fingertip contact forces. According to the mechanical advantage hypothesis of grasping, during torque production tasks, fingers with longer moment arms would produce greater normal force than those with shorter moment arms. Previous studies have probed this hypothesis by investigating the force contributions of individual fingers through systematic variations (or perturbations) of the properties of the grasped handle. In the current study, we examined the validity of this hypothesis in a paradigm wherein the thumb tangential force was constrained to a minimal constant magnitude. This was achieved by placing the thumb on a freely movable slider platform. The total mass of the handle was systematically varied by adding external loads directly below the center of mass of the handle. Our findings suggest that the mechanical advantage hypothesis manifests only during the heaviest loading condition when a threshold difficulty is reached. We infer that the support for the mechanical advantage hypothesis depends not only on the physical parameters but also on the individual ability to manage the task.
of tasks the applicability of MAH depends on. Therefore, it is necessary to investigate whether the applicability of mechanical advantage is task-specific and which kind of tasks/scenarios support the MAH.
In our previous study 20 , we had attempted to investigate the applicability of MAH by introducing torque changes to the handle. Rather than implementing external torque changes by suspending the load at a distance from the center of mass of the handle, we incorporated the torque changes by reducing friction between the thumb platform and the handle interface. This was made possible by placing the thumb on a slider platform that could freely translate vertically over a railing. In this way, the tangential force produced by the thumb was kept constant and less than the virtual finger 21 (an imaginary finger whose mechanical output is equal to the combined output of the individual fingers except for the thumb). This had resulted in introducing a residual pronation torque to the handle. Since the instruction was to maintain the handle in static equilibrium, a compensatory supination torque was required to avoid the tilt caused as a result of the residual pronation torque. Ulnar finger normal forces and thumb tangential forces are major contributors to this compensatory supination torque. However, by our design, it was not possible to increase the tangential force of the thumb as it had to hold the slider platform steady at the HOME position (midway between middle and ring fingers). Therefore, only the normal forces produced by the ulnar fingers became the primary source of this compensatory supination torque.
Between the ulnar fingers, the little finger has a larger moment arm for normal force when compared with the ring finger. Hence it was expected that the little finger would produce greater normal force. Contrary to this expectation, ring and little fingers were found to share comparable normal forces while grasping the handle of mass 0.535kg 20 . Therefore, in the current study, we expected that MAH would be corroborated if the mass of the handle is increased systematically by adding different external loads. As per the design of this grip device, the tangential force of the thumb was constrained to a constant minimal magnitude. So, with an increase in the mass of the handle, only the tangential force of the virtual finger increases, which is accompanied by an increase in the residual pronation torque. As a corrective effect, the magnitude of compensatory supination torque required to be produced would also increase. Hence, we expected that with a systematic increase in the mass of the handle, the little finger would produce correspondingly higher normal force than the ring finger during compensatory supination torque production.
In line with such an expectation, in the current study, the mass of the handle was systematically increased by employing external loads of mass 0.150, 0.250, 0.350, and 0.450 kg as different experimental conditions. Our previous study had shown comparable normal forces between the ulnar fingers for a handle mass of 0.535kg 20 . In another study on investigating the role of grasp force magnitude during multi-finger prehension 22 , by suspending an external load of mass 0.160 kg eccentrically at various distances under a handle of mass 0.415 kg, the contribution of digit forces in terms of percentage of total normal force of the virtual finger was examined. It was found that even for a small external torque of 0.14 Nm, during natural grasping, the percentage share of the little finger normal force (approx. 45%) was greater than the ring finger normal force (approx. 33%).
The total mass of the handle (0.450 kg) with the minimum external load (0.150 kg) used in our current study was approximately close to the total mass of the grip device in the afore-mentioned multi-finger prehension study 22 in which MAH was supported. Hence, we hypothesized that the mechanical advantage hypothesis would be supported for all our experimental conditions (0.150, 0.250, 0.350, and 0.450 kg) of external load starting with a minimal mass of 0.150 kg (Hypothesis H1).
Methods and materials
Participants. Twelve young, healthy right-handed male volunteers participated in this study. The mean and standard deviation of the participant's age, height, weight, hand length, and width were measured as follows: Age: 26.75 ± 3.9 years, Height: 172.02 ± 5.7 cm, Weight: 75.21 ± 17.7 kg, Hand-length: 18.93 ± 1.1 cm, and Handwidth: 8.92 ± 0.7 cm). Only participants with no history of neurological diseases and musculoskeletal injuries were chosen to participate in this experiment.
Ethics approval. The Institutional Ethics committee of the Indian Institute of Technology Madras approved the experimental procedures (Approval Number: IEC/2021-01/SKM/02/05). All the participants gave written informed consent according to the procedure approved by the institutional ethics committee of IIT Madras before the beginning of the experiment. The experimental sessions were conducted by strictly adhering to the procedures approved by the Institutional Ethics Committee of the Indian Institute of Technology Madras.
Experimental setup.
A five-finger prehensile handle was designed and custom-built for the experiment, as shown in Fig. 1 (refer Supplementary Video S6). The handle consists of a vertical railing of length 13.6 cm fitted on the thumb side to mount the slider platform, thus allowing its vertical translation along the railing. The handle was suspended from wooden support using a nylon rope housed within a hollow PVC pipe to restrict any undesirable lateral movement while it was suspended. The present study involves a prismatic precision grip of the handle of mass 0.450 kg. The mass of the slider platform was 0.100 kg. Thus, restricting the thumb tangential force to approximately 1 N. Five six-axis force/torque sensors (Nano 17, Force resolution: Tangential: 0.0125 N, Normal: 0.0125 N, ATI Industrial Automation, NC, USA) was mounted on the handle to measure the forces and the moments exerted by the individual fingers and thumb. For the thumb alone, the force sensor was placed on the slider platform, which enabled the smooth translation of the platform over the railing fitted on the handle's thumb side.
A laser displacement sensor (resolution, 5 μm; OADM 12U6460, Baumer, India) was mounted on a square flat piece made of acrylic, and the assembly was fitted on top of the handle towards the thumb side. The displacement sensor provided the displacement data of the thumb platform in the vertical direction while it translated along the vertical railing. On top of the handle, another acrylic block was placed in the anterior-posterior direction, www.nature.com/scientificreports/ which held an intelligent 9-axis absolute orientation sensor (Resolution: 16bits, Range: 2000°/s, Model: BNO055, BOSCH, Germany). This IMU (Inertial Measurement Unit) sensor provided the orientation data of the handle after appropriate pre-processing of the raw data. A spirit level was also mounted on the acrylic block towards the participant's side of the handle to aid the participant in ensuring the handle's vertical orientation while it was being held. Two horizontal lines were drawn on the participant's side of the handle, one at the center of the thumb platform and another midway between middle and ring fingers on the handle frame. The participants were asked to hold the handle in a way such that the two lines were aligned. Thirty analog signals from the force/torque sensors (5 sensors × 6 components) and single-channel analog laser displacement data were digitized using NI USB 6225 and 6002 at 16-bit resolution (National Instruments, Austin, TX, USA). This data was synchronized with four channels of processed, digital data from the IMU sensor. Sampling rates of all data were set to 100 Hz. Experimental procedure. Participants were asked to wash and clean their hands with soap, towel-dry and then sit comfortably on a wooden chair with their forearm resting on the table. The right upper arm was abducted at approximately 45° in the frontal plane, flexed 45° in the sagittal plane, and elbow flexed at approximately 90°. The natural grasping position can be achieved by supinating the forearm at 90°. The movements of the forearm and wrist were constrained by fastening with a Velcro strap to the tabletop.
The experiment involved four conditions. For these conditions, external loads of mass 0.150, 0.250, 0.350, and 0.450 kg were added at the bottom of the handle, i.e., exactly under the center of mass of the handle. A computer monitor displayed a solid horizontal target line with two dashed lines at 0.2 cm above and below the target line. These dashed lines represented an acceptable error margin. The target line shown on the monitor corresponded to the 'HOME position' of the thumb. The trial began only after the participant could hold the thumb platform steadily by aligning the horizontal line on the thumb platform to the line drawn midway between the middle and ring finger. Thumb displacement data measured using a laser displacement sensor was shown as feedback on the participant's screen. Once the trial started, the participants were required to keep the slider platform in the same position (HOME), by aligning the horizontal line on the platform to the line drawn between the middle and the ring fingers. Precise alignment of the two lines during the task essentially meant that the feedback line traced the actual target line. Acceptable performance or task success during the trial was defined to be within an error margin of ± 0.2 cm as mentioned above. Throughout the trial, the handle had to be maintained in static equilibrium in the frontal plane for all the external loads. This was ensured by having the bubble of the spirit level at the center throughout the trial. www.nature.com/scientificreports/ For each experimental condition, 25 trials were performed. Each trial lasted for six seconds. One minute of break was provided between trials. After every twelve trials, ten minutes of break was provided to eliminate the effect of over exertion, if any. The experiment was held in two separate sessions. Each session included two external load conditions with thirty minutes of break between conditions. The order of presentation of these two sessions was counterbalanced across all participants (see Supplementary Table S7). Six of the participants performed with the weight of 0.150 kg followed by 0.350 kg in their first session. The other six participants performed with the weight of 0.450 kg followed by 0.250 kg in their first session (refer Supplementary Note and Fig. S8).
Data analysis.
The data were analyzed offline using MATLAB (Version R2016b, MathWorks, USA). Force/ Torque data and laser displacement data of thumb were lowpass filtered at 15 Hz using second-order, zero phase lag Butterworth filter. In each trial, the data between 2 and 5 s were taken for analysis to avoid start and end effects.
The normal and tangential force data collected from the individual fingertips and the thumb were averaged over the time samples, trials, and participants for each condition separately, and the standard errors of the mean were computed.
Statistics. All Statistical analyses were performed using R. Two-way repeated-measures ANOVA was performed on the average normal force with the two factors being loads (4 levels: 0.150, 0.250, 0.350, 0.450 kg) and fingers (4 levels: index, middle, ring, little). Since the thumb normal force is dependent on the normal forces of the index, middle, ring, and little fingers, a separate one-way repeated measures ANOVA was performed on the thumb normal force with the factor as loads (4 levels: 0.150, 0.250, 0.350, 0.450 kg). Another two-way repeatedmeasures ANOVA was performed on the average tangential force with the factors being loads (4 levels: 0.150, 0.250, 0.350, 0.450 kg) and fingers (5 levels: index, middle, ring, little, thumb). Sphericity test was done on the data, and the number of degrees of freedom was adjusted by Huynh-Feldt (H-F) criterion wherever required. Pairwise post hoc tukey tests were performed to examine the significance within factors. Further, we performed equivalence tests for all the non-different pairs. The statistical equivalence was tested using the two one-sided t-tests (TOST) approach 23 for a desired statistical power of 95%. The smallest effect size of interest (SESOI) was chosen as the equivalence bounds.
Results
Task performance. All the participants were able to trace the horizontal target line shown on the monitor within the error margin during all four loading conditions (0.150, 0.250, 0.350, and 0.450 kg) as shown in Supplementary Fig. S1. Root mean squared error (RMSE) on the thumb displacement data was computed for the four different loads and is shown in Table 1. Throughout the trial, the participants attempted to maintain the handle in static equilibrium during all four loading conditions by positioning the bubble at the center of the bull's eye. Therefore, the average net tilt angles for the different loading conditions were found to be less than one degree, as shown in Table 1. Thus, the participants could trace the target line with minimal vertical displacement and minimal tilt during all trials in all loading conditions. Normal forces of the individual fingers and thumb during different loads. The normal forces of the ring and little fingers were found to be statistically comparable with the addition of external loads of 0.150, 0.250, and 0.350 kg. However, when an external load of 0.450 kg was added, the little finger normal force was found to be statistically (p < 0.0001) greater than the ring finger normal force and thus supporting MAH.
We observed a main effect of the factor loads (F (2.73, 30.03) = 8.571; p < 0.001, η 2 p = 0.43) when a two-way repeated-measures ANOVA was performed on the absolute normal force with the factors as loads and fingers. It was found that the normal forces of the individual fingers (excluding the thumb) under the loading condition of 0.450 kg were statistically (p < 0.001) greater than the normal forces produced under loading conditions of 0.150 and 0.250 kg. Further, the normal forces produced with a load of 0.350 kg were statistically (p < 0.05) greater than the normal force produced with a load of 0.150 kg. In addition to this, there was a significant effect of the fingers (F (3, 33) = 181.921; p < 0.001, η 2 p = 0.94) corresponding to a statistically (p < 0.001) higher normal force by the little finger than the index, middle and ring fingers on loading (refer Fig. 2). Also, the normal force of the ring finger was statistically greater than the index and middle fingers.
Tangential forces of individual fingers during different loads.
In the case of the tangential forces, a two-way repeated-measures ANOVA with the factors as loads (F (3, 33) = 390.575; p < 0.001, η 2 p = 0.97) and fingers (F (4, 44) = 44.205; p < 0.001, η 2 p = 0.80) showed significant effect of the factor loads corresponding to a statistically greater tangential force with the use of 0.450 kg than with the use of 0.150 kg (p < 0.001), 0.250 kg (p < 0.001) and 0.350 kg (p < 0.05). In addition, a significant effect of the factor fingers confirmed that the little finger tangential force was statistically (p < 0.001) greater than the index, middle, and ring finger tangential forces on loading.
In addition to this, on performing the pairwise post hoc tukey test, it was confirmed that the little finger tangential force (0.150 kg: 2.03 N, 0.350 kg: 2.80 N) was non-different from the ring finger tangential force during the employment of 0.150 kg (1.64 N) and 0.350 kg (2.26 N). TOST procedure performed on these dependent pairs confirmed that the comparisons were not statistically equivalent. However, little finger tangential force (0.450 kg: 3.22 N; 0.250 kg: 2.54 N) was statistically greater than the ring finger tangential force with the addition of load 0.450 kg (Ring: 2.52 N, p < 0.01) and 0.250 kg (Ring: 1.92 N, p < 0.05) (refer Fig. 4).
Further, interaction effect of loads x fingers was significant (F (12, 132) = 5.857; p < 0.001, η 2 p = 0.34) reflecting the fact that the little finger tangential forces (3. Little finger normal force (represented in black) was found to be statistically (p < 0.0001) greater than the ring finger normal force (represented in dark shaded grey) in the 0.450 kg loading condition. The ring and little finger normal forces were found to be statistically equivalent under remaining loading conditions. The columns and bars indicate means and standard errors of means. (Note: *** represents significance of less than 0.0001).
Discussion
The main objective of the present study was to investigate whether the applicability of MAH is dependent solely on task parameters like the total weight of the handle, moment arm of the suspended load, or is it affected by factors beyond these physical parameters. We tested the manifestation of MAH in our paradigm by systematically increasing the weight of the handle by adding external loads at the bottom of the handle below its center of mass.
The weight of our current handle with the minimal loading condition exceeded the weight of the handle in our previous study 20 . So, we hypothesized that MAH would be supported in all our loading conditions. Contrary to our expectation, we found that the ring and little finger normal forces were statistically comparable with the addition of 0.150, 0.250, and 0.350 kg loads. However, we noticed that MAH was supported for the external load of 0.450 kg. We discuss the implications of these findings in the following paragraphs. Ulnar finger normal forces were examined under four different external loading conditions i.e., 0.150, 0.250, 0.350 and 0.450 kg. In our previous study, with a similar unsteady thumb platform as used in the current study, the ulnar fingers produced comparable normal forces for a handle mass of 0.535 kg 20 . With the minimal external load of 0.150 kg, the total mass of the handle would become 0.600 kg (above 0.535 kg that was used in our previous study). So, our expectation was that MAH would be supported for all the loading conditions. In contrast to our expectation, the little finger produced statistically comparable normal forces to the ring finger for 0.150 kg load. With further increase in the external loadings with masses 0.250 and 0.350 kg, the ulnar fingers continued exhibiting statistically comparable normal forces. However, this trend did not hold true when the external load was increased to 0.450 kg, wherein the little finger exerted a statistically greater normal force than the ring finger.
Unlike the other studies on grasping with eccentrically loaded manipulanda, the current study involved maintaining a constant minimal tangential force by the thumb (approximately 1 N) at different loading conditions (see Supplementary Fig. S4). Therefore, with an increase in the total mass of the handle by adding an external load of 0.450 kg (comparatively larger than the mass of other loads employed in the present study), the virtual finger had to share greater tangential force to maintain the vertical equilibrium causing a greater pronation torque (counter-clockwise direction from the participants viewpoint). This, in turn, necessitated progressively greater compensatory supination moments to maintain the rotational equilibrium. Since the design of the handle prevents the thumb from contributing further to the supination moment, the ulnar fingers are required to compensate with their normal forces. In this regard, instead of exerting comparable normal forces, the little finger tends to produce greater normal force than the ring finger, thus supporting MAH. What could be the reason for this behavior of ulnar fingers with the addition of the heaviest external load as compared to the other loads?
The next natural question is whether the applicability of mechanical advantage depends on employing heavy masses while grasping. If that had been true, then MAH would have been supported when a large external load of www.nature.com/scientificreports/ 2 kg was suspended at a distance of 1.9 cm from the center of mass (COM) of the handle (for a torque magnitude of − 0.375Nm) in the grasping study investigating the contribution of peripheral and central fingers 17 . However, they found that ulnar fingers normal forces were non-different for this large load. Eventually, with a systematic increase in the compensatory supination torque magnitude (0.750, 1.125, and 1.50Nm), the little finger gradually started producing more normal force than the ring finger and validated the MAH. In another study investigating the role of grasp force magnitude during multi-finger prehension 22 , when an external load of mass 0.160 kg (much lesser than 2 kg) was suspended from a handle of mass 0.415 kg eccentrically at a distance of 8.9 cm from COM, MAH was supported. This result triggers another question as to whether the support for MAH depends on suspending the external load at large moment arms from COM of the handle? From the results of the multi-finger prehension study 22 , it was apparent that the applicability of mechanical advantage depends on using higher moment arms for the external load. Our current result forces us to re-evaluate this conclusion, as MAH was supported even when an external load of 0.450 kg was suspended directly below COM of the handle (having zero moment arm). This suggests that apart from the mass of the external load and moment arm of the suspended load, a latent factor governs the applicability of mechanical advantage. In other words, our data suggest that the applicability of the principle of mechanical advantage in biological systems depends not only on the mass or moment arm of the suspended load or both but also on more individual-specific components such as the individual ability of managing a task.
In the prehension study investigating the effect of grasp force magnitude 22 , the demanding aspect of the task might have been using an unusually high moment arm, thus allowing MAH to manifest. In a recent study 24 using a handle similar to the current study, the task was to trace trapezoid and inverted trapezoid patterns by displacing the thumb platform 1.5 cm above and below the HOME position. The mechanical advantage hypothesis was supported during the inverted trapezoid condition when the movable thumb platform was held steady while tracing the static portion 1.5 cm below the HOME (at the level below the center of the ring finger sensor). The carpometacarpal joint (CMC) of the thumb has a restricted range of motion in the downward direction 25 (flexion or radial adduction). Therefore, the task of maintaining the handle in static equilibrium with a movable thumb platform at the level below the center of the ring finger sensor might have been quite difficult to perform. We suggest that this biomechanical constraint which imparted difficulty in accomplishing the task may have caused the little finger to share greater normal force than the ring finger.
Following a similar rationale, in the current study, perhaps the task became fairly demanding, as the requirement was to produce compensatory supination torque with only the normal forces of the ulnar fingers. This was a direct effect of restricting the thumb tangential force to a constant minimal magnitude and essentially rendering it much less consequential in the supination torque production. Simultaneously, this also amplified the role of the ulnar fingers in the compensatory torque production. For the heaviest external load of 0.450 kg, the magnitude of fingertip forces required were much higher than the magnitude of fingertip forces in the relatively easier loading conditions i.e., for the 0.150, 0.250, 0.350 kg loads. According to a study that investigated the use of mechanical advantage in multi-finger torque production 15 , MAH is employed to reduce the total effort or force produced for the task without compromising to produce the required moment. Along similar lines, we speculated that to www.nature.com/scientificreports/ avoid higher exertion (higher force levels) of the ulnar fingers by sharing comparable and greater forces (due to tangential force restriction in the thumb), the participants used the mechanical advantage of the little finger to more efficiently manage the grasp after a threshold difficulty was reached. As per the instruction to participants in the current study, the participants were allowed to continue performing the trials only when they did not feel over exertion or pain. Therefore, to successfully complete the task, without indulging in straining the ring and little fingers, participants would have chosen to minimize the total force (or effort) in the ulnar fingers by employing the principle of mechanical advantage. Also, from literature 26 , it was found that the exclusion of the little finger from the overall grip pattern decreased overall grip strength by 33%, and exclusion of the ring finger from the overall grip pattern decreased overall grip strength by 21%. This shows that, among the ring and little fingers, little finger contribution is fairly higher than the ring finger when there is an increase in overall grip force requirement. Thus, an addition of heavier external load, which in turn increases overall grip force requirement, might have caused the little finger to contribute significantly greater than the ring finger. From an anatomical perspective, the little finger has an additional group of intrinsic muscles (hypothenar muscles) compared to the ring finger, which could be a supporting factor to employ little finger than ring finger when the task becomes difficult or demanding.
To further elucidate our result, it is important to emphasize that in the previous studies on object manipulation that introduced external torques to the handle, there was no restriction in the distribution of tangential forces among the fingers and the thumb while grasping. The tangential force of the thumb would have greatly contributed to the supination torque in addition to the normal forces of the ulnar fingers. This was evident from the previous study 17 , where the thumb tangential force increased during the supination efforts. Hence, the participants might have been able to share comparable normal forces by the ring and little fingers even with a larger load (2 kg) and with a greater torque magnitude of 0.375 Nm than in the current study.
In contrast, in our current study, the tangential force of the thumb was restricted to approximately 1 N by placing the thumb on a freely movable platform of mass 0.100 kg for all the loading conditions. This essentially creates a situation wherein the ulnar fingers are forced to contribute greatly to the compensatory supination torque. We posit that such a constraint in the tangential force contribution of the thumb is most exemplified under 0.450 kg external load. Note that this load is much less than the 2 kg load where MAH was not supported. We strongly believe that individual-specific components such as the individual ability of managing a task that is difficult to accomplish might have encouraged the use of the mechanical advantage of the little finger. The difficulty faced by the performer is not dictated merely by the external loads and torques but also due to the biomechanical constraint as in the previous study 24 , or it could be due to the individual's ability of managing a task.
In the present study, the participants could complete the task under loads of 0.150, 0.250, and 0.350 kg (resulting in the supination torques of 0.22, 0.23, and 0.25 Nm, respectively), which might not have been difficult enough than with a load of mass 0.450 kg (refer Fig. 5 and Supplementary Fig. S5). As under a load of mass 0.450 kg, the task of maintaining the static equilibrium of the handle by producing greater and comparable forces by the ulnar fingers might have been difficult. Therefore, for successful completion of the task, little finger having both www.nature.com/scientificreports/ mechanical and anatomical advantage would have produced greater force than ring finger. Whereas, due to the task simplicity, comparable normal forces would be produced by the ulnar fingers, with the addition of 0.150, 0.250, and 0.350 kg loads. In a study on producing maximum voluntary contraction MVC 27 , when the target finger is the little finger, the force produced by the little finger was found to be well above the adjacent ring finger force. Analogously, since our study involved very strong voluntary grasping of the handle for the external loading condition of 0.450 kg, greater activation of the little finger motor units could have caused a greater force in the little finger than the ring finger, thus enabling optimal distribution of forces within the ulnar fingers in line with the MAH. This is also supported by the study 28 wherein they found that the magnitude of force produced due to the little finger motor units under the ring finger was almost two-thirds of the force produced under little finger during voluntary grasping. Since we have not measured the actual activation pattern of the individual motor units, further research is required to tease out the underlying neural mechanisms through which mechanical advantage is manifested. Taken together, our results suggest that the applicability of the mechanical advantage hypothesis depends not only on the torque requirement or the total mass of the object but also on the individual's ability to manage the task.
Concluding comments
The current study was performed to validate whether the mechanical advantage hypothesis is task-specific and investigate the kind of tasks that lend support to the MAH. A five-finger prehensile handle with an unsteady thumb platform was utilized for analysing the applicability of MAH. The mass of the handle was systematically increased by using additional external loads of mass 0.150, 0.250, 0.350, and 0.450 kg. Ulnar fingers exerted comparable normal forces with the external loads of mass 0.150, 0.250, and 0.350 kg. However, the mechanical advantage hypothesis was supported with a load of 0.450 kg. With the addition of greater mass, under the constraint of using minimal thumb tangential force, establishing static equilibrium by the ulnar fingers becomes progressively more challenging. Therefore, we conclude that MAH as a strategy utilised in human grasping is not only employed when there is any change in the mass of the grasped handle or moment arm of the suspended load but also when a certain threshold difficulty is reached during the task. | 2022-06-19T06:17:37.873Z | 2022-06-17T00:00:00.000 | {
"year": 2022,
"sha1": "d961648f5b9f6d09cb9222b3bab358a3a060cdfa",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "PubMedCentral",
"pdf_hash": "c0f9843e3ffb6c2d9a962e6ee1aaf11d855a1127",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
219208420 | pes2o/s2orc | v3-fos-license | Tuberculosis
This article reviews the published literature on tuberculosis from September 2012 to August 2013 and describes important advances in tuberculosis epidemiology, microbiology, pathology, clinical pharmacology, genetics, treatment and prevention.
Introduction
A number of important contributions have been made to the literature on tuberculosis (TB) in the past year. This article reviews the literature and summarises the most important contributions in the following areas: epidemiology, microbiology, pathology, clinical pharmacology, genetics, treatment and prevention.
Epidemiology
Several important observations have been made about TB transmission in the past year. In an innovative investigation from South Africa, WOOD et al. [1] studied social mixing patterns among persons living in a township with a high incidence of TB. 571 persons completed a detailed 24-h diary of the duration and number of persons with whom they spent time indoors, as well as the location of the encounter. Surprisingly, most contacts were generated by sharing air on public transport (27.1%), while time spent sharing air in households generated 25.1% of contacts and time spent in the workplace and community buildings generated only 8% and 6% of contacts, respectively. There was substantial variation by age, with school contacts predominating in persons aged ,19 years. The low proportion of work contacts may reflect high unemployment in the township studied. These results indicate that social mixing is far from random, and that public transport may be a high-risk location for TB transmission.
A second study by this group further explored the relationship between public transport and TB transmission using a modified Wells-Riley equation to model the risk of transmission on minibus taxis, buses and trains [2]. Carbon dioxide concentrations experienced by riders were sampled using a portable continuous sampling device that sampled at 5-s intervals and the number of riders per trip was assessed. Carbon dioxide levels were 2.5 to 4.5 times higher in transport vehicles compared to outdoor air (mean concentration 1800 ppm in minibus taxis, 1150 ppm in buses and 1000 ppm in trains, compared with 410 ppm in outdoor air). Because nearly all rebreathed carbon dioxide comes from that exhaled by other riders, these levels were used as a surrogate for rebreathed air. Using the Wells-Riley equation, the risk of infection was estimated to be highest in minibus taxis; the model predicted that it could be as high as 3.5% to 5% per year. Thus, public transport must be seen as an important potential location for TB transmission.
A second topic on which an important contribution was made was the use of whole-genome sequencing of Mycobacterium tuberculosis in tracking TB transmission. In an elegant analysis of 86 isolates collected over a 13-year period, ROETZER et al. [3] determined that traditional genotyping failed to discriminate two distinct outbreaks with closely related strains, whereas whole gene sequencing clearly showed two distinct chains of transmission. This result was more closely aligned with the results of contact investigations. Moreover, the investigators were able to track single nucleotide changes in M. tuberculosis isolates over time in successive human hosts. They conclude that the organism accumulates mutations at a rate of approximately 0.4 mutations per genome per year. Thus, whole-genome sequencing provided better discrimination and correlated more closely with contact tracing information than IS 6110 or mycobacterial interspersed repetitive unit variable number tandem repeat typing. With the increasing availability and decreasing cost of whole-genome sequencing technology, it appears likely that we will see more advances in our understanding of the molecular events associated with TB transmission in the coming years.
Microbiology
CHIGUTSA et al. [4] used the number of days it took for individual patient sputum specimens to become positive in Mycobacteria Growth Indicator Tube (Beckton, Dickinson and Co., Sparks, MA, USA ) as a surrogate for initial organism burden. They studied a prospective cohort of 144 patients being treated for TB to describe the time course of reduction of bacteria in sputum from patients [4]. The authors showed that the data were best modelled by a bi-exponential decline, confirming the results of studies using serial dilution on solid agar, a time-consuming technique that is difficult to standardise. These results imply that there are two populations of bacteria in the sputum of patients with pulmonary TB, and the authors estimate that the rapidly killed subpopulation predominates early but is rapidly eliminated, with a half-life on treatment of only 1.8 days. In contrast, the slowly killed subpopulation has a half-life on treatment of 39 days and, therefore, predominates after week 2. The significance of these observations are that such modelling, using readily available data, may help design and test TB drug regimens that could target both populations, thus increasing therapeutic efficiency.
Pathology
It has long been believed that M. tuberculosis organisms persist in an inactive or latent state in specific loci in persons with a positive skin test but no evidence of disease; however, this fact has never been directly demonstrated. A study by BARRIOS-PAYÁ N et al. [5] provides direct evidence of this phenomenon. The investigators studied lung specimens from 49 persons in Mexico who died of causes other than TB. Tissue was hybridised with M. tuberculosis-specific probes to identify sites that harboured inactive TB using in situ PCR, real-time PCR and spoligotyping. 43 (88%) out of 49 persons had evidence of inactive TB in at least one location; M. tuberculosis DNA was identified in 36 lung samples, 35 spleen samples, 34 kidney samples and 33 liver samples, but not all subjects were positive at all sites. Using in situ PCR, a variety of cell types were found to harbour M. tuberculosis, including the endothelium, pneumocytes and macrophages in the lung, Bowman's parietal cells and convoluted proximal tubules in the kidney, macrophages and sinusoidal endothelial cells in the spleen, and Kupffer cells and sinusoidal epithelial cells in the liver. Mycobacterial 16S ribosomal RNA was also isolated, demonstrating that the mycobacteria were viable. Thus, in a TB-endemic area, M. tuberculosis was found to persist in multiple locations in the majority of persons without known active TB. Interestingly, none of the M. tuberculosis organisms demonstrated were associated with granulomas, inflammatory infiltrates or fibrosis. These results challenge the traditional assumption that latent M. tuberculosis persists largely, if not exclusively, in isolated granulomatous foci. Moreover, dissemination of M. tuberculosis after primary infection is most likely widespread, and the locus of reactivation TB disease may be more dependent on the local immune milieu than on the site of primary infection.
Clinical pharmacology
The relationship between serum anti-TB drug concentrations and clinical outcomes of TB treatment has puzzled investigators for the past two decades. Patients with serum drug concentrations that are ''subtherapeutic'' often respond well to treatment, while those with high levels may still fail to convert their sputum cultures. PASIPANODYA et al. [6] studied the serum drug concentrations of isoniazid, rifampin and pyrazinamide, and correlated them with TB treatment outcomes of 142 patients with drug-susceptible TB. Substantial variability in serum drug levels was seen among patients with standard drug dosing. Overall, 15 out of 142 patients failed to convert their sputum cultures after 2 months of therapy and 36 had a poor clinical outcome (two failed to convert their sputum cultures by the end of therapy, 19 relapsed and 15 died). Classification and Regression Tree analysis defined area under the curve cut-offs using the data, and showed a modest correlation between having one drug below the threshold and a poor long-term outcome (OR 2.65, 95% CI 0.99-7.18). However, when two or more drugs were below the cut-off, the odds ratio for a poor long-term outcome was 7.57 (95% CI 2.57-22.3). These cut-offs and their correlations with clinical outcomes need to be confirmed prospectively, but they promise to clarify some of the mystery about the relationship between serum drug concentrations and clinical outcomes.
Genetics
The clinical relevance of strain variation among M. tuberculosis isolates has long been a matter of dispute. While some TB strains appear to display differential growth kinetics or virulence in vitro and in animal models, these results have not, in general, predicted clinical transmission or disease manifestations. A new study by FORD et al. [7] examined strains of the Euro-American and Beijing lineage for genetic clues that might explain the increased rates of emergence of drug resistance that have been observed in Beijing strains. The overall rate of mutations was 0.3-0.5 mutations per genome per year, corroborating the results of ROETZER et al. [3]. Evolution of mutations was studied in vitro either with or without antibiotic pressure. Beijing family strains had higher mutation rates and a higher frequency of evolution of antibiotic resistance under pressure than Euro-American isolates. These results provide a biological mechanism for the increased frequency of evolution of drug resistance in Beijing family strains.
Treatment
Another study on the emergence of drug resistance was one of the clinical highlights of 2013, although it was not good news. DALTON et al. [8] performed a prospective study of the prevalence of and risk factors for drug resistance among 1278 patients in eight countries (Estonia, Latvia, Peru, the Philippines, Russia, South Africa, South Korea and Thailand) between 2005 and 2008. The authors demonstrated that, in addition to being resistant to isoniazid and rifampin, 49% of patient isolates were also resistant to ethambutol and streptomycin, while 43.7% showed resistance to at least one second-line drug, 12.9% were resistant to fluoroquinolones and 6.7% were extensively drug-resistant (XDR)-TB [8]. The strongest risk factor for XDR-TB was previous treatment with a second-line injectable drug. Other significant risk factors were female sex and not being in a Green Light Committee-approved treatment programme. These results confirm the global nature of the multidrug-resistant (MDR)-TB epidemic and indicate that we continue to manufacture drug resistance at an alarming rate. Moreover, quality approved treatment programmes (at least as assessed by Green Light Committee approval) had 55% lower rates of XDR-TB.
A clinical trial of linezolid for patients with XDR-TB was another treatment highlight of the year. LEE et al. [9] treated 39 patients with smear-positive XDR-TB with either 300 mg per day or 600 mg per day of linezolid in addition to an optimised regime of third-line drugs in an immediate versus delayed treatment design. The immediate treatment group had a remarkable 79% sputum culture conversion, compared with 35% in the delayed therapy arm (p50.001). However, 82% of patients had clinically significant adverse events attributed to linezolid (four had dose reduction and three discontinued therapy), and isolates from four out of 39 patients developed resistance to linezolid. These results demonstrate that linezolid, while relatively toxic, may have a role to play in the treatment of XDR-TB and, possibly, some cases of MDR-TB. How to best use the drug remains to be determined. Clearly, the addition of linezolid to an already ineffective regimen quickly generates drug resistance and is not recommended.
A second study also focused on the efficacy and toxicity of linezolid in MDR-TB treatment. The individual patient meta-analysis of SOTGIU et al. [10] describes experience with linezolid as part of a multidrug regimen in 121 patients with MDR-TB. Clinical responses were encouraging (99 out of 121 subjects had successful outcomes), but toxicity was substantial. 59% of patients had adverse events, and 54 such events were serious. These investigators saw increased rates of adverse events when .600 mg per day was given. Clearly, linezolid is not a first-line drug for treatment of MDR-TB, and further studies are needed to identify the best ways to use this drug effectively to cure XDR-TB.
An exciting new antimycobacterial agent, PA-824, was studied in an early bactericidal activity study by DIACON et al. [11]. Combinations of PA-824 with pyrazinamide, PA-824 with moxifloxacin plus pyrazinamide and PA-824 plus bedaquiline were compared to bedaquiline alone, bedaquiline plus pyrazinamide, or standard HRZE (isoniazid (H), rifampin (R), pyrazinamide (Z) and ethambutol (E)). The combination of PA-824, moxifloxacin and pyrazinamide had the best activity and may be an attractive new regimen for TB treatment.
Long-term outcomes of another new antimycobacterial agent, delamanid, were also reported in 2013. SKRIPCONOKA et al. [12] described the follow-up of patients treated in the previously reported randomised, placebo-controlled trial of delamanid for 8 weeks in the initiation phase of treatment for MDR-TB. Favourable outcomes were reported in 75% of patients who received delamanid for o6 months, compared with favourable outcomes in 55% patients who received delamanid for f2 months. While the randomisation of the original study was not maintained, this study adds to our knowledge of the tolerability and efficacy of delamanid in the treatment of MDR-TB.
Two additional studies from the individual patient meta-analysis of patients with MDR-TB were published in 2013, and provide important insights [13,14]. These reports confirm the poor prognosis of patients whose M. tuberculosis isolates are resistant to fluoroquinolones. Among patients with XDR-TB, treatment outcomes were better when patients received at least six drugs in the intensive phase and four in the continuation phase. These studies also confirmed that patients with XDR-TB with resistance beyond that to second-line injectables and fluoroquinones experienced worse outcomes than XDR-TB patients without resistance to additional group 4 drugs.
Prevention
Finally, an important TB vaccination trial was reported in 2013. The MVA85A vaccine, a modified vaccinia Ankara virus expressing M. tuberculosis antigen 85A, had shown protection in animal models and promising immunogenicity in human studies when given to previously bacille Calmette-Guérin vaccinated infants. Therefore, a phase 2b trial was initiated to assess efficacy. The results, reported by TAMERIS et al. [15], were disappointing. 32 out of 1399 MV85A recipients developed TB, compared to 39 out of 1395 controls, a vaccine efficacy of 17.3%. The vaccine was well-tolerated and induced ''modest'' cell-mediated immunity. Not only have hopes of having an effective TB vaccine available soon been deflated, but these results raise questions about the correct immunological response to target in TB vaccine development.
Conclusions
These reports represent the cutting edge of TB clinical and basic science, and are evidence of a resurgence in high-quality TB research. While not all of the studies gave the results we would like to have seen, taken together, they provide important insights for the next steps towards control and eventual elimination of TB. | 2017-09-06T20:59:16.592Z | 2014-03-01T00:00:00.000 | {
"year": 2014,
"sha1": "b5950d00e95f1e0201628730f37e4bedab85258a",
"oa_license": "CCBYNC",
"oa_url": "https://err.ersjournals.com/content/errev/23/131/36.full.pdf",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "b5950d00e95f1e0201628730f37e4bedab85258a",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
} |
212737643 | pes2o/s2orc | v3-fos-license | Study of the Characterization of Side Population Cells in Endometrial Cancer Cell Lines: Chemoresistance, Progestin Resistance, and Radioresistance
Introduction: Radiotherapy, combined regimens as platinum-paclitaxel chemotherapy and/or endocrine therapy is an important adjuvant treatment after surgery for endometrial cancer (EC). While, the resistance to them remain unclear. In our study, to separate the characteristics of side population (SP) cells from EC cell lines, study the mechanism of Taxol-resistance, progestin resistance and radioresistanc, and provide the basic for EC. Methods: SP cells from EC cell lines HEC-1A, Ishikawa and RL95-2 were separated by Hoechst 33342 staining and flow cytometry analysis. The expression of breast cancer resistance protein (BCRP) in SP cells and non-SP cells from HEC-1A was examined by immunocytochemistry, and the radiation-resistant and Taxol-resistant characteristics of SP cells and non-SP cells were compared by MTS. Ishikawa, Ishikawa-SP, and Ishikawa-non-SP cells incubated with MPA were selected for cell apoptosis assays by using flow cytometry. The expression of caspase-3 was examined by immunocytochemistry, and autophagy was detected by MDC staining. Results: Small proportions of SP cells, namely, 1.44 ± 0.93%, 2.86 ± 3.09%, and 2.87 ± 1.29%, were detected in HEC-1A, Ishikawa and RL95-2, respectively. There was a stronger clone formation efficiency for the SP cells than for non-SP cells in HEC-1A [(6.02 ± 1.17) vs. (0.53±0.20)%, P = 0.001], and there was a significant difference in the rate of tumourigenicity between the SP cells and non-SP cells in HEC-1A (87.5 vs. 12.5%). There were higher levels of BCRP expression (P = 0.001) and resistance to Taxol and radiation (P < 0.05) in the SP cells than in non-SP cells. After MPA treatment, the apoptosis rates were significantly different among the Ishikawa, Ishikawa-SP and Ishikawa-non-SP groups [(4.64 ± 0.18)%, (4.01 ± 0.43)%, and (9.3 ± 0.67)%; (P = 0.05)], and the expression of Caspase-3 in the Ishikawa group was higher than that in Ishikawa-SP group. The autophagic activity of the Ishikawa-SP cells was the strongest, while the autophagic activity of Ishikawa-non-SP was the weakest. Conclusions: There is a significant enrichment in SP cells among different EC cell lines, and these SP cells be more resistant to Taxol, MPA and radiation therapy. The overexpression of BCRP among SP cells may be the cause of resistance to Taxol, progestin and radiotherapy, which may be related to apoptosis and autophagic activity.
Introduction: Radiotherapy, combined regimens as platinum-paclitaxel chemotherapy and/or endocrine therapy is an important adjuvant treatment after surgery for endometrial cancer (EC). While, the resistance to them remain unclear. In our study, to separate the characteristics of side population (SP) cells from EC cell lines, study the mechanism of Taxol-resistance, progestin resistance and radioresistanc, and provide the basic for EC.
Methods: SP cells from EC cell lines HEC-1A, Ishikawa and RL95-2 were separated by Hoechst 33342 staining and flow cytometry analysis. The expression of breast cancer resistance protein (BCRP) in SP cells and non-SP cells from HEC-1A was examined by immunocytochemistry, and the radiation-resistant and Taxol-resistant characteristics of SP cells and non-SP cells were compared by MTS. Ishikawa, Ishikawa-SP, and Ishikawa-non-SP cells incubated with MPA were selected for cell apoptosis assays by using flow cytometry. The expression of caspase-3 was examined by immunocytochemistry, and autophagy was detected by MDC staining.
Results: Small proportions of SP cells, namely, 1.44 ± 0.93%, 2.86 ± 3.09%, and 2.87 ± 1.29%, were detected in HEC-1A, Ishikawa and RL95-2, respectively. There was a stronger clone formation efficiency for the SP cells than for non-SP cells in HEC-1A [(6.02 ± 1.17) vs. (0.53±0.20)%, P = 0.001], and there was a significant difference in the rate of tumourigenicity between the SP cells and non-SP cells in HEC-1A (87.5 vs. 12.5%). There were higher levels of BCRP expression (P = 0.001) and resistance to Taxol and radiation (P < 0.05) in the SP cells than in non-SP cells. After MPA treatment, the apoptosis rates were significantly different among the Ishikawa, Ishikawa-SP and Ishikawa-non-SP groups [(4.64 ± 0.18)%, (4.01 ± 0.43)%, and (9.3 ± 0.67)%; (P = 0.05)], and the expression of Caspase-3 in the Ishikawa group was higher than that in Ishikawa-SP group. The autophagic activity of the Ishikawa-SP cells was the strongest, while the autophagic activity of Ishikawa-non-SP was the weakest.
INTRODUCTION
Endometrial cancer is one of the most common malignant tumors in women, and it is the most common malignant tumor of the female reproductive tract in European and American developed countries (1). In addition, the incidence rate is increasing, and the onset age is younger in China. Although radiotherapy, chemotherapy and/or endocrine therapy after surgery have achieved good relief and survival, these approaches still have risks for tumor recurrence, metastasis, chemoresistance, progestin resistance, and/or radioresistance. So, understanding the disease pathogenesis, overcoming its chemoresistance, progestin resistance, and/or radioresistance have important significance.
According National Comprehensive Cancer Network, Platinum-based combined regimens as platinum-paclitaxel (TC) is usually used chemotherapy regimen for advanced and recurrent endometrial cancer. Paclitaxel (Taxol) is a first-line chemotherapeutic drug for gynecological malignancies and exerts anti-tumor effects through various mechanisms, such as blocking the mitosis of tumor cells and inducing apoptosis. There were many studies of Cisplatin-resistant in endometrial cancer. While, there were few studies have focused on the mechanism of paclitaxel-resistance in endometrial cancer. Therefore, whether there is paclitaxel-resistance in endometrial cancer cells and its mechanism is our focused in the study.
The cancer stem cell hypothesis has led to a new theory in recent years. It is suggested that there is a small number of stem cells in the tumor, which may be play a decisive role in the formation, growth, invasion and metastasis of the tumor (2)(3)(4). Further studies have shown that these cells may also be resistant to radiation therapy and chemotherapy (5). Therefore, tumor stem cells will become hot topics of research and therapeutic targets in the future. There are few reports on endometrial cancer stem cells, especially the study between endometrial cancer stem cells and chemoresistance, progestin resistance or radioresistance.
Currently, there are two main methods to separate tumor stem cells: the first selects and separates cells based on the surface-specific markers of tumor stem cells (6)(7)(8), and the second selects and separates a small group of a Hoechst33342 tropochrome "side population" (SP) cells from the tumor cell line by flow cytometry, with the characteristic of higher ABC transport protein expression in tumor stem cells (9,10). Because there is a lack of surface-specific markers for tumor stem cells, the method to separate SP cells has become the main method which used to study tumor stem cells. Therefore, this study was designed to use this method to separate SP cells from endometrial cancer cell lines and preliminarily study the chemoresistance, radioresistance, and progestin resistance characteristics of these cells, as well as provide a basis for clinical treatment and provide a basis strategy for the study of drug resistance.
BCRP Protein Expression in SP and
Non-SP Cells of the HEC-1A Cell Line as Detected by Immunocytochemistry The SP and non-SP cells of the HEC-1A cell lines were separately seeded onto glass slides, fixed with paraformaldehyde for 30 min, washed with PBS, and incubated with BCRP monoclonal antibody at a 1:100 dilution. Simultaneously, an anti-dilution solution was used as the negative control; cells were incubated at 4 • C overnight, washed with PBS, incubated in 45-50 µl of universal IgG-HRP antibody, placed at room temperature for 30 min, washed with PBS, and subjected to conventional DAB staining, dehydration, mounting, and sealing. Finally, the cells were observed under a microscope. For each sample, 10 highpower lens fields were randomly selected, and 10 cells were counted in each field for grading according to the coloring depth of the cell as follows: no coloring was 0 points, light yellow was 1 point, brown was 2 points, and dark brown was 3 points. The number of colored cells was counted, and the scores were obtained by integral calculation. BCRP was located in the cell membrane, and the staining was the brownish yellow.
Sensitivity of the SP and Non-SP Cells of the HEC-1A Cell Line to Taxol as Detected With the MTS Method The SP and non-SP cells of the HEC-1-A cell line were separated by flow cytometry, and HEC-1A cells without any treatment were used as the control group, and the cells density was adjusted to 3 × 10 5 cell/mL. Cells were inoculated in 96-well plates. Each cell type was divided into 7 groups, and each group had five wells. At the same time, the zero well (i.e., the blank group) was included (only with medium) with a volume of 100 µl/well. The cells were placed in a humidified incubator at 37 • C with 5% CO 2 for culturing, and when the cells were attached, the medium was changed to 100 µl culture medium containing different Taxol (TAX) concentrations (drug concentrations were 1, 4, and 6 µg/mL). The medium of the blank group and control group was changed to fresh culture medium without TAX, and after culturing for 24 h, the number of live cells was determined with MTS method.
Sensitivity of the SP and Non-SP Cells of the HEC-1A Cell Line to Radioactive X-Ray Irradiation as Detected With the MTS Method The SP and non-SP cells of the HEC-1A cell line were separated by flow cytometry, and HEC-1A cells without any treatment were used as the control group. Each cell type was divided into five dosage groups, namely, 0 Gy (i.e., the group without exposure to X-rays), 1, 2, 4, and 6 Gy (the high-energy X-ray irradiation groups). Irradiation methods were as follows: the linear accelerator was used, the fixed distance from the source to the skin was 100 cm, the dosage rate was 4 Gy/min, and the range of irradiation was set to cover the culture dish. After irradiation, cells were seed to 96-well plates. Each well had 2 × 10 3 cells, and each group had five wells. At the same time, the zero well (i.e., the blank group) and control group were included (only with medium). The MTS method was used to determine the growth curve of the cells, as described above.
Sensitivity of the SP and Non-SP Cells of the Ishikawa Cell Line to MPA as Detected With the MTS Assay SP and non-SP cells if the Ishikawa cell line were separated by flow cytometry, and Ishikawa cells without any treatment were used as the control group. The cell density was adjusted to 1 × 10 5 cell/mL, and cells were inoculated into 96-well plates. Each cell type was divided into 5 groups, and each group had five wells. At the same time, the zero well (i.e., the blank group) was included (only with medium) at 100 µl/well. The cells were placed in a humidified incubator at 37 • C with 5% CO 2 for culturing, and when the cells had attached, the medium was changed to 100 µl of culture medium containing different MPA concentrations (drug concentrations were 0, 5, 10, 15, and 20 µmol/L), and the medium of the blank group and control group was changed to fresh culture medium without MPA. After culturing for 24, 48, and 72 h, the number of live cells was determined with the MTS method, and the inhibitor rate was calculated as follows: Apoptosis of SP Cells in Response to MPA as Detected by Flow Cytometry SP and non-SP cells in Ishikawa cell lines were separated by flow cytometry, and Ishikawa cells without any treatment were used as the control group. After inoculation into in 6-well plates, the cells were placed in a humidified incubator at 37 • C with 5% CO 2 for culturing, and when the cells had attached, the medium was changed to 2 mL of culture medium containing 10 µM MPA, and to fresh culture medium without MPA for the control group. After culturing for 48 and 72 h, the cells were collected, washed twice with cold PBS and resuspended in 1X Binding buffer at a concentration of 1 × 10 6 cells/mL. A total of 100 µl of the solution was transferred to a 5-mL culture tube. Five microliters of FITC Annexin V and 5 µl PI were added, and the cells were gently vortexed and incubated for 15 min at 25 • C in the dark. A total of 400 µl of 1X binding buffer was added to each tube, and cells were analyzed by flow cytometry.
Caspase-3 Protein Expression in the SP Cells of the Ishikawa Cell Line in Response to MPA as Detected by Immunocytochemistry
The SP and non-SP cells of the Ishikawa cell line (Ishikawa cell line as control) treated with MPA were separately seeded onto glass slides, fixed with paraformaldehyde for 30 min, and washed with PBS, and then caspase-3 monoclonal antibody was added at a 1:100 dilution. An anti-dilution solution was used as the negative control, and the cells were incubated at 4 • C overnight and washed with PBS. Then, 45-50 µl of universal IgG-HRP antibody was added to the cells, and the cells were placed at room temperature for 30 min, washed with PBS, and subjected to conventional DAB staining, dehydration, mounting, and sealing. Finally, cells were observed under a microscope. For each sample, 10 high-power lens fields were randomly selected, and 10 cells were counted in each field for grading according to the coloring depth of the cells as follows: no coloring was 0 points, light yellow was 1 point, brown was 2 points, and dark brown was 3 points. The number of colored cells was counted, and the scores were obtained by integral calculation.
Statistical Analysis
Each measurement result was expressed as the mean ± standard deviation (x ± s). SPSS 13.0 software was used for data analysis, and comparisons between two groups were analyzed with the t test. A P < 0.05 (P < 0.05) indicated a significant difference.
To Separate and Identify SP Cells and Investigate Their Characteristics
Proportion of SP Cells in the Three Kinds of Cell Lines: Human Endometrial Cancer HEC-1A, Ishikawa and RL95-2 Hoechst 33342 staining was performed, and the proportions of SP cells in HEC-1A, Ishikawa and RL95-2 detected by the flow cytometry were 1.44 ± 0.93, 2.86 ± 3.09, and 2.87 ± 1.29%, respectively.
Morphological Observation of SP Cells in Ishikawa, HEC-1A, and RL95-2 The SP and non-SP cells separated from Ishikawa, HEC-1A, and RL95-2 were cultured for observation every 6 h under an inverted microscope. The volumes of SP cells were smaller than those of non-SP cells, and SP cells were much more easily attached than non-SP cells. Twenty-four hours after inoculation, most of the SP cells were attached, showing colony growth, while the number of attached non-SP cells was significantly lower than that of attached SP cells. Cell morphology images of the SP and non-SP of the HEC-1-A cell line are shown in Figure 1.
Results of the Monoclonal Formation Experiment of the SP and Non-SP of the HEC-1A Cell Line
The separated SP and non-SP cells from the HEC-1A cell line were cultured for 14 day. The two groups of cells were grown in 6-well plates and formed clones. The rate of clone formation of the SP cells was (6.02 ± 1.17)%, while that of the non-SP cells was (0.53 ± 0.20)%. There was statistical significance in the difference between the CFE of the two groups (P = 0.001). In the SP group, seven nude mice had tumors at 3-4 weeks after inoculation (87.5%, 7/8), while only one mouse in the non-SP group had a tumor at 6 weeks after inoculation (12.5%, 1/8). Additionally, the size of the tumor was significantly smaller than that of the SP group. The immunocytochemistry staining results showed that there was BCRP expression in the cell membrane and cytoplasm of HEC-1A-SP and HEC-1A-non-SP, but BCRP expression in SP cells was significantly stronger than that in non-SP cells (P = 0.001), as shown in Figure 3.
Determination of the Sensitivity of HEC-1A-SP and HEC-1A-non-SP Cells to Taxol
After different concentrations (1, 2, and 4 µg/mL) of Taxol were added, the survival rates of HEC-1A-SP, HEC-1A-non-SP, and HEC-1A were decreased and showed dose dependence. The MTS results suggested that the SP cells might have Taxol resistance compared with the control HEC-1A cells and non-SP cells. The results are shown in Table 1. The results of the determination of the sensitivity of HEC-1A-SP and HEC-1A-non-SP cells to X-ray radiation therapy showed that the three kinds of cells were sensitive to X-ray radiotherapy to a certain extent; as the radiation dose increased, the cell growth of each groups decreased. However, the SP cells could better tolerate the low-dose X-ray irradiation of 1 and 2 Gy than the non-SP cells and other cells, but there were no differences in
*Comparison between SP cells and non-SP cells. **Comparison between SP cells and HEC-1A of the control group. ***Comparison between non-SP cells and HEC-1A of the control group.
the high-dose X-ray irradiation of 4 and 6 Gy, which suggested that the SP cells had a certain level of radiation resistance, as shown in Figure 4 (A formula, namely, Surviving fraction = group with irradiation/group without irradiation, was used to exclude the non-uniformity of cells during decking. All data were standardized; the light absorbance value of the first day was used as the baseline). Frontiers in Medicine | www.frontiersin.org (P < 0.05). The results showed significant differences between each group (see Table 2).
Determination of Caspase-3 Expression in the Ishikawa-SP, Ishikawa-non-SP, and Ishikawa Cell Lines in Response to MPA Treatment
The immunocytochemistry staining results showed that there was Caspase-3 expression in the cytoplasm of the Ishikawa-SP, Ishikawa-non-SP and Ishikawa cell lines, but caspase-3 expression in the SP cells was significantly weaker than that in the non-SP cells and Ishikawa cells.
Determination of the Autophagy Activity in the Ishikawa-SP, Ishikawa-non-SP, and Ishikawa Cell Lines in Response to MPA Treatment After 0, 48, and 72 h of MPA treatment, autophagy activity was detected by MDC staining. The results showed that autophagy activity increased in the Ishikawa-SP, Ishikawa-non-SP and Ishikawa cell lines. Of the Ishikawa-SP, Ishikawa-non-SP and Ishikawa cell lines, Ishikawa-SP cells showed the strongest autophagy activity, while Ishikawa-non-SP showed the weakest autophagy activity, as shown in Figure 5.
DISCUSSION
Recent research has shown that tumor stem cells constitute a small group of stem cells with the capability of self-renewal and therapeutic resistance. Tumor stem cells were found in tumors of the haematopoietic system (11) and in solid tumors [e.g., breast cancer (12), brain tumors (9,13), lung cancer (14), esophageal cancer, colorectal cancer (15), liver cancer (16), etc.]. Currently, studies on tumor stem cells have changed from the separation, purification, and cultivation of these cells to studies of their biological characteristics, including genomics and proteomics expression profiling, and the purpose was to provide new strategies for clinical treatment. At present, there are few reports and studies on endometrial stem cells.
In this paper, Hoechst33342 staining was used (9) to select and separate tumor stem cells in endometrial cancer, and the chemoresistance, progestin resistance, and radioresistance of these cells were studied.
Studies About the Separation and Basic Characteristics of Endometrial Cancer Stem Cells
Kato first isolated SP cells from the normal endometrium (17) In this study, under a microscope, the volume of SP cells was slightly smaller than the volume of non-SP cells in the HEC-1A cell line. In conventional culture, SP cells were more likely to attach than non-SP cells and more like to grow into clones; however, few non-SP cells were attached, and a larger number of these cells were in the suspended state.
Kato et al. (18) separated SP cells in the endometrial cancer HEC-1 cell line and discovered that the non-SP cells grew faster than the SP cells within 10 days after separation; however, over the next 50 days, the growth of the non-SP cells was apparently stagnated, and there was apoptosis. In the study of thyroid cancer by Mitsutake et al. (19), a similar phenomenon was observed. This phenomenon might have multiple causes, including the asymmetric division of SP cells, the one-way or two-way acceleration of the growth of each cell type between SP and non-SP cells, or the influence of separation on cell viability. In our study, the growth curve was determined based on the MTS method, and it showed that non-SP cells grew fastest within 7 days after separation and that the doubling time was shortest; however, there was no significant difference between the growth rates of the three kinds of cells. However, after 14 days of culturing, the rate of clone formation of the SP cells was significantly higher than that of the non-SP cells, suggesting that the SP had a stronger clone forming ability and could better self-renew.
In accordance with the theory of tumor stem cells, the occurrence and development of the tumor is dependent on its tumor stem cells. SP cells have a stronger ability of selfrenewal, which reflects the biological characteristics of the tumor stem cells to a certain extent, but animal experiments are still needed to verify the tumourigenicity. In this study, HEC-1A SP cells were inoculated into immunodeficient mice at 1 × 10 5 cells per mouse; the rate of tumor formation in the SP group was 87.5%, while that of the non-SP group was only 12.5%, suggesting that the SP cells had much stronger tumourigenicity.
Research on the Chemoresistance and Characteristics of SP Cells in Endometrial Cancer
In recent years, studies have shown that tumor stem cells have obvious resistance to conventional cytotoxic drugs and radiotherapy, which can induce DNA damage. Because tumor stem cells may be in the tumor, conventional chemotherapy, and radiotherapy may fail to kill all tumor stem cells. Therefore, these cells will become the sources of tumor recurrence and distant metastasis, and the treatment of tumor stem cells is expected to become a new target for cancer treatment.
In research of oral cancer SCC25 cell lines, it was found that SP cells showed more resistance to chemotherapy than 5-FU (20). According National Comprehensive Cancer Network, Platinum-based combined regimens as platinum-paclitaxel (TC) is usually used chemotherapy regimen for advanced and recurrent endometrial cancer. Paclitaxel (Taxol) is a firstline chemotherapeutic drug for gynecological malignancies and exerts anti-tumor effects through various mechanisms, such as blocking the mitosis of tumor cells and inducing apoptosis. There were many studies of Cisplatin-resistant in endometrial cancer. While, there were few studies have focused on the mechanism of paclitaxel-resistance in endometrial cancer. Therefore, we want to explore whether there is paclitaxelresistance endometrial cancer cells and its mechanism. In this study, the preliminary results showed that when the SP cells and non-SP cells of the HEC-1A cell lines were compared, the SP cells had significant resistance to paclitaxel, and further detection by immunocytochemical staining showed that the drug resistance may be related to the expression of BCRP. ABCB1 and ABCG2 are the most basic multidrug resistance protein in different tumor tissues, ABCB1 encodes p-gp, and ABCG2 encodes BCRP. These transporters actively pump drugs out of cells by using the energy generated by the breakdown of ATP, thus protecting themselves from cytotoxic drugs and making them insensitive to chemotherapy. Some studies suggested that the high expression of ABCG2/BCRP in the cell membrane was the necessary condition of SP cells to discharge Hoechest33342 dye and maintain the dry characteristics of SP cells (21). Additional studies found that the SP cells with high expression of ABCG2/BCRP isolated from a variety of cell lines also had stronger drug resistance (21,22), suggesting that drug resistance may be related to the expression of BCRP and that the relevant mechanism needs to be further studied, which is consistent with previous reports (23).
Research on the Radioresistance Characteristics of Endometrial Cancer SP Cells
In this study, the radiosensitivity of HEC-1A cells, SP cells and non-SP cells and the preliminary results showed that the three kinds of cells were sensitive to radiotherapy up to a certain dosage. The survival rate was negatively correlated with the irradiation dosage in the mid-to low-dose radiation group, while the survival rate of the SP cells was higher, which was similar to the findings of the literature (23)(24)(25)(26). Especially in the 2-Gy irradiation dose group, the survival rate of the SP cells was significantly higher than that of the non-SP cells and unsorted HEC-1A cells, suggesting that SP cells may be an important factor that causes radioresistance. However, in the high-dose irradiation groups, all cell growth was significantly inhibited among the three groups.
The specific mechanism underlying the radiation-resistance of stem cells is not clear. Kastan et al. (20) reported that there were many mechanisms that delayed or blocked the cell cycle for facilitating the repair of DNA when DNA was damaged. Such cycle blocking was a kind of favorable protective mechanism in normal cells, but cycle blocking may enhance the radiation-resistance of tumor cells. Multiple studies found that the expression of BCRP on the cell surface was highly related to the PI3K/Akt signaling pathway. The studies by Tappei Takada et al. (27) showed that the cell location of ABCG2/BCRP could be changed by the activity of Akt, consequently affecting the pumping capacity of ABCG2/BCRP. Generally, it is understood that the PI3K/Akt signaling pathway can be affected by the radiationresistance of the tumor via the anti-apoptosis mechanisms and the activation of DNA repair (28). In non-small cell lung cancer, the up-regulation of the expression levels of PI3K/Akt signaling pathway mediators was related to radiation sensitivity, and cell apoptosis and cell cycle G2/M-phase blocking were induced after an Akt phosphorylation inhibitor was used (29).
In this study, the expression of BCRP in SP cells was significantly higher than that in non-SP cells, suggesting that radiation-resistance may be associated with the expression of BCRP in SP cells; however, the radiotherapy resistance mechanism of endometrial cancer stem cells (e.g., the relationship between radiation resistance and the cell cycle/signaling pathway) needs further study.
Biological Characteristics and Mechanisms of Progesterone Resistance in Endometrial Cancer Stem Cells
Most of endometrial cancer and breast cancer are hormonedependent tumors. Studies have shown that the expression of breast cancer resistance protein, cytokeratin 8 and pheochromocytoma A may be related to multipotential differentiation stem cells, leading to drug resistance in breast cancer cell lines (30,31). Whether endometrial cancer shows endocrine therapy resistance that is similar to breast cancer and whether the endocrine resistance has a separate mechanism remain unknown. Therefore, the study of endometrial cancer stem cell progesterone resistance may provide the basis for endocrine cancer endocrine therapy.
Zhao and Liu et al. (32,33) carried out drug resistance studies of endometrial cancer cell lines by gradually increasing the MPA concentration in vitro, and as a result, progesteroneresistant cell lines of endometrial cancer were successfully established. Combined with reports in the literature, studies have shown that the mechanism of endometrial cancer resistance to progesterone may be related to an imbalance in the expression of progesterone receptor subtypes and to the abnormal expression or sustained activation of epidermal growth factor receptor (EGFR) and the activation its downstream signaling pathways. Resistance may also be associated with PR-A mRNA downregulation, PR-B/PR-A mRNA upregulation, and PR protein expression increases, which cause progesterone resistance in the endometrium (34). Cancer stem cells have a more powerful ability to repair DNA than the differentiated cells of the tumor, which makes tumor stem cells more adaptable to various changes in the environment, to ensure prompt repair of tumor tissue after injury and to favor the occurrence of drug resistance.
At present, there are few reports on endometrial cancer in terms of progesterone resistance or drug resistance. In this study, Hoechst33342 staining was used to isolate tumor stem cells, namely, SP cells, in endometrial carcinoma, and the cells were treated with different concentrations of MPA. The results showed that MPA had the lowest growth inhibitory effect in Ishikawa-SP cells. Apoptosis assays by flow cytometry showed that MPA treatment resulted in the lowest apoptotic rate of Ishikawa-SP cells compared with Ishikawa and Ishikawa-non-SP cells. The results of immunocytochemistry further showed that the expression of apoptotic protein casepase-3 in Ishikawa-SP was the weakest after MPA treatment. It is suggested that endometrial cancer stem cells with progesterone may be related to anti-apoptosis after MPA. The preliminary results suggest that the mechanism of progesterone resistance in endometrial cancer stem cells may be related to apoptosis, but the mechanism of action still needs further investigation.
CONCLUSION
In conclusion, this study showed that there were few SP cells in differentiated endometrial cancer cell lines, and the study of the biological characteristics of SP cells in the HEC-1A cell line showed that there was the capacity for clone formation, chemoresistance, progesterone resistance, and radioresistance in vitro. These observations still need to be further explored for providing a basis for the study of drug resistance and radioresistance to decreased disease recurrence.
DATA AVAILABILITY STATEMENT
The datasets used and analyzed during the current study are available from the corresponding author on reasonable request. | 2020-03-18T13:09:20.815Z | 2020-03-18T00:00:00.000 | {
"year": 2020,
"sha1": "8997ffca66742c99e83c19644f1be16b27509294",
"oa_license": "CCBY",
"oa_url": "https://www.frontiersin.org/articles/10.3389/fmed.2020.00070/pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "8997ffca66742c99e83c19644f1be16b27509294",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Chemistry",
"Medicine"
]
} |
234885111 | pes2o/s2orc | v3-fos-license | Using discourse analysis to understand professional music teacher identity
The purpose of this article is to discuss the use of discourse analysis in order to understand music teachers’ professional identities. This is done by elaborating on the theory and methodology of a study on professional identities of music teachers within the Norwegian municipal school of music and performing arts. Theoretical and methodological perspectives, including research design, analysis, results, validity and ethics, are discussed in the article. An argument in favour of discourse analysis is put forward: that it offers focus on the context, complexity and power relations of the field, as well as providing an understanding of how identities are constructed and negotiated. The use of discourse analysis in the study provided analytical tools which challenged taken-for-granted knowledge, discovered binary discursive oppositions, and unmasked power relations. The study found that teachers construct their identities within a contested discursive field where meanings are attached to the work they perform, as well as to the institutions they represent.
Introduction
There have been several studies on teacher identity, and music teacher identity in particular, using various methods of investigation. Based on findings from a study on professional identities of music teachers within Norwegian municipal schools of music and performing arts 1 (see also Jordhus-Lier, 2018), I argue that using discourse analysis to understand professional music teacher identity is fruitful. It offers an understanding of the field wherein identities are constructed, and of the connection between the field (structure) and the people working within it (agency). It can reveal how meaning is being constructed, which is crucial if taken-for-granted knowledge is to be challenged and potential struggles in the field are to be discovered. A discourse analysis of professional teacher identities opens up for describing the complexity and power relations of a field, and how and why identities are negotiated. Several previous studies (Angelo & Kalsnes, 2014;Bernard, 2005;Bouij, 1998;Broman-Kananen, 2009;Roberts, 2004;Stephens, 1995) have tended to focus on music teacher identities as primarily centred around a teacher-musician dichotomy. I assert that discourse analysis could reveal a richer complexity in music teachers' identities.
There is an increasing importance placed on breadth, versatility and social inclusion as key tenets of the Norwegian school of music and arts, while depth and specialisation continue to be highlighted in various policy documents and in the practice of many teachers (Berge et al., 2019;Ellefsen & Karlsen, 2019;Jordhus-Lier, 2018;Karlsen & Nielsen, 2020;Norsk kulturskoleråd, 2016). As a result, the school and its teachers are negotiating various and diverse tasks in a field that contains tension. Music teachers are often specialists and have comprehensive training in their speciality, but, simultaneously, they must relate to the school's mission, including the commitment to social inclusion and breadth. In my research, I used discourse analysis to explore the professional identities of those teachers, a method which helped me reveal how these issues were handled by the school and its teachers.
In this research, I built on a social science-related discourse analysis where discourse is perceived as a totality of language and practices, with the overarching theoretical and analytical framework being Laclau and Mouffe's discourse theory (Laclau, 1990;Laclau & Mouffe, 2001). I also built on theories of professions (Abbott, 1988;Freidson, 2001;Molander & Terum, 2008), which added a frame for discussing what it means for an occupational group to be a profession, and provided access to a better understanding of the tensions created. Methods and theory are closely connected in discourse analysis, but Laclau and Mouffe's (2001) discourse theory provides few methodological guidelines. When carrying out a discourse analysis based on Laclau and Mouffe, a brief outline of their theory has to be given, as well as a presentation and justification of how it was used methodologically.
In this article, I will thus describe some concepts from Laclau and Mouffe's discourse theory, and how I used them in my research. I will also account for why and how theories of professions were connected to the discourse theoretical framework. The aim of this article is not to discuss the results of the research, but in order to be able to discuss the relevance of using discourse analysis to study professional music teacher identity, I will provide a short overview of the analytical findings.
Relevant Nordic research on teacher identity, professionalism and discursive structures
Of relevance to the discussion about using discourse analysis to understand professional music teacher identity is research on (music) teacher identity, professionalism and discursive structures within the (music) educational field. Because my research is performed in a Norwegian context, research within the Nordic educational field is most relevant. Angelo's (2012) thesis about philosophies of work in instrumental music education is a thematic narrative analysis of three instrumental teachers' stories, and it addresses their professional understandings of their work, mandate and expertise: their "philosophies of work". Angelo (2012) did not perform a discourse analysis, but the main aspects of "philosophies of work" are, as she understands it, power, identity and knowledge. These elements are relevant to discussions on professional music teacher identity, and they are central in my research. However, in order to contextualise and get a greater understanding of the field wherein identities are constructed, I chose as my field of study the school of music and arts and the methodology discourse analysis. Holmberg (2010) recorded group conversations among music teachers in Swedish schools of music and arts, and analysed them discursively. She found changed conditions in the teachers' work, and discussed those findings in relation to tendencies in late modernity.
The relevance of Holmberg's study relates to the Swedish and Norwegian schools having much in common, and to its discussion of power relations within the field. She does not, however, address teachers' professional identities directly. Of Nordic studies on teacher identity, Søreide's (2007) thesis is relevant. She investigated how teacher identity is narratively constructed within the Norwegian elementary school system, drawing on interviews with female teachers, public school policy documents, and written material from Norway's largest professional association for teachers, the Union of Education Norway 2 (Søreide, 2007).
Public narratives about teachers were the unit of analysis, where the identity construction of "the teacher as pupil centred, caring and including" was identified as especially paramount (Søreide, 2007). She combines poststructuralist, discursive and narrative approaches to identity, where subject positions are central in the analysis. Her thesis forms a methodological backdrop to my research, although my research field and combination of discourse theory and theories of professions differ from her field of research and theoretical perspectives.
Most of the discourse-oriented studies within the Nordic music educational field build on Foucault. Krüger's (1998) research on teacher practice, pedagogical discourse and construction of knowledge was one of the first of these studies. Krüger (1998) followed the everyday life of two music teachers in a Norwegian compulsory school for six months, interviewed them, and investigated how they constructed their practice. He identified how the teachers' practices and norms were inscribed in discourses and relationships of power and knowledge. Nerland (2003) also built on Foucault, in her study on teaching practices in higher music education in Norway, where teaching was seen as cultural practices that are historically and socially constituted. Other research building on Foucault is Ellefsen's (2014) in-depth study of a Norwegian upper secondary educational programme in music called "Musikklinja". The study was ethnographic, with observations of, and interviews with, students, and it aimed at understanding how student subjectivities were constituted in and through discursive practices of musicianship (Ellefsen, 2014). Ellefsen and Karlsen (2019) have performed a Foucauldian discourse analysis of the curriculum framework for the Norwegian municipal school of music and performing arts, investigating the meanings of "diversity". They identified four nodal points showing different ways of understanding diversity, namely: diversity understood as difference in students' ethno-cultural backgrounds, diversity of educational opportunities and modes of expression, diversity and/or/ as deeper understanding, and diversity of learning arenas and contexts. These studies have laid the ground for seeing the music educational field discursively, and so have opened up for addressing power relations and struggles over definitions. My research contributes by using another approach to discourse analysis and another unit of analysis, namely professional identity.
Relevant research on professionalism and education includes Mausethagen's (2013) study on how the teaching profession constructs and negotiates professionalism in Norwegian national policy. Her findings suggest that the teaching profession in Norway has become more proactive in creating legitimacy for their work, and that both the teacher union and individual teachers try to resist external control, such as national testing (Mausethagen, 2013). Georgii-Hemming (2013) asserts in the concluding chapter of the anthology Professional knowledge in music teacher education (Georgii-Hemming, Burnard & Holgersen, 2013) that music teacher education has an important mission in educating professional music teachers, and that well-founded pedagogical knowledge, reflections on values, and interpretation precedence are important for working towards that mission. She also argues that a "carefully considered music-pedagogical philosophy" is crucial in order to develop professional knowledge not only for the individual teachers, but also for the music teacher profession as a whole (Georgii-Hemming, 2013, p. 210). Research on professionalism and education forms a backdrop for my understanding of music teaching as part of a professional field, which I built on and combined with a discursive approach and focus on identity.
Discourse theory and analysis
There are several approaches to discourse analysis, and some focus on the content of language in use while others on its structure (grammar) (Gee, 2014). Laclau and Mouffe's (2001) discourse theory represents the former group, focusing on both language and practices in order to analyse power relationships and identity construction. Laclau (1990, p. 100) emphasises that discourse is not a combination of speech and writing, "but rather that speech and writing are themselves but internal components of discursive totalities", and that the "totality which includes within itself the linguistic and the non-linguistic, is what we call discourse". Laclau and Mouffe (2001) understand everything as discursively constructed, meaning that social practices are fully discursive. This does not imply that they deny the existence of physical objects. Rather, they believe that our access to them is through discourses because we ascribe meaning to physical objects through discourses (Laclau & Mouffe, 2001). For example, anyone who sees a drum can declare that the thing they see exists, but perceiving it as a musical instrument, and not just a round object with skins, is discursively constructed. If one was outside any musical discourse, one might perceive of the "round object with skins" as, for example, a table. What is not part of a particular discourse belongs to the field of discursivity, which is "the necessary terrain for the constitution of every social practice" (Laclau & Mouffe 2001, p. 98). In this article, I speak of the "school of music and arts field" as a terrain for the constitution of social practices within, or related to, the school of music and arts.
Struggles over meaning are central in discourse theory, where meaning can never be completely fixed (Laclau & Mouffe, 2001). Accordingly, there will always be disagreements about definitions of identity and the social as we strive to fix the meaning of signs by placing them in relation to other signs (Jørgensen & Phillips, 2002). In order to identify these processes, Laclau and Mouffe (2001) provide a theoretical framework. In this framework, moments are signs that have their meaning (partially) fixed through articulation in a discourse, whereas elements are signs with several competing ways of understanding them, as their meaning has not yet been fixed (Laclau & Mouffe, 2001). The fixation of meaning in comparison to what-it-is-not is central in discourse theory. Laclau and Mouffe (2001, p. 92) emphasise that "all values are values of opposition and are defined only by their difference". A complete fixation of signs is never possible, though, because every fixation of a sign is contingent.
Nodal points are privileged signs that play a central role in (partially) fixing the meaning (Laclau & Mouffe, 2001). Nodal points are empty; there are several ways of interpreting them, which make them an arena for discursive struggle. They acquire their meaning by being related to other signs in chains of equivalence, where a discourse is formed by the fixation of meaning around a nodal point (Laclau & Mouffe, 2001). Floating signifiers are also empty and open; they are "incapable of being wholly articulated to a discursive chain" (Laclau & Mouffe, 2001, p. 99). They refer to the struggle between discourses to fix meaning of signs and create (temporarily) hegemony. An example of this is the different ways of seeing and articulating migrants. "Migrant" could be a floating signifier where various discourses try to define it: as refugees in need of protection, as social resources to our multicultural society, as a valuable workforce in the labour market, or as a potential threat to our security or cultural heritage. A discourse is a fixation of elements to moments within a specific domain, while hegemony refers to fixation across discourses. When one discourse dominates alone, a hegemonic intervention has been a success (Laclau & Mouffe, 2001).
If the world is perceived as discursively constructed, it follows that identities are discursive. Identity is understood as temporary attachment to subject positions where the subject is multiply constructed across different discourses and practices (Hall, 1996). To identify with something means there must be something with which you do not identify. Identity is thus constructed through difference, to what it is not (Hall, 1996;Laclau, 1990). Through chains of equivalence, signifiers are linked together around a nodal point of identity, which different discourses try to fill with content (Laclau & Mouffe, 2001). In my analysis, 'school of music and arts music teacher' ('musikklaerer i kulturskolen') was the nodal point for music teachers' professional identities, because it was not clear what it meant to be a music teacher in the school of music and arts; there were different ways of interpreting "music teacher" within the field. Different discourses tried to fill the nodal point with content, and they offered subject positions for music teachers to identify with. This nodal point was installed as part of the discursive "skeleton" which guided the analysis. Hence, it derived from theory/ methodology, but in combination with background knowledge and findings from the data.
Within a discursive approach, identities are seen as contingent since they could have been, and can become, different (Laclau & Mouffe, 2001). They are also seen as relational, since no identity can be fully constituted (Laclau & Mouffe, 2001). The subject thus has some degree of agency to identify, or not, with particular subject positions. The subject is also overdetermined, which means it is positioned by several conflicting discourses (Laclau & Mouffe, 2001). Overdetermination implies a rising of conflicts between subject positions, and a terrain for hegemonic articulation (Laclau & Mouffe, 2001). It also means that the subject might identify variously according to the situation. A teacher could identify with the subject position Music Teacher when s/he teaches music groups in compulsory schools and with Instrumental Teacher when s/he gives trumpet lessons. Although the subject is constructed within discourses, different discourse analytical approaches open up for more or less agency: the subject's degree of freedom of action (Jørgensen & Phillips, 2002). Within Laclau and Mouffe's discourse theory, the subject is, to a large degree, perceived to be determined by structures. I, however, perceive the subject as having some degree of agency, with the possibility to resist ideological domination, but with discourses limiting the subjects' freedom of action.
Discourse of professionalism and professional identity
My aim was to investigate music teachers' professional identity, which means the focus was on identities connected to their profession. Understanding music teaching as a profession, something for which I have argued in a previous article (Jordhus-Lier, 2015), was thus a prerequisite for discussing the findings in relation to theories of professions (Abbott, 1988;Freidson, 2001;Molander & Terum, 2008). This provided me with an entrance to a better understanding of the connections between the teachers and logics operating in their professional environment. These logics enable the teachers' possibilities for action, and can be understood as systems or ideologies -or professions or discourses. However, including theories of professions in a discourse-oriented study implies combining discursive and non-discursive theories. Laclau and Mouffe (2001) understand everything as discursively constructed, as opposed to most theories of professions. Common to both, however, is that they provide systems for understanding the social: discourse theory through discourses, and theories of professions through logics or ideologies.
In building my theoretical and analytical framework, I chose to integrate theories of professions in the concept of professionalism: understood discursively as a discourse of professionalism. A discourse could be understood as a system having a certain degree of regularity, which "constructs the reality for its subjects" (Dunn & Neumann, 2016, p. 4).
Seeing professionalism as a discourse thus requires understanding the system as socially constructed, as a system constructed through language that has influence because someone has spoken on its behalf. It is a system that is constitutive for occupations, work places and professional identities. In the literature on professions, "professionalism", "professionalisation" and "professions" are understood as, among other things, systems, logics, values, or ideologies -all of which have some kind of regularity. Abbott (1988) understands professions as a system and Freidson (2001) interprets professionalism as a logic. Evetts (2006) claims there has been a shift of emphasis in the sociology of professions, first from professionalism to profession understood as a generic category of occupational work, then via processes of professionalisation towards a return to professionalism -interpreted as a discourse of occupational change and managerial control.
My understanding of professionalism as a discourse is informed by theories of professions, in particular Freidson's (2001) interpretation of professionalism as the third logic of organising work, opposed both to the logics of the market, where work is controlled by consumers, and to the firm (bureaucracy), where managers are in control. The third logic is a description of an "ideal-type" profession and not of "reality" (Freidson, 2001), and I therefore assert that it could be understood as socially constructed. When building my theoretical framework, I understood 'organising of work' as a nodal point and floating signifier that the discourse of professionalism and other discourses tried to fill with meaning. These other discourses were, for instance, those Freidson (2001) describes as logics: those of the free market and bureaucracy. I found the nodal point 'organising of work' getting its meaning within the discourse of professionalism by being related to signifiers like 'monopoly' , 'freedom of judgement' , 'discretion' and 'autonomy' (Abbott, 1988;Freidson, 2001;Molander & Terum, 2008). Following from that, 'organising of work' is related to other signifiers in other discourses, for instance 'competition' in the discourse (or logic) of the free market, and 'efficiency' in the discourse (or logic) of bureaucracy. There is support for a discursive understanding of professionalism in the literature (Evetts, 2006;Evans, 2008). Evetts (2006) sees professionalism as a discourse of occupational change and social control, distinguishing between the discourse of organisational professionalism constructed from above (managers) and the discourse of occupational professionalism constructed from within. The latter is based on autonomy and discretionary judgement by practitioners in complex cases and depends on education, vocational training and development of strong occupational identities and work cultures (Evetts, 2006). This resembles the discourse of professionalism that my research builds on.
Professional identity involves both individual and collective identity. Heggen (2008) emphasises that, in professions, collective identity is constructed as members endorse a unified symbol and share a common understanding, whereas the members' individual professional identities concern self-identity in combination with the practice of a professional role. The collective identity could then be unified at the same time as individual identities are diverse (Heggen, 2008). Group formation is a reduction of possibilities, where some possibilities of identification are put forward and others are ignored (Jørgensen & Phillips, 2002). This relates to the idea that collective identity only exists when constructed as difference, which implies a we/they distinction (Mouffe, 2005). Professional identities of music teachers is thus about how they see themselves in the field, both individually and as part of groups (professions). This is something discourse analysis could reveal, because central to the development of professional identities is identification with subject positions (as identity is understood as temporary attachment to subject positions) and positioning vis-à-vis other groups. Both are connected and related to central discourses in the field.
Research design, validity and ethics
The data material for my research on professional identities of music teachers consisted of qualitative semi-structured interviews with sixteen music teachers from three different schools of music and arts, and document analysis of curriculum frameworks (Norsk kulturskoleråd, 2003(Norsk kulturskoleråd, , 2016 and analysed to answer the following research questions: i) Which discourses compete in the Norwegian municipal schools of music and performing arts? and ii) How are music teachers' professional identities constructed within these discourses? Brinkmann and Kvale (2015) emphasise the craftsmanship of the researcher, especially in research building on theories that dismiss an objective reality against which findings could be measured. A researcher's special qualifications to interpret data could thus help in justifying discourse analytical research and increasing validity (Taylor, 2001a). In-depth knowledge of the study field requires careful handling, however, encouraging the researcher to be critically reflexive in her/his role as researcher. This is crucial to a study's validity, and includes clarifying similarities and differences between oneself and the participants, and accounting for one's own position in, and experience with, the field of study (Alvesson & Sköldberg, 2009). The identity of the researcher is relevant as it can influence both the selection of topic, the data collection, and the interpretation and analysis (Taylor, 2001b). My choice of topic derived from my own experience as music teacher, an experience that also affected my interpretation and analysis as I often knew more about a topic than could be read from the data. I tried, however, to use this knowledge to contextualise and "see the bigger picture". This relates to Neumann's (2001) assertion that a researcher's general and cultural knowledge of the area of study is a central prerequisite which must be fulfilled before research can begin. However, focusing on similarities with participants could lead to differences being toned down and to the false belief that there are no power differences between the researcher and the participants (Taylor, 2001a). While I shared their experience as music teachers, I also performed the role of a researcher. I was "one of them" in the sense that I could understand and contextualise what the informants told me, but I was still not a colleague, and was going to use the information they gave. Although my experience was that the informants spoke openly, there may well have been things they did not tell me or issues they did not explain fully.
In order to maintain the anonymity of the participants, I gave fictitious names to teachers as well as to their schools. In addition, I deliberately tried to "blur" the teachers' narratives so they cannot so easily be traced. This was done by focusing more on discourses, subject positions, struggles and hierarchy rather than following the different teachers' stories and narratives. This way, I focused on structures in the field and how the various teachers manoeuvred within them. The school of music and arts field was at the centre of the analysis, not the individual teachers. Doing a discourse analysis, and not a narrative analysis, for example, was helpful in this regard. However, it also had its challenges. On the one hand, it required me to give thick descriptions and show how the data material had been used, and on the other, this had to be balanced against maintaining the informants' anonymity. I also did a member check, sending transcripts back to the informants for approval and comments.
Member check as a form of validity is, however, problematic in discourse analytical studies, because the transcript would be an interpretation and an analysis based on theory, and therefore could be difficult for non-academics to validate (Taylor, 2001a). I therefore limited this exercise to sharing the material used in the analysis with the informants, and simply sought active confirmation that they still wanted to participate. This exchange, or interplay, however, raises ethical questions rather than questions about validity (Riessman, 2008).
Teachers and schools of music and arts were selected purposefully in order to acquire rich information about the field and the teachers' professional identities. I selected schools and teachers i) according to a given set of criteria, and ii) according to the purposeful sampling strategy maximum variation sampling (Patton, 2015). The criteria for school selection ensured inclusion of schools with institutional collaboration and offering a wide range of arts options, and ensured that they were of an appropriate size. I selected middlesized schools in order to have a sufficient number of teachers to select from while being able to maintain the informants' anonymity. Maximum variation sampling was then used in further selection of the schools, based on a variety of institutional collaboration and geographical dispersion. In selecting teachers, I set the following criteria: all should have a degree in music education or music performance, be employed as a music teacher in the school of music and arts, and have at least a 40% employment position (in order to be able to give rich information). Maximum variation sampling (Patton, 2015) was used in the further selections, based on: instrument, age, seniority, genre, gender, pedagogical education, and collaboration with compulsory schools, upper secondary schools, and/or local community music and arts fields. Both men and women were represented, and their ages ranged from around twenty-five to nearly sixty. Three had background from popular music, while the rest were classically trained. Some also played and taught folk music or contemporary music. The informant selection included teachers of string instruments, piano, woodwind instruments, brass instruments and guitar, as well as vocal teachers. The interview guide consisted of four topics: background, understanding of professional identity, the school of music and arts as local resource centre, and the future of the teachers and the school.
Analysis and interpretation
The analytical process was circular: going back and forth between coding the material and analysing it, drawing preliminary conclusions which could lead to adding something to the matrix to test the conclusion. This can be described as an iterative process (Miles, Huberman & Saldaña, 2014;Taylor, 2001b). Some codes derived from theory, previous research and my background knowledge. Examples of such codes are 'tacit knowledge' , 'expertise' and 'collective identity' . Most of the codes, however, emerged from the material in an open coding process (Tjora, 2017). The code 'colleagues are the reason I work in a school of music and arts' is an example of a code that emerged from the data. Open coding is time-consuming and one could end up with a huge number of codes, but it contributes in the process of understanding what the data material says (Tjora, 2017). When the number of codes increased, however, I started merging and organising them into categories and hierarchies. An example of this is the organising of the codes 'being challenged' , 'collaborate in teaching' , 'different competences is good for collaboration' and 'easier when meeting often' under the category 'collaborate with other teachers' . This category was then, together with other categories such as 'change in degree of collaboration' , 'collaborate with teachers from other arts fields' and 'collaboration higher education' organised into the category 'collaboration' .
The concepts of elements, nodal points, floating signifiers, chains of equivalence, discourses and subject positions were central during the analytical process. Identifying competing discourses in the school of music and arts field was the point of entry. I searched for elements, which are open signs where their meaning has not yet been fixed (Laclau & Mouffe, 2001). Hence, I searched to find concepts which could be understood in various ways. This allowed me to focus on struggles over central issues in the field, as well as how they were contested and negotiated. In such processes, I was able to single out issues that were somehow hidden at first glance. After having singled out several elements in the documents, I aimed at identifying discourses that tried to fill these elements with meaning. 'School of music and arts for everyone' and 'quality' are examples of elements I singled out, because it was not clear how these concepts were understood in the documents. This relates, among other things, to whether the school should be easily accessible or organised for those who want to put in extra effort. 'Quality' was one of the terms that occurred frequently in the documents, but it was not clear what was meant. One understanding took quality as being opposed to breadth. In the further analysis, 'for everyone' became a floating signifier and 'quality' a signifier articulating the depth discourse (see Figure 1).
After having identified elements and discourses in the material, nodal points (privileged signs that play a central role in the fixation of meaning) were next in line. The most central elements from the analysis were identified as nodal points. Nodal points are empty; they get their meaning by being related to other signs (Laclau & Mouffe, 2001). Therefore, the further analytical process involved identifying other signs in relation to the nodal points. A discourse is formed by the (partial) fixation of meaning around a nodal point (Laclau & Mouffe, 2001), and the following analytical process led to "re-identifying" discourses by taking new findings back to the discourses identified earlier in the process. I found that there were several opposing discourses in the field, and most of the nodal points were therefore also identified as floating signifiers, which refers to struggle between discourses (Laclau & Mouffe, 2001). These floating signifiers became the "skeleton" of the findings (see Figure 1 and 2). The analytical process identified various signifiers in different chains of equivalence articulating the same floating signifier, which led to the identification of competing discourses. 'For everyone' , which was first identified as (part of) an element, became in the further analysis a nodal point because of its centrality in the meaning-making: it is significant when identifying what kind of institution the school of music and arts is or should be, and it is central because the curriculum framework (Norsk kulturskoleråd, 2016) states that the school should be for everyone who wants to enroll. When identifying other signs in relation to this nodal point, like for instance 'social inclusion' and 'collective values' , but also 'individualism' and 'specialisation' , I found two opposing discourses trying to fill it with content. 'For everyone' therefore became a floating signifier with the two binary discourses of breadth and of depth trying to articulate it (see Figure 1). The first step in searching for subject positions available to teachers was to identify the master signifier/nodal point of identity. What unified informants was their position as music teachers in schools of music and arts. Hence, 'school of music and arts music teacher'
Results of the research
By carrying out a discourse analysis based on Laclau and Mouffe's (2001) discourse theory, I revealed several binary institutional and teacher discourses within the field, as well as six distinct subject positions constructed within these discourses. The institutional discourses were trying to imbue 'institution' with meaning; hence 'institution' appeared as nodal point and floating signifier. The analysis did not identify any one single discourse as hegemonic; rather, there were several discourses at play in the field. These were the binary discourses of breadth and depth, of local autonomy and centralised governance, of 'the House' and decentralisation, of New Public Management and professionalism, and of school of music and arts as school and school of music and arts as leisure activity (see Figure 1). The teacher discourses concerned and articulated perceptions of what a teacher in the school of music and arts is; they tried to imbue 'teacher' with meaning, which made 'teacher' the nodal point and floating signifier. 'Institution' and 'teacher' were installed as nodal points as part of the theoretical and analytical framework guiding the analysis, but also because I found in the data that there were different ways of understanding those concepts. The teacher discourses were the binary discourses of versatility and specialisation, of collaboration and autonomy, and of music as a tool and music as experience (see Figure 2). These binary oppositions should be understood as struggles to fill distinct floating signifiers with meaning, and they were established by a set of signifiers linked in chains of equivalence. These floating signifiers emerged from the data. One example is the role of the school in society. In the data, there were several ways of understanding the role of the school, which made 'societal role' a floating signifier. The most prominent ways to understand it were i) the school of music and arts as a school focusing on students' progression, putting in effort, career pathway and continuation, and ii) the school as leisure activity with emphasise on having fun, being easy and receiving (as opposed to putting in effort). This way of doing the analysis contributed to revealing tensions in the field.
Six subject positions were identified through interview statements about teaching and the teacher role, and through the representations of 'school of music and arts music teacher' ('musikklaerer i kulturskolen') in the document material. This means these subject positions were constructed within the discourses in the field, through language used by teachers and in the documents. The subject positions were Music Teacher, Instrumental Teacher, Musician, Musician-Teacher, Coach and School of Music and Arts Teacher (Kulturskolelaerer). Most of the teachers identified with several subject positions, either at the same time or interchangeably, according to the situation (see also Jordhus-Lier, 2021). Subject positions constructed within opposing discourses, however, are difficult for teachers to identify with, especially at the same time. The aim of addressing these subject positions was to answer the second research question, namely how music teachers' professional identities are constructed within the discourses in the school of music and arts. This relates to the logic of subjects acquiring their identities through identification with subject positions (Laclau & Mouffe, 2001). The discursive relations between the six identified subject positions and the nodal point of identity is visualised in Figure 3 below, which also shows how each of these subject positions was established by a set of signifiers linked in chains of equivalence.
Discussion
Based on the findings of central discourses and available subject positions, I was able to address power relations, status hierarchies, and tensions within the school of music and arts. As for the field in general, tensions between the institutional discourses of breadth and depth and between the teacher discourses of versatility and specialisation were most prominent. The discourses in these two sets of binaries articulated the same signifiers (breadth/ depth 'for everyone' and versatility/specialisation 'teacher competence') in order to achieve hegemony. Revealing this tension between different ways of understanding the institution and the teachers led to the insight that it was difficult for the teachers to identify with subject positions constructed within binary discourses at the same time. As a consequence, their professional identities needed to be negotiated. The possible conflict between subject positions constructed within discourses in binary opposition could also lead to different views on the aim of teaching, who the school should be for, and what to prioritise. This relates to the relationship between structure and agency, where the school of music and arts field is understood as the structure and the music teachers as agents. By carrying out a discourse analysis, I was able to reveal how the field is structured by discourses, and further how subject positions are constructed within them. This led to an insight into how music teachers working in this field are conditioned by these structures, but granted agency to i) identify with various available subject positions, and ii) resist structural constraints (dominating discourses and available subject positions) to try to change the field. This is evident in how one of the teachers, Laura, explained the resistance to the system regulating quality. Visible in this statement is a conflict between the system's and Laura's view of ensuring quality through student progression. We do not know how she will handle this conflict, but she could identify with a subject position constructed within the school of music and arts as leisure activity discourse, where progression is not seen as important, or she could resist, and try to establish the school discourse as dominant in the school of music and arts field.
It would probably also be possible to reveal conflicts without using discourse analysis, but in order to understand where these conflicts are grounded, what they could lead to, as well as possible ways of handling them, a discursive approach was helpful.
In discourse theory, the fixation of meaning in comparison to what-it-is-not is centralthat is, where something is defined through its difference to something else (Laclau & Mouffe, 2001). This is also the case for identity. Mouffe (2005) claims that creation of an identity implies the formation of difference, and that difference is often constructed on the foundation of a hierarchy. Building on discourse theory when studying identity could thus lead to identifying hierarchies. My analysis revealed the subject positions to be hierarchically structured, where positions constructed within the discourses of depth and specialisation had a somewhat higher status than those constructed within the versatility and breadth discourses. This is evident in how one of the teachers talked about not seeing himself as a musician.
It [to be a musician] has higher status in music communities, […] and to be a top musician, then people start looking up to you. I feel I am not there, that I am not part of the gang. I would love to be counted in when someone is looking for freelance musicians or someone to hold a seminar. (Kristoffer) Kristoffer identified with the subject positions Music Teacher and Instrumental Teacher, but not with the subject position Musician.
The discursive structure also made it possible to discover tensions between different actors, because the analytical structure of floating signifiers and discourses makes it easier to reveal who understands what in which ways. An example of this is how I, through the analysis, found the institutional discourse of breadth to be dominant in policy documents, while the teacher discourse of specialisation was dominant in the interview material.
This is further visible in available subject positions and how professions are constructed and understood. The music teachers identified with several subject positions, but in the curriculum frameworks, School of Music and Arts Teacher was the primary subject position. This means that, within the documents, there were signs of a desire to unite teachers around common features, while the interview material, to a greater extent, supported an understanding of the school as a plurality of specialists and specialisations. This connects to collective professional identity: membership in professions. Some teachers linked their profession to their instrument, others saw music teaching as their profession, but none of them perceived of the broader school of music and arts teaching as their profession. However, School of Music and Arts Teacher is the dominating subject position in the curriculum framework, which could point towards the curriculum framework indicating school of music and arts teaching as the profession in which all teachers are members. Here, the institution is central in the profession and could provide the school and its teachers with a stronger political voice. On the other hand, perceiving music teaching as the profession means that members share a common subject, music. This would exclude teachers from other art forms than music, but could include music teachers who work outside the school.
My research also highlighted some possible pitfalls when using discourse analysis. The first pitfall regards the discursive understanding of non-discursive theories. Theories of professions contributed in explaining structures and processes connected to professions and the labour market, which is relevant and fruitful in itself. Understanding these theories as a discourse of professionalism implicated an extra layer in the theoretical framework, which complicates the analytical presentation. I also experienced the process of combining discursive and non-discursive theories challenging, and believe the greatest challenge is making the theoretical framework sufficiently explicit in order to communicate the analysis and its findings. The second pitfall regards distinguishing between the concepts of subject positions and identity. Unless subject positions as theoretical constructs are sufficiently explained, readers might easily believe that a particular informant is a music teacher, while another is an instrumental teacher. This misinterpretation would lend legitimacy to the assumption that the discourse analysis portrays a variety of teachers within the school, instead of identifying available subject positions, which each teacher can identify with (or reject). It is in enabling this latter task, showing how teachers negotiate their identities, that discourse analysis displays its merits. To be able to discover those issues, the structure of subject positions has to be clear all the way from setting up the theoretical framework to performing the analysis and interpretation. Clarity, structure and thick descriptions are crucial in order to avoid these pitfalls.
The analytical approach put forward conceives of discursive fields as built on binary oppositions. This is due to the importance of difference in constructing meaning and in identification within the theoretical framework of this research (Mouffe, 2005;Laclau & Mouffe, 2001), and the analytical model which focused on identification of floating signifiers which different discourses tried to imbue with meaning. This way of organising the analysis and findings is open to debate, of course, and it raises questions like: Why binaries?
Why cannot a third discourse attempt to imbue a floating signifier with meaning? Indeed, a third discourse could most likely attempt to imbue meaning to a floating signifier in an analysis that was differently structured. But the discursive structure did highlight not only the discourses in binary oppositions, but also their struggle over defining the institution and the teachers. And, within all those discourses, the various subject positions with which the teachers could identify were constructed, contested and negotiated.
Concluding remarks
The discursive approach of my research represents a theoretical and a methodological contribution to research within the field of music education, demonstrating how discourse analysis provides analytical tools that can open the research field and challenge takenfor-granted knowledge, discover binary discursive oppositions, and unmask power relations.
Before I elaborate on the contributions, I will point to some things this discourse analysis did not provide. It did not focus on the stories of the teachers in ways a narrative analysis would have done. Rather than honing in on each teacher's professional life story, this analysis focused on the discursive field, looking for meanings across teacher narratives. A combination of narrative and discourse analysis could have opened analytical possibilities, but also complicated the research further. One could also ask what thematic analysis would add to such a study? Identifying relevant themes across the teachers' stories would be beneficial, particularly if a comparison between different professional fields was a stated research objective. A thematic analysis structured by theory would arguably also have the potential of producing a clearer and less "messy" analysis. However, such a thematic analysis would come at the expense of discovering structures within the field and investigating how different teachers navigate them. In discourse analysis, the focus on meanings constructed in the field and the structure of the findings deriving both from theory and empirical data often make the analysis somewhat tangled. One of the methodological contributions of my research is thus its explicit embrace of complexity and "messiness". This is also evident in the construction of professional identities, where various discourses and subject positions are in a continuous struggle to fix meaning over central elements. This "messiness", I would argue, contributes to broadening a field of research that hitherto has conceived of music teacher identities as primarily centred on the teacher-musician dichotomy. My analysis also invites critical scrutiny of other dichotomies, such as that between versatility and specialisation.
Using discourse analysis to understand professional music teacher identity could not only benefit the research field, but also policy makers, the practice field and teachers in their further development of practice and negotiation of identities. The findings of my research can lead to a raised awareness about individualism versus collective values, the school as a school versus leisure activity, and music as a tool versus experience -all of which can contribute to an increase in teachers' reflections around, and development of, their practice. Both specialised and versatile teacher competences are important in the school, but teachers struggle to maintain both. My research points to other solutions than that of all teachers successfully straddling both ideals, namely i) encouraging a broad understanding of specialisation, for instance specialists in "group teaching" or "general music", and ii) emphasising both teacher and institutional collaboration (see also Jordhus-Lier, 2018). This is again relevant for higher music education institutions in their efforts to educate music teachers who are specialists, while also being flexible and equipped with knowledge about the school of music and arts. I suggest there is a need to educate "open-minded specialists" through broad-based teacher education that includes possibilities for specialisation and lays the groundwork for flexible music teachers who are able to develop their competence after graduation. Offering an education that is too broad could lead to all teachers becoming versatile while not allowing any to develop as specialists. This could make the school of music and arts less versatile, because the teachers would all have roughly the same competence. The methodological contributions for higher music education could, however, just as much be related to function as an inspiration not only for music education researchers, but also for music teacher educators in their discussion of professional identities with students.
In this respect, my analytical framework offers a lens through which to look at teachers, or oneself, within the practice field. In doing so, the objective should not be to identify the same discourses and subject positions in other settings, but for teachers, students and researchers to use the structure of nodal points, floating signifiers, signifiers, subject positions and discourses to discover new struggles and new knowledge -which can build on the findings of my research and further develop the music education research field and the field of practice.
Author biography
Anne Jordhus-Lier is associate professor of music at Inland Norway University of Applied Sciences. She is currently also a researcher within the project The social dynamics of musical upbringing and schooling in the Norwegian welfare state, DYNAMUS (funded by the Research Council of Norway). Jordhus-Lier is educated as a music teacher and flutist, and has worked in compulsory schools and schools of music and performing arts for several years. She has recently published in the journals Music Education Research and International Journal of Music Education on schools of music and performing arts. Her research interests also include music teacher identity, professionalism, discourse theory and the sociology of music education. | 2021-05-21T16:57:35.668Z | 2021-04-06T00:00:00.000 | {
"year": 2021,
"sha1": "24c512f214e8658f61b4ac1fa720d9a26c0e55b0",
"oa_license": "CCBY",
"oa_url": "https://nrme.no/index.php/nrme/article/download/3025/5255",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "87504b9035902e1d7923e1a940f573c139e37ad1",
"s2fieldsofstudy": [
"Education",
"Linguistics"
],
"extfieldsofstudy": [
"Sociology"
]
} |
218776868 | pes2o/s2orc | v3-fos-license | Mapping process safety: A retrospective scientometric analysis of three process safety related journals (1999–2018)
Abstract Over the last decades, process safety has been an important area of academic inquiry, aiming to build knowledge which can contribute to reduce the occurrence of industrial accidents in the process and chemical industries, or to mitigate their consequences. Knowledge in this interdisciplinary research domain is created using applied science, engineering, organizational, and social science approaches. This article provides a retrospective overview of the process safety research field, through the lens of three major journals contributing to the development of this knowledge domain: Journal of Loss Prevention in the Process Industries, Process Safety and Environmental Protection, and Process Safety Progress. An analysis of the articles in these journals, published between 1999 and 2018, provides insights in the structure, developments, trends, and highly influential works in this research domain, while revealing differences and similarities between these three core process safety journals. General publication trends, the geographic distribution of leading knowledge producers (countries/regions and institutions), their collaboration and temporal evolution patterns, topic clusters and emerging trends, and highly cited sources and articles, are identified and discussed.
Introduction
The chemical and process industries, like many other human activities, have been affected by large-scale accidents. Due to the presence of hazardous substances, these accidents often result in very high casualty rates and significant environmental and economic consequences (Kletz, 2009). Consequently, process safety and loss prevention has been an active area of industrial, regulatory, and academic work.
The first issue of the Journal of Loss Prevention in the Process Industries was published in 1988. Over the subsequent decades, it has become one of the leading outlets for the communication of knowledge about process-related injuries and damages, with a focus on chemical and process plant safety. It publishes applied research based on the physics and engineering of fires, explosions, and toxic releases, aimed at preventing losses. As the practice of loss prevention is highly interdisciplinary, the journal also addresses the related social, policy, and organizational aspects, including incident investigation, process safety and risk management, process safety culture, human and organizational factors, process security risk assessment and management, process safety education and training, and process safety decision-making and economic issues.
The primary aim of this article is to present a retrospective overview of the process safety research domain. Such retrospective analyses have recently been made for other leading journals in other areas of academic activity, for instance for operations research (Laengle et al., 2017), safety science , and transportation research . Such high-level overviews are of interest to academics and practitioners working in the topic domain, contextualize their own work, and serve to obtain insights in important knowledge domains and emerging trends. Retrospective overviews can also be instrumental to early career academics, to identify the main authors and highly-cited articles in a research domain, which can expedite their familiarization with a research domain (Li et al., 2020). Especially when comparing several journals, comparative overviews can furthermore be instrumental for prospective authors to select a suitable journal to which to submit their work (Li and Hale, 2016).
Several literature review articles have been published on particular topics within process safety and loss prevention, to provide detailed insights in the progress and knowledge gaps of specific research topics, for instance concerning liquefied natural gas risk analysis (Animah and Shafiee, 2019) emergency evacuation in chemicals-concentrated areas (Dou et al., 2019), inherent process safety indicators (Jafari et al., 2018), process safety education (Mkpat et al., 2018), boiling liquid expanding vapor explosions (BLEVEs) (Eckhoff, 2014), and risk assessment methods at work sites (Marhavilas et al., 2011).
Notwithstanding the great value of such narrative reviews to the development of specific research domains, the methods applied in classical review articles are not well-suited for providing a high-level overview of a journal as a knowledge carrier, primarily because of the large numbers of published articles, their very labor-intensive nature of classical review methods (Grant and Booth, 2009). Classical review methods are also limited as they do not easily translate to a visual representation of a research domain (Li et al., 2020), whereas visualizations are important for guiding human cognition, interpretation, and memory (Simoff et al., 2008).
Scientometric methods present a suitable alternative approach to obtain high-level insights in a research domain. By applying mathematical methods to quantitative metrics and information about journal articles, patterns, developments, and trends can be readily visualized and patterns, developments, and trends conveniently identified and interpreted (Li et al., 2020). Consequently, scientometric analyses have been performed to obtain high-level overviews of the structure and patterns of journals (Laengle et al., 2017;Merig� o et al., 2019;Modak et al., 2019). The techniques have also been used to analyze broad knowledge domains relevant to process safety, for instance domino effects in the process industries and pool fires (Liu et al., 2019).
Considering the above, the primary aim of this article is to present a retrospective overview of the process safety research domain, identifying publication trends, highly impactful contributions, authors, and geographic regions, dominant clusters of research, emerging research topics, and knowledge exchange patterns between leading academic journals. Three highly impactful journals in the process safety research domain are selected as a basis for this analysis: Journal of Loss Prevention in the Process Industries (JLPPI), Process Safety and Environmental Protection (PSEP), and Process Safety Progress (PSP). These are selected to allow broad insights in the structure, main themes, and influential works in the process safety domain, and to enable a comparative analysis of the development patterns and trends of these leading journals. To allow a retrospective comparative analysis which allows insights in recent developments, considering that these journals have different years of publication of their first issue, a 20-year period of analysis is chosen: from 1999 up to and including 2018.
The journals are selected based on the experience of especially the third author with these journals and supported by their high journal impact factor. Journals impact factor (JIF) represent the average number of citations to a journal in the past two years (Beatty et al., 2012), and it has become an influential and wildly used indicator to measure the quality and influence of a journal in a research domain. The annual trends of the JIF of these journals are displayed in Fig. 1, covering the period 1999-2018. A descriptive statistical analysis of the JIFs using boxplots is shown as well, giving further insights. The boxplots contain information about the medians and lower 25% and 75% quantiles using the boxes and show outliers and minima and maxima values using the whiskers extending from the central boxes. It is seen that in the period from 1999-to 2008, the three journals have quite similar JIFs, whereas after 2008 the JIF gap between each journal becomes increasingly obvious. The JIF of PSEP increased rapidly, reaching 4.384 in 2018. The figure illustrates that PSEP has become the most influential process safety journal in terms of JIF. JLPPI has seen a gradually increasing JIF since about 2008, whereas the JIF of PSP has remained relatively stable or at least has only slowly increased over this time period. By exploring topic clusters and areas of recent high research activity using scientometric methods, some underlying reasons underlying these different evolutions in JIF scores can be explored.
The remainder of this article is organized as follows. In Section 2, the data source and data extraction process are described, and a brief overview is given of the applied scientometric methods and analysis process. The results of the scientometric analysis are presented and discussed in Section 3. Overall publication trends, geographic distribution of and collaborations between main contributing countries/regions and institutions, topic clusters and their temporal evolution, knowledge communication between journals, and highly influential articles, are identified and interpreted. Section 4 concludes.
Data source
The bibliographic data of the three selected process safety journals were retrieved from Web of Science Core Collection (WOSCC). In the 'Advanced search' interface of WOSCC, a search strategy based on the publication name (source, SO) was applied: SO ¼ (Journal of Loss Prevention in The Process Industries), for the sub-dataset 'SCI-EXPANDED', and with the timespan set from 1999 to 2018. Article, review, and proceedings papers are more the important document types as carriers of scientific knowledge, with these three types accounting for 85% of the contributions in each analyzed journal. In the current work, these three types are selected as the final sample data for a depth analysis. Articles in the other analyzed journals are obtained in a similar way, by changing the journal title (i.e. SO ¼ (Process Safety Progress) and SO ¼ (Process Safety and Environmental Protection)). The detailed data retrieval Fig. 1. Annual trends of Journal impact factor of JLPPI, PSEP and PSP,JIF-data obtained from journal citation report. strategy and its results are show in Fig. 2. The data extraction was performed on 10-9-2019. Detailed summary information of each journal is listed in Table 1. There are 2405 records from JLPPI, which ranks first among these journals, followed by PSEP (n ¼ 1889), and PSP (n ¼ 1001). Both JLPPI and PSEP are published in England and release 6 vol per year (see Fig. 3).
A note is in place on the selection of the data sources. As stated in the introduction, the focus of the current research is to provide a retrospective overview of the process safety research domain. This is performed through three core process safety journals. A direct search of keywords related to process safety in WOSCC was attempted, but as this leads to a very high number of irrelevant articles (due to words such as 'safety' and 'process' being very generic), the choice is made to focus on the journals JLPPI, PSEP, and PSP. Other journals which also publish on closely related topics, such as Reliability Engineering and System Safety, Safety Science, and Journal of Hazardous Materials, are not accounted for. This is either because they include a lot of work on hazards and safety in other industrial domains, or because they focus on quite specific aspects of process safety. Based on the analysis results of the intellectual basis of the selected journals (Section 3.4), it is found that the selected journals are closely related to each other and separated from other journals. This supports the restriction to the analysis of the three selected journals.
Methods and analysis process
In the current work, bibliometric analysis methods are applied, and the bibliometric mapping tool VOSviewer is used to visually represent the scientometric analysis results, facilitating visual interpretations. Bibliometric analysis methods originate from information and library sciences, and can be characterized as "the application of mathematics and statistical methods to books and other media of communication" (Mingers and Leydesdorff, 2015). With the advent of the data science age, bibliometric methods have been combined with network analysis and data visualization techniques, which lead to the new research area known as 'bibliometric/scientometric mapping'. This research domain focuses on developing quantitative methods based on mathematical analyses and statistics, and tools for visually representing scientific literature based on bibliographic data.
Recently, bibliometric mapping analysis has become of interest not only inside the scientific communities of information and library sciences, but also in other scientific communities. There currently are more than 30 free used tools already developed for bibliometric mapping (Li, 2017), with VOSviewer being one of the most widely used tools among these. VOSviewer is short for 'Visualization of Similarity', which was developed by van Eck and . The tool has serval functions for bibliometric mapping, including collaboration analysis (e. g. authors, institutions, and countries/regions), topics analysis (e.g. keywords or terms), and citations-based analysis (e.g. bibliographic coupling and co-citations). The reader is referred to (Li et al., 2020)for an overview of the main concepts underlying these analyses.
Several papers have already applied VOSviewer for bibliometric mapping in safety related topics, e.g. output distributions and topic maps of safety related journals (Li and Hale, 2016), safety journals identification (Li and Hale, 2015), safety culture (van Nunen et al., 2018), construction safety (Akram et al., 2019;Jin et al., 2019), process safety (Amin et al., 2019;Yang et al., 2020), domino effects , laboratory safety in universities , and road safety research (Zou et al., 2018). (Li et al., 2020)provide a more comprehensive overview of bibliometric analyses on safety related topics.
In bibliometric analyses, scientific journals are considered as knowledge carriers, whereas publications are understood as knowledge units, which focus on a particular topic and are connected to the literature. The analysis of a specific journal or a group of journals is helpful to understand the evolution of the structure, main themes, and influential works in the research area. Several articles have performed bibliometric analyses on specific journals, e.g. Transportation Research journals , The Journal of Mechanism and Machine Theory (Flores, 2019), Computers & Industrial Engineering (Cancino et al., 2017), and European Journal of Operational Research (Laengle et al., 2017). In the safety research area, Li et al. (2013) have made a preliminary knowledge map of safety science based on the Safety Science journal. More recently, Merig� o et al. (2019) have used various bibliometric methods to analyze forty years of publications in Safety Science, including publications trends, leading producers (authors, institutions, and countries/regions), and highly-cited papers and references.
Some scientometric analyses which focus on the development of journals as knowledge carriers in a given research domain focus on only one journal, e.g. Cancino et al. (2017 and Flores (2019). Other analyses focus on identifying differences between journals focusing on closely related topics, e.g. Li and Hale (2016). The present work aligns with the latter approach, focusing on identifying similarities and differences between the developments of three core process related journals. In the current research, three journals focusing on process safety (i.e. JLPPI, PSEP, and PSP) are selected for analysis, as outlined in the introduction.
A flowchart describing the analysis process for these process safety related journals is shown in Fig. 4. The general workflow of scientometrics mapping research includes data retrieval, pre-processing including data cleaning and disambiguation (harmonizing data fields where different records may e.g. use a different abbreviation of a name), network extraction, normalization, mapping, analysis, visualization, interpretation by an analyst to obtain some insights from the results, see (Li et al., 2020) for a description of these steps. In the current work, four main analyses are performed: publication trends, geographic distribution of leading producers, terms co-occurrence clusters, and intellectual base analysis. These are briefly described next:
Publication trends
The publication trend in terms of the number of published papers is a quantitative indicator for the scientific activity and attention in a certain domain. Fig. 5 and Table 2 show the annual number of articles published in each of the three journals, where in Fig. 5 the horizontal axis shows the publication year, and the vertical axis the number of papers has published in each year. The results indicate an increasing trend for each journal, and especially for JLPPI and PSEP. This increase in annual output clearly shows the growth of the scientific production in process safety domain.
The marked increase in output of JLPPI began in 2009, when the annual number of published articles exceeded 100 papers, reaching over 150 papers per year from 2013 onwards. The annual output of PSEP shows a very significant change during the considered time period. There are about 50 papers per year before 2014, with 2014 marking a very sharp increasing trend, suddenly jumping to 98 in that year, and reaching 343 papers in 2018. Apart from a growing interest and activity in process safety research, this indicates that PSEP has changed the editorial policies, leading to a higher volume of papers being submitted and/or accepted in the journal. Compared to JLPPI and PSEP, the output of PSP has changed less significantly during the observed timespan, with a slowly increasing trend and an average output of nearly 50 papers per year, with the lowest standard deviation among the three journals. The cumulative number of publications of JLPPI (R 2 ¼ 0.9399) and PSEP (R 2 ¼ 0.9339) follow an exponential growth, whereas PSP is better characterized by a linear growth model (R 2 ¼ 0.9905).
Geographic distribution of and collaboration between leading knowledge producers
In this Section, the geographic distribution of the published articles during the period 1999 to 2018 in the three analyzed process safety journals is analyzed, taking countries/regions and institutions as levels of analysis. Collaboration networks and temporal evolutions are identified as well. Fig. 6 shows the countries/regions collaboration network in process (1) Publication trends of process safety: the annual outputs of process safety publications in the analyzed journals are shown and selected descriptive statistics are analyzed. This result provides a high-level overview of the research activity of process safety in the 20 years from 1999 to 2018. (2) Leading institutions and countries/regions in the collaboration network: the highly productive institutions and countries/regions are analyzed to show where the key knowledge producers originate from, what collaborations between these exist, and how these have evolved over time. This analysis is based on the visualization of similarities approach by van Eck and Waltman (2010), using VOSviewer. (3) Terms co-occurrence clusters: terms are noun phrases, which are extracted from titles and abstracts of the 5295 papers using a text mining and clustering algorithm described in van Eck et al. (2010a). Terms are labeled as 'co-occurring' if they appear together in the same paper. The terms co-occurrence network is clustered based on the co-occurrence strength using network clustering method in VOSviewer. (4) Intellectual base analysis: cited articles can be seen as the intellectual base of a research field, on which future knowledge seeking activities build (Persson, 1994). In the present work, a two-level intellectual base analysis is performed (cited sources and cited articles) is analyzed, using journals and references co-citation analysis as implemented in VOSviewer.
safety research, where the size of the nodes and labels are proportionate with the number of occurrences of a country/region. Three large groups are identified in the collaboration network, based on the collaboration strength of these countries/regions, see Fig. 6(a). The average publication year of each country/region is shown in Fig. 6(b). Table 3 lists the top 10 most productive countries/regions in international process safety research, corresponding to the network in Fig. 6. The results of Fig. 6(a) indicate that USA is the most productive country in process safety research, with 1230 papers, amounting to 23.23% of the total. The USA is followed by China (857,16.19%) and the United Kingdom (583, 11.01%). In this figure, the countries/regions in the same cluster are more closely connected in the process safety research. For instance, the Netherlands, the United Kingdom, Italy, Spain, and Germany form a cluster of European countries. As shown in Fig. 6(b), the average publication year of the countries/regions show that People R China, Iran, Malaysia, and Brazil currently are active Notes: NP¼ Number of publications | CNP¼Cumulative number of publications | % sum ¼ Number of publications in the year/total number of publications | %cum ¼ Cumulative number of publications/total number of publications | Stdev ¼ standard deviation.
countries/regions in process safety research. In terms of the average number of citations, Table 3 indicates that contributions by Canada have the highest average impact, followed by India and Italy. The average citation rate of countries with higher productivity, such as USA and Peoples R China, is comparatively lower. The process safety research was driven by the development of the chemical industry, and by high-profile industrial accidents in developed countries, such as the Flixborough disaster (UK, 1974) and the Seveso disaster (Italy, 1976). Developed countries such as the United Kingdom, USA, Germany, Italy, and Japan have had a mature chemical industry already for decades. Research on process safety in these countries has a longer history, as seen in the earlier average publication year of these.
On the other hand, People R China, Iran, Malaysia, Brazil and India are developing countries where the chemical industry and the process industries have been growing fast in recent years. It appears plausible that safety considerations have increasingly become more important in these developing chemical and process industries, and that process safety research has attracted more attention, and obtained more financial support, in these countries/regions.
International collaboration is a good way to transfer knowledge and expertise between different countries/regions. The developing countries/regions can learn process safety methods and techniques, and gain knowledge about technological, social, and organizational advances from developed countries/regions to improve the safety status of their process industries. Apart from collaborations between developed countries such as the United Kingdom, USA, Canada, Italy, and Germany, there are also emerging international collaboration networks between developed countries/regions and developing countries/regions, e.g. between USA, Canada, the Netherlands, United Kingdom, and People R China, and between USA and Canada, and Brazil. Fig. 7 shows the collaboration network between the key institutions' contribution to process safety research. Fig. 7(a) show the clusters of the institutions in the research domain, whereas Fig. 7(b) provides insights in the average publication year of each institution. Table 4 lists the top 10 most productive institutions in process safety research.
According to the results of Table 4, Texas A&M Univ has published 225 papers, amounting to 4.25% of the global total, thereby ranking first place in the international process safety research. It is followed by Mem Univ Newfoundland (120, 2.27%) and China Univ Min & Technol (75, 1.42%). Texas A&M Univ is the leading institution in process safety research and has a research center dedicated to process safety research: the Mary Kay O'Connor Process Safety Center. It hosted or hosts some of the outstanding researchers in process safety, including Sam Mannan, Hans Pasman, and William Rogers. Mem Univ Newfoundland from Canada also famous in process safety, with Faisal Khan being the group's key contributor. In terms of the average number of citations, the results indicate that contributions by Dalhousie Univ and Mem Univ Newfoundland by far have the highest average impact, with leading academics Paul Amyotte and Faisal Khan leading the process safety research efforts in these institutions.
As evident from Table 4 and Fig. 7(b), Chinese institutions have increasingly paid more attention to process safety research, with 4 universities from mainland China ranking among the top 10 most productive institutions. It is furthermore seen that there are also some companies which provide significant contributions to process safety research, e.g. Gexcon, Baker Engn & Risk Consultants Inc, Dow Chem Co USA, and Air Prod & Chem Inc. The average publication year shows that the institutions from mainland China (e.g. China Univ Min & Technol, China Univ Petr, Nanjing Tech Univ, Henan Polytech Univ, Beijing Inst Technol, Chinese Acad Sci, and Dalian Univ Technol) and from Europe (KU Leuven, Univ Antwerp, Delft Univ Technol) are recently active in the research field.
The network also shows that institutions from the same country/ region commonly have closer collaboration relations than institutions in different regions. For instance, Chinese institutions are mainly located at the top end of the collaboration network figures; whereas institutions from the USA are located in the center of the network, and institutions from Canada and Europe are found on the right bottom of the network. In the network, Mem Univ Newfoundland and Dalhousie Univ; and Delft Univ Technol, KU Leuven and Univ Antwerp have a significantly higher collaboration strength compared with other institutions. The latter is the case due to the leading academic Genserik Reniers being simultaneously affiliated with the three universities.
Terms co-occurrence analysis
Noun phrase in the titles and abstracts of papers from three process safety related journals are extracted based on the automatic term identification method by van Eck et al. (2010b). A terms co-occurrence network is created based on terms which occur at least ten times in the complete dataset. Finally, 1309 terms are extracted using this threshold of the terms' frequency.
The terms co-occurrence network of the combined dataset of the three target process safety journals (JLPPI, PSEP, and PSP) is shown in Fig. 8. The colors of each node indicate the different clusters to which the terms belong, whereas the node and label sizes are proportional to the terms' occurrence frequencies. A term is assigned to only one cluster (the one to which it links most strongly), but can also have strong links to other clusters, in which case it will usually be located closely to the other cluster. Take for instance the term 'experiment', which belongs to the blue cluster (Cluster #3) and links strongly to very many terms within that cluster (e.g. 'dust', 'ignition', 'mixture', 'air', etc.). This terms also links strongly to the green cluster (Cluster #2), and hence is located closely to it, linking with green-colored terms such as 'temperature', 'reaction', 'flow rate', etc. In Fig. 8, no axes are shown as the visualization applies a normalized distance between the terms, where terms located closer to one another are generally more closely related, see van Eck et al. (2010b).
Three large term clusters are identified, and the authors have given each cluster a name to provide a narrative interpretation of what are the high-level focus areas within the process safety domain. This is done based on the terms inside the clusters, and involves some subjectivity based on the authors' knowledge and experience of the research domain. In this interpretation, mainly the terms with a higher occurrence frequency are given more weight, while in the choice of a cluster name the authors also aimed to formulate a label which also covers the less frequently occurring terms. It is also worth noting that because the terms are extracted from throughout the articles' text, the clusters should be understood broadly as narrative patterns rather than research topics specifically. Hence, it is possible that some terms in the clusters are not associated with research results per se, but rather with a discussion on the need for those.
Cluster #1 is given the name "Process safety risk management" and includes 525 terms. Cluster #2 is labeled as "Chemical process safety" and includes 416 terms. Cluster #3 concerns "Fire and explosion process safety" and includes 368 terms. As seen in the top-left image in Fig. 8, Cluster #2 and cluster #3 are located close to each, which reflects that the work in these is more closely linked than the work in cluster #1. Generally, A brief interpretation of three clusters is presented below.
■ Cluster #1 Process safety risk management This cluster, shown in red in Fig. 8, is concerned with the occurrence of incidents and accidents on the level of an integrated system, and on interdisciplinary management-level approaches to prevent losses. It focuses on the management of risks and safety, for which activities such as accident investigation, maintenance, safety management, risk management, and inspections, are in focus. Specific focus topics include process hazard analysis, protection analysis, and consequence analysis, where event occurrences (e.g. causes, failures, operator and maintenance related issues) and the associated consequences (injuries, accidents) are analyzed and considered in a decision-making context. Case studies are an important focal point, and methods such as HAZOP, fault trees, Bayesian networks, and analytic hierarch process are used for this purpose. Quantitative analyses of risk constitute an important narrative in this cluster. This cluster is strongly interdisciplinary, and includes knowledge from natural sciences, engineering, and social and organizational sciences.
■ Cluster #2 Chemical process safety
This cluster, shown in green in Fig. 8, is concerned with the safety of chemical processes, and has a more disciplinary focus compared to Cluster #1, with a more applied science and chemical engineering character. Experiments and studies on process parameter settings and optimization appear to be the focus of this cluster, aiming to build and discuss knowledge related to the safety of chemical processes. The cluster includes terms from chemistry such as solution, adsorption, reaction, degradation, and process-related terms such as temperature, concentration, equilibrium, flow rate, pH-value, and catalyst. Various chemical products and elements appear in the term cluster, for instance H2O2, TiO2, H2S, NH3, iron, and nickel. This cluster has linkages to the "Process safety risk management" cluster but is more closely linked to the "Fire and explosion process safety" cluster, especially through terms related to experimentation and experimental conditions.
■ Cluster #3 Fire and explosion process safety
This cluster, shown in blue in Fig. 8, focuses on the fire and explosion related safety of process safety. As Cluster #2, it has a more disciplinary focus than Cluster #1, and appears to have a more applied science and engineering focus. Experiments and studies about the conditions under which various types of fires and explosions occur are important narratives in this cluster, to build and discuss knowledge about safe conditions of process operation and about the consequences in case fires and explosions occur. The cluster includes terms focusing on the type of phenomenon under study, for instance flame, deflagration, dust explosion, vapor cloud explosion, detonation wave, blast wave, and BLEVE (boiling liquid expanding vapor explosion). Different substances or products are the focus of investigations, including dust, methane mixtures, ethylene, and propylene. Experiments are an important focus point in this cluster, but also modeling work and numerical studies are included. Issues such as ignition, cloud and air mixtures, boundary conditions, flammability limits, overpressure, and physical layout of chambers, pipes, and walls, are the key topics, which are important both in experimental and modeling contexts. This cluster has linkages to the "Process safety risk management" cluster but is more closely linked to the "Chemical process safety" cluster, especially through terms related to experimentation and experimental conditions. Terms overlay maps of the process safety research domain are shown in Fig. 9, giving insights in the temporal evolution of the research domain, the topics with high research impact, and the focus topics of each of the three target process safety journals. Fig. 9 has the same structure as Fig. 8 and provides further insights in the developments and impacts of different topics within the three clusters identified above.
The temporal evolution of scientific attention to topics within process safety is shown in Fig. 9(b). An overlay of the term co-occurrence map is applied, showing the average year in which the term occurs. A blue color represents older terms, with 2010 the average year of use. A red color denotes 2015 as the average year of use, i.e. more recent. This overlay clearly shows there is ongoing activity in all three clusters identified in Fig. 8. Overall, cluster #2 "chemical process safety" shows most recent activity, with terms such as adsorption process, aqueous solution, wastewater treatment, degradation, and optimization recent focus issues. Activity in cluster #3 "Fire and explosion process safety" shows a mixture of recent and older activity, with specific recent focus topics including explosion overpressure, flame propagation velocity, and methane air mixture. Cluster #1 "Process safety risk management" contains comparatively more older topics, with an initial focus on incidents, incident investigation, and safety management. Contemporary research frontiers in this cluster focus on Bayesian Networks and fault detection. Naturally, focusing only on the average year in which a term is used may give a somewhat distorted view, especially if there are terms which have a large standard deviation (i.e. topics which were in focus early on and continue to be in focus to the present). Nevertheless, in scientometric analyses, it is common to use the average value as an indicator to compare the temporal evolution of activity on different topic area, see e.g. Li et al. (2020). Fig. 9(c) shows a terms co-occurrence map with an overlay of the average citations of the papers in which the terms occurred, providing insights in which research issues have attracted significant attention and are influential in the further development of the field. It shows that terms in cluster #2 "Chemical process safety" overall have higher citation rates, i.e. that it is not only a more recent area of research activity, but also relatively more impactful. Hence, it can be concluded that cluster #2 is the current hot research domain within process safety. Cluster #3 "Fire and explosion process safety" contains few very impactful research topics, but terms related to dust explosions are comparatively more impactful than other fire or explosion related topics. Within cluster #1 "Process safety risk management", probabilistic approaches to accident risk assessment and consequence analysis appear more impactful than safety management and incident investigation, with especially methodologically focused topics such as fault trees and Bayesian Networks being highly influential.
In the current research, the data from three process safety journals is collected and the combined terms co-occurrence map is shown in Fig. 9 (a). It is also instrumental to obtain insights in focus topic areas of each journal compared to the other ones. This is analyzed in Fig. 9(d-f), where an overlay is applied indicating the relative occurrence rate of a term in each journal compared to the overall occurrence rate in the Fig. 9. Terms co-occurrence clusters of the three target Process Safety journals: temporal evolution, impact, and journal focus topics, based on publications in JLPPI, PSEP, and PSP in the period 1999-2018. complete dataset. In these figures, when a term's node color is close to red, this means that the term has a high occurrence percentage in the target journal. For example, the total occurrence rate of the term 'experiment' (located in cluster #2) is 693 across the three journals. Of these, 53% is contributed by JLPPI, 40% by PSEP, and 7% by PSP. Hence, in the figures red areas signify topics where a journal has an important contribution to the process safety research areas, whereas for blue areas, the journal has little or no contribution.
It is seen from Fig. 9(e) that JLPPI has a significant contribution to all topic clusters, especially to cluster #1 "Process safety risk management" and cluster #2 "Fire and explosion process safety". Considering also the temporal evolution and impact overlay maps of Fig. 9(b) and (c), JLPPI however has almost no contribution to the currently important research topics related to chemical process safety. The analysis shows that PSEP has more focus on cluster #3 "Chemical process safety" compared to the JLPPI and PSP, and that this focus on impactful topics of contemporary importance is an important contributing factor to the rapidly increasing journal impact factor as found in Fig. 1. PSEP also has important contributions to the other clusters. Finally, PSP has a less diversified area of research activity, with most of its publications located in cluster #1 "Process safety risk management", and to a lesser extent in cluster #2 "Fire and explosion process safety". Moreover, the highest share of contribution within the process safety research domain is on topics with less contemporary attention or impact. These topics are located on the right end side of cluster #1, and concern incident analysis and safety management. While PSP also contains a relatively important share of the research in cluster #2 "Fire and explosion process safety", which is more impactful and of recent interest, this does not suffice to lead to a significantly growing journal impact factors, as found in Fig. 1. This analysis clearly shows that different leading journals within process safety, which in principle address similar topics within their journal scope, in fact do have markedly different research focus areas and associated impact. Such information can be useful especially for journal editors and editorial boards to position their journal within the research domain. It can also be useful for prospective authors to select a suitable journal for their work, by seeing which journal best aligns with the focus topic of their work, and how active journals are on the topic.
Highly cited sources
Highly cited sources are journals, books or other media that are frequently cited in process safety research, where journals are the main source being cited in scientific papers. Highly cited sources reflect the main knowledge carriers that support the process safety research and can be regarded as its intellectual base. In this Section, highly cited sources are obtained from the reference lists of the 5295 papers. A total of 36,745 unique sources are extracted, of which sources with more than 500 citations are selected as the target for constructing the journals cocitation network.
The sources co-citation network of process safety is shown in Fig. 10, where the nodes and label sizes are used to show the number of citations of a source. In the network, the three target process safety journals are marked with a circle, and arrows indicate the citations to the target journals from other journals.
Four clusters of sources are identified based on the sources' cocitation strength: hazard & environment, chemical, combustion and fire, and process and system safety. The highly cited sources in each cluster are listed in Table 5. According to Fig. 10 and Table 5, JLPPI with 8192 citations, is the most cited journal in process safety research based on number of citations in the period 1999-2018, followed by J Hazard Mater, PSEP, PSP and Reliab Eng Syst Safe. Thus, three of the top 5 highly cited journals within process safety research are the journals in Fig. 10. Highly cited sources in the three journals JLPPI, PSEP, and PSP, 24 journals with more than 500 citations extracted from the references lists of these journals. focus in this work, indicating that these are well selected as a basis for providing insights in the process safety research domain. The results also show that J Hazard Mater and Reliab Eng Syst Safe have transferred more knowledge to the process safety research than other journals. The blue cluster in Fig. 10, labeled "process and system safety" can be regarded as the core group in process safety research, and it contains the three selected process safety journals JLPPI, PSEP, and PSP. The strengths of the co-citation links of these journals show that these core process safety journals strongly interact with Safety Science and Reliability Engineering & System Safety. The cited sources can be regarded as the intellectual base in process safety research.
The top 10 highly cited sources of each target process safety journals are shown in Fig. 11. This is used to show the intellectual base of sample journals in process safety research. It is evident that the most cited sources of each journal are the journal itself, except for PSEP. J Hazard Mater is the most cited journals in PSEP papers, which means that J Hazard Mater is the key knowledge source to support research published in PSEP. JLPPI, PSEP and PSP appeared in the top 10 cited sources in JLPPI and PSP, while PSP was not listed among the top 10 cited sources in PSEP. This means that the papers published in PSP cited more papers from PSEP, but that this relationship is not reciprocal. In JLPPI, Fuel and Int J Hydrogen Energ are listed in the top 10 cited journals, but these are not listed the top 10 in PSEP and PSP. PSEP includes six journals which are listed only in its highly cited sources list, i.e. those six journals are not significant knowledge contributors to JLPPI and PSP. These differences in journals as intellectual bases for the three target process safety journals confirms the findings of Section 3.3 and Fig. 8 that the three journals have different focus domains in the topic clusters within the process safety research domain.
Highly cited references
Highly cited works, here defined as publications with minimum 20 citations received from articles within the dataset of articles published in the three target process safety journals, can be considered as the intellectual basis of process safety. Using these criteria, a total of 127 highly cited references are obtained and identified from the 101,599 references listed by all articles in the complete dataset. A co-citation network of these highly cited references is constructed and shown in Fig. 12. Here, each node represents a reference, where the size is proportional to the number of citations received from the publications in JLPPI, PSEP, and PSP in the period 1999-2018. The main label here shows the first author and publication year of a publication, where the sublabel signifies the name of the journal or book. The links between the nodes represent the co-citation relations between these highly cited references. The width of the links gives an indication of the co-citation strength between these papers. The colors show different groups of these references, with clusters based on the co-citation strength of these references, using the algorithm by Waltman et al. (2010). The top-5 most highly cited articles of each cluster included in this co-citation network are listed in Table 6. Fig. 13 shows an overlap mapping of the highly cited reference clusters, indicating the annual number of citations to these publications from the dataset obtained from the three target process safety journals.
This analysis indicates that there are five clusters in the highly cited references of the three target process safety journals. Cluster #A, marked in red in Fig. 12, contains significant works addressing dust explosions, and is labeled as "Dust explosions". The most cited work in this cluster is the book by Eckhoff (2003) on the identification, assessment, and control of dust explosion hazard, which focuses on the activities, testing methods, and designs for safe operation, as well as insights in the different physical phases of dust explosions. Significant articles include the study of pressure generation mechanisms in vented explosions by Fig. 11. Top 10 high cited sources in each of the three process safety journals, CSJLPPI ¼ Cited Sources in JLPPI | CSPSEP ¼ Cited Sources in PSEP | CSPSP ¼ Cited Sources in PSP | CS3J ¼ Cited Sources in 3 journals. Cooper et al. (1986), the work on coal dust explosibility by (Cashdollar, 1996), the work on flame thickness in dust explosions by Dahoe et al. (1996), and the study of cork dust explosibility in methane/air mixtures by Pilao et al. (2006). Significant overview or review articles include the gas explosion handbook by Bjerketvedt et al. (1997), the work on dust explosibility characteristics by (Cashdollar, 2000), and the overview of cases, causes, consequences, and control of dust explosions by Abbasi and Abbasi (2007). Other impactful work in this cluster concerns the CFD simulation of gas dispersion near obstacles by Tauseef et al. (2011), and the work by Amyotte et al. (2009) linking dust explosions with inherent safety principles. This cluster is closely related to cluster #3 "Fire and explosion process safety" in Section 3.3 and Fig. 7.
Cluster #B in Fig. 12, marked in green, is labeled "Process safety and risk analysis methods", as it primarily concerns techniques and modeling approaches for analyzing the risks and safety in chemical and process industries. The most influential works in this cluster are the books by Center for Chemical Process Safety (CCPS) on layer of protection analysis (CCPS, 2001) and the guidelines for hazard evaluation procedures (CCPS, 2008). Another influential book is the book by Reason (1997) on the management of risks of organizational accidents, where the 'Swiss Cheese' model of organizational accidents is outlined, and the book by (Rausand and Høyland, 2003) on system reliability theory. Early reviews on techniques and methodologies for risk analysis in chemical process industries and industrial plants by (Khan and Abbasi, 1998) and (Tixier et al., 2002) are influential. Impactful original research is the work by Zadeh (1965) on fuzzy sets, the application of Bayesian theory to the estimation of failure probabilities by (Meel and Seider, 2006), the comparison of fault trees and Bayesian networks for process safety analysis by Khakzad et al. (2011), and the method for mapping bow-tie analysis in a Bayesian network by (Khakzad et al., 2013). This cluster is closely related to cluster #1 "Process safety risk management" in Section 3.3 and Fig. 8.
Cluster #C in Fig. 12, marked in blue, is labeled "Loss prevention and domino effects". It contains a mix of comprehensive overview publications focusing on major accident hazards, and more specific original contributions on modeling approaches and strategies for analyzing and managing the risks of domino accidents. Important compendia works in this cluster include the book on explosion hazards by (Baker et al., 1983), the guidelines on vapor cloud explosions, flash fires and BLEVEs by CCPS (1994), the standard work on loss prevention in the process industries by Lees (1996), and the guidelines on chemical process quantitative risk analysis by CCPS (2000). The research by (Khan and Abbasi, 1999)on the common causes and consequences of a number of major accidents in the process industries which occurred during the 20th century is also highly influential in this cluster. The most significant original research contributions in this cluster concerns the work by Valerio Cozzani and his collaborators related to domino effects. In (Cozzani and Salzano, 2004), probit models are derived for domino effects caused by overpressure (Cozzani et al., 2005), present a procedure and software package for quantitative risk assessment of domino effects, and (Cozzani et al., 2007) links domino effects with inherently safe design. This cluster is closely related to cluster #1 "Process safety risk management" in Section 3.3 and Fig. 8, but also refers to knowledge of cluster #3 "Fire and explosion process safety".
Cluster #D in Fig. 12, marked in yellow, is labeled "Inherent safety". It contains a handbook for inherently safe designs by Kletz (1998), which has a second edition authored by (Kletz and Amyotte, 2010). Early influential works in this cluster include the work by Edwards and Lawrence (1993) on the relation between plant costs and inherent safety, and the method by (Heikkil€ a et al., 1996) which combines process rules with safety rules for process pre-design (Khan and Amyotte, 2003). provide an overview of inherent safety principles and campaigns to raise awareness and interest in the approach in North-America, describe available tools, and discuss pathways to its more widespread use (Khan and Amyotte, 2004). present new research on the Integrated Inherent Safety Index (I2SI) tool, which is extended further to include economic considerations in (Khan and Amyotte, 2005). This cluster is most closely related to cluster #1 "Process safety risk management" in Section 3.3 and Fig. 8.
Finally, cluster #E, colored in purple in Fig. 12 and located in the center, is labeled "Process safety reference works" as it primarily contains compendia works, such as the book by (Fisher et al., 1993) on emergency relief system design technology, the standard work by Lees on loss prevention in the process industries as updated by Mannan (2005a), and the books on chemical process safety by Louvar (2002, 2011). The article by (Townsend and Tou, 1980) on thermal hazard evaluation using accelerating rate calorimeter is not a compendium, but the technique is very influential in chemical process safety, and can hence also be considered as a kind of standard work.
Conclusions
Research on safety in the chemical and process industries has a rich and varied history, and a very impressive body of knowledge has been created to increase the understanding of various hazardous phenomena, techniques and methods to analyze their occurrence probability and consequences, and technologies and processes to reduce the risks to human life and the environment.
In this article, a retrospective overview of the process safety research field has been presented, through the lens of three process safety related journals: Journal of Loss Prevention in the Process Industries, Process Safety and Environmental Protection, and Process Safety Progress. A scientometric analysis of their combined publications in the period 1999-2018 has been performed, providing insights in the structure, main themes, and influential works in the process safety domain.
A first finding is that all three journals have gradually published an increasing number of articles, with especially PSEP, and to a lesser extent JLPPI, having seen a marked increase from 2008 onwards. Concurrently, the journal impact factors of these two journals has rapidly increased since then. The geographic distribution of countries/ A terms co-occurrence analysis has shown that there are three major topic clusters in the process safety research domain: process safety risk management, chemical process safety, and fire and explosion process safety. New research frontiers are being developed in each of these clusters, with the chemical process safety cluster showing most recent activity, and the process safety risk management cluster least. PSEP is active in all topic areas, especially in the recently most active and influential chemical process safety cluster. JLPPI is less active in chemical process safety but has a very strong contribution to the fire and explosion process safety and the process safety risk management clusters. PSP is not very active on chemical process safety, but contains more work on process safety risk management, although then primarily on topics which are recently less at the knowledge frontiers.
An analysis of the intellectual base of the process safety research domain has revealed clusters of journals from where each of these core process safety journals obtain knowledge. This analysis confirms the different focus topics and research profiles of the three journals and identifies Safety Science and Reliability Engineering and System Safety as the most closely aligned journals on safety risk management. Journal of Hazardous Materials is strongly tied with JLPPI and PSEP, whereas several fire and combustion related journals such as Combustion and Flame, International Journal of Hydrogen Energy, and Fuel provide knowledge to especially JLPPI. Several more environmentally focused journals such as Bioresource Technology, Water Research, Chemosphere, Environmental Science and Technology, are more closely linked to PSEP. PSP receives most of its knowledge from JLPPI, PSEP, and Journal of Hazardous Materials.
Finally, an analysis of highly cited references indicated five dominant clusters, which can be considered as the core intellectual bases of the process safety research field. These clusters concern dust explosions, process safety and risk analysis methods, loss prevention and domino effects, inherent safety, and process safety reference works. All these clusters contain a range of highly influential handbooks, compendia, and authoritative guidelines, which shows that the process safety is a mature research field, where an extensive body of knowledge has been systemized by leading scholars. The clusters also contain various impactful review articles and original research articles which have pushed the boundaries of the respective subdomains. These results, together with insights from the developments in focus topics and journal networks, can also be useful as a basis for making narrative reviews of the research domain or its constituent clusters. Such reviews could provide further detailed insights in the contents of the research articles, further supporting the high-level insights obtained through the presented analyses. | 2020-04-23T09:04:01.242Z | 2020-05-01T00:00:00.000 | {
"year": 2020,
"sha1": "9d81cdcf713a49652721127ae03e8396a7b680fa",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.jlp.2020.104141",
"oa_status": "HYBRID",
"pdf_src": "Adhoc",
"pdf_hash": "26eae5c7e5cd5b5ab0e5e83282b317dfdcdf92ec",
"s2fieldsofstudy": [
"Engineering",
"Environmental Science"
],
"extfieldsofstudy": [
"Political Science"
]
} |
21684106 | pes2o/s2orc | v3-fos-license | Panax ginseng extract antagonizes the effect of DKK-1-induced catagen-ike changes of hair follicles
It is well known that Panax ginseng (PG) has various pharmacological effects such as anti-aging and anti-inflammation. In a previous study, the authors identified that PG extract induced hair growth by means of a mechanism similar to that of minoxidil. In the present study, the inhibitory effect of PG extract on Dickkopf-1 (DKK-1)-induced catagen-like changes in hair follicles (HFs) was investigated in addition to the underlying mechanism of action. The effects of PG extract on cell proliferation, anti-apoptotic effect, and hair growth were observed using cultured outer root sheath (ORS) keratinocytes and human HFs with or without DKK-1 treatment. The PG extract significantly stimulated proliferation and inhibited apoptosis, respectively, in ORS keratinocytes. PG extract treatment affected the expression of apoptosis-related genes Bcl-2 and Bax. DKK-1 inhibited hair growth, and PG extract dramatically reversed the effect of DKK-1 on ex vivo human hair organ culture. PG extract antagonizes DKK-1-induced catagen-like changes, in part, through the regulation of apoptosis-related gene expression in HFs. These findings suggested that PG extract may reduce hair loss despite the presence of DKK-1, a strong catagen inducer via apoptosis.
Introduction
Hair follicles (HFs) are complicated organs composed of multiple layers of epithelia of the outer root sheath (ORS) keratinocytes, the matrix and its derivatives; the inner root sheath and hair shaft; and mesenchymal cells called the dermal papilla (DP) (1,2). The DP, which is surrounded by the dermal sheath and the hair matrix, is considered to be essential to hair induction because of secreted diffusible proteins that regulate the growth and activity of the various cells in the follicle (3,4). The ORS keratinocytes of the HF surround the hair fiber and inner root sheath. The ORS keratinocytes is distinct from other epidermal components, being continuous with the surface epidermis. The ORS keratinocytes consist of several layers of cells that can be identified by their unique ultrastructural properties (1). Hair growth and the cycling of HFs requires reciprocal interactions between the human dermal papilla cells (hDPCs) and ORS keratinocytes (5).
Apoptosis can serve a role in follicular miniaturization, but its association with androgenetic alopecia in males is controversial (6)(7)(8). Apoptosis is a complex process regulated by the Bcl-2 gene family (9). The family members act as anti-or pro-apoptotic regulators that are involved in a wide variety of cellular activities. Bcl-2, an apoptosis inhibitor, and Bax, an apoptosis promoter, show tightly regulated, hair cycledependent expression patterns (10). Normal HFs also express high levels of anti-apoptotic protein Bcl-2 (6,7).
Dickkopf-1 (DKK-1), which is a potent antagonist of the Wnt/β-catenin signaling pathway, is inducible by dihydrotestosterone (DHT) and promotes catagen progression and the apoptotic cell death of HFs (11). Kwack et al (12,13) demonstrated that DKK-1 is secreted from hDPCs in response to DHT and that it promotes the regression of HFs by blocking Wnt/β-catenin signaling and by inhibiting the growth of ORS keratinocytes and triggering apoptotic cell death. The reports also identified that, although DKK-1 treatment rapidly changed the anti-apoptotic protein Bcl-2, DKK-1 promoted the pro-apoptotic protein Bax in a dose-dependent manner in ORS keratinocytes.
Panax ginseng (PG) has a wide range of pharmacological effects including anti-inflammatory (14,15), antioxidant (16), anticancer (17) and anti-aging (18)(19)(20)(21)(22) effects as well as the promotion of hair growth (23,24). PG contains many other ingredients such as sugars, proteins and lipids besides ginsenosides. Ginsenosides are a unique component of ginseng that is found only in ginseng, while sugars and proteins are common components of other plants. Also, various studies have indicated that the pharmacological effect of ginseng is derived from ginsenosides (25,26). Recently, the authors reported that PG extract, which is a ginsenoside-enriched PG extract made using the repeated fractionalizing method, significantly enhanced the proliferation of hDPCs, potassium channel-opening activity, and human HF growth via a mechanism similar to that of minoxidil (27). Usually, ginsenosides of commercial PG extract are 3-6%, but a ginsenoside-enriched PG extract are concentrated up to 20% using the preparation method used. The major ginsenosides detected in the ginsenoside-enriched PG extract were Rb1, Rb2, Rc, Rd, Re and Rg1. One of them, ginsenoside Re showed the highest level among the six ginsenosides and its content was approximately 6.23% (w/w) (27). In the current study, the authors investigated the inhibitory effect of ginsenoside-enriched PG extract on DKK-1-induced apoptosis in HFs in addition to the underlying mechanism of action.
Materials and methods
The preparation of PG extract. The authors conducted experiments using the same samples as PG extract, which had hair growth effect in our previous studies (27). The root of PG was obtained from Geumsan Ginseng Market (Geumsan-gun, Korea). The dried and crushed roots of PG (300 g) were extracted with 70% aqueous ethanol at 50˚C for 8 h. The extracts were filtered and concentrated under reduced pressure at 60˚C. The residue was dissolved with 100% ethanol and repeat filtration and vacuum distillation.
Isolation and cultures of human ORS keratinocytes.
Non-balding scalp specimens were obtained from patients undergoing hair transplantation surgery (IRB:DKUH 2013-08-012-001). The medical ethical committee of the Dankook Medical College (Department of Dermatology, Cheonan, Korea) approved all of the described studies, and informed written consent was obtained from the patients. HFs were isolated and cultured by the previously described method, with minor modifications (28). Cultured ORS keratinocytes of early passage were used for the experiments and were maintained at 37˚C in a humidified atmosphere with 5% CO 2 .
MTT assay. Cell viability was determined using an MTT assay that was performed by a slight modification of the method described by Philpott et al (29). Briefly, ORS keratinocytes were seeded at a density of 2x10 4 cells/well into 96-well plates and were cultured for 24 h. Prior to treatment, the cells were cultured for 24 h in a growth supplement-free medium. The cells were then treated with PG extract and DKK-1 for 24 h. The samples were assessed by measuring absorbance at 540 nm with a Synergy™ 2 Multi-Detection Microplate Reader (BioTek Instruments, Inc., Winooski, VT, USA). The cell viability rates were calculated from the optical density readings and are represented as percentages of the control value (untreated cells).
Reverse transcription-quantitative polymerase chain reaction. The total RNA was isolated using TRIzol™ reagent (Invitrogen; Thermo Fisher Scientific, Inc., Waltham, MA, USA), and 2 µg RNA was reverse-transcribed into cDNA using SuperScript® III Reverse Transcriptase (Invitrogen; Thermo Fisher Scientific, Inc.). Quantitative real-time TaqMan PCR technology (TaqMan Universal PCR Master Mix, part no. 4304437) was used (Applied Biosystems; Thermo Fisher Scientific, Inc., Santa Clara, CA, USA). The cDNA samples were analyzed to determine the expression of the following: Hs00608023_m1 (Bcl-2), Hs00180269_m1 (Bax), and Hs02758991_g1 (GAPDH). Commercially available these probes were purchased and used in Thermo Fisher Scientific, Inc.
Terminal deoxynucleotidyl-transferase-mediated dUTP nick-end labelling (TUNEL) assay. A TUNEL kit (In Situ Cell Death Detection kit, Fluorescein, Roche Diagnostics GmbH, Mannheim, Germany) was used according to the manufacturer's protocol to evaluate apoptotic cells. Briefly, ORS keratinocytes at 2x10 4 cells/200 µl were seeded into eightchamber slides (Nunc Lab-Tek; Thermo Fisher Scientific, Inc., Roskilde, Denmark), were serum-starved for 24 h, and were then treated with PG extract and DKK-1 for 24 h. These cells were then fixed in 4% paraformaldehyde for 10 min. After being washed with PBS, the cells were incubated with 0.1% Triton X-100 in 0.1% sodium citrate for 1 h at room temperature. After washing, the cells were treated with the TUNEL reaction mixture and then were counterstained with 4' ,6-diamidino-2-phenylindole (DAPI) to visualize the nuclei. Representative images were taken with a fluorescence microscope (Olympus Corp., Tokyo, Japan) at x100 magnification.
Immunocytochemistry assay. ORS keratinocytes at 2x10 4 cells/200 µl were seeded into eight-chamber slides, and then treated with DKK-1 and PG extract for 24 h. These cells were then fixed in 4% paraformaldehyde for 10 min. After washing with Dulbecco's PBS, the cells were permeabilized with 0.1% Triton X-100 in PBS for 10 min at room temperature and then blocked with 5% BSA in 0.05% Triton X-100 for 30 min at room temperature. The samples were incubated with Bcl-2 (1:200 dilution, sc-783; Santa Cruz Biotechnology, Inc., Dallas, TX, USA) and Bax antibody (1:200 dilution, sc-6236; Santa Cruz Biotechnology, Inc.) at 4˚C overnight. They were then washed two times with PBS and four times with distilled water followed by incubation with a Alexa Fluor™ 488 antirabbit (1:200 dilution, A-11034; Thermo Fisher Scientific, Inc.) and Texas Red™-X anti-rabbit (1:200 dilution, T-6391; Thermo Fisher Scientific, Inc.) secondary antibodies in 5% BSA blocking solution for 2 h at room temperature. All samples were counterstained with DAPI to visualize the nuclei. Representative images were taken with a fluorescence microscope (Olympus Corp.) at x100 magnification.
HF organ culture and assessment of hair elongation. Anagen HFs from human scalp skin specimens were obtained from patients undergoing hair transplantation surgery. The medical ethical committee of the Dankook University Hospital (Cheonan, Korea) approved all of the described studies. A total of six HFs/well in 24-well plates were cultured in William's E medium at 37˚C in a humidified atmosphere with 5% CO 2 in 500 µl basal medium supplemented with 10 µg/ml insulin, 10 ng/ml hydrocortisone, 2 mM L-glutamine, 0.1% Fungizone, 10 µg/ml streptomycin and 100 U/ml penicillin according to Philpott's method, as previous described (30). Each experimental group contained at least 30 anagen HFs derived from three different human donors/volunteers. DKK-1 (50 ng/ml) was determined to optimally induce catagen-like changes, shorten hair shaft length and hair bulbs with minimal other histological alterations to the HFs. Either the PG extract or MNX was added at the concentration, respectively, of 20 ppm or 50 µM. The incubation medium was renewed every 2 days. The HF elongation was measured directly at 2, 5 and 7 days of culture using a light stereo microscope (Olympus Corp.).
Statistical analysis. The results are expressed as mean ± standard deviation. The data was analyzed using a Student's t-test, and the two-tailed value of P<0.05 was considered to indicate a statistically significant difference. The data was processed by SPSS software for Windows, version 22.0 (SPSS Inc., Chicago, IL, USA).
PG extract stimulates proliferation and inhibits apoptosis in ORS keratinocytes.
To investigate the potential role of PG extract on the proliferation and inhibition of apoptosis in ORS keratinocytes, the authors performed an MTT assay one day after treatment in the presence or absence of DKK-1 and PG extract. The PG extract and DKK-1 concentrations were determined by a previous study of the authors (data not shown). The results indicated that the PG extract enhanced the proliferation of ORS keratinocytes (Fig. 1) compared to untreated negative controls and enhanced ORS growth cultured positive controls with growth supplement medium. Treatment with DKK-1 (50 ng/ml) significantly inhibited the viability of ORS keratinocytes compared to the untreated negative control. PG extract significantly counteracted the inhibitory effect of apoptosis by DKK-1 on ORS keratinocyte (Fig. 1) viability.
To confirm the inhibitory effect of apoptosis by the PG extract in ORS keratinocytes, a TUNEL assay was performed. TUNEL-positive cells undergoing apoptosis significantly increased when ORS keratinocytes ( Fig. 2A and B) were treated with 50 ng/ml of DKK-1. The PG extract decreased TUNEL-positive cells despite co-treatment with DKK-1.
Previously, it was found that there are many kinds of ginsenosides in total extract by high-performance liquid chromatography analysis (27). In order to determine which of these ginsenosides is most effective, we selected three representative ginsenosides of PG extract. (Rb1, Re, Rg1) and conducted further experiments. As a result ( Fig. 2C and Table I), it was demonstrated that each ginsenoside inhibited apoptosis induced by DKK-1 to some extent. However, each effect was found to be insufficient for total extract. PG extract regulates the expression of apoptosis-related genes in ORS keratinocytes. To further investigate the relevance of the anti-apoptotic effects of PG extract on DKK-1, changes in the expression of apoptosis-related genes were examined by RT-qPCR. ORS keratinocytes were treated with PG extract and/or DKK-1 for 24 h. DKK-1-induced apoptosis was accompanied by Bcl-2/Bax expression in many cells, including HF cells (12,13). In ORS keratinocytes, DKK-1 treatment significantly decreased anti-apoptotic factor Bcl-2 expression. PG extract alone increased Bcl-2 expression four times more than the untreated control, and it reversed the DKK-1-induced inhibition of Bcl-2 expression. DKK-1 also induced the expression of the pro-apoptotic factor Bax. PG extract significantly inhibited Bax expression in ORS keratinocytes (Fig. 3A) despite the presence of DKK-1. In other words, PG extract promotes ORS keratinocyte survival and increases the ratio of Bcl-2/Bax to further inhibit the cell death (Fig. 3B). Increased protein level of Bcl-2 and decreased protein of Bax were confirmed by immunocytochemistry (Fig. 3C). These results indicated that the effect of PG extract was mediated through Bcl-2/Bax expression on ORS keratinocytes.
PG extract abrogates DKK-1 inhibition of hair shaft elongation in human HF organ culture.
In order to examine the effect of PG extract in the presence or absence of DKK-1 at the organ level, the authors performed an ex vivo culture of whole human scalp HFs. Minoxidil (MNX) and vehicle served as positive and negative controls, respectively. HFs treated with PG extract Figure 1. PG extract promotes ORS keratinocyte proliferation and abrogates DKK-1-mediated cell reduction. Cells were treated with 50 ng/ml DKK-1 or 20 ppm PG extract or with both DKK-1 and PG extract for 1 day, and an MTT assay was assessed on day 1. ORS growth media served as a positive control for ORS keratinocyte promotion. Results were expressed as mean ± standard deviation of percentage change compared to the control. Statistically significant differences were determined by t-test ( * P<0.05, ** P<0.01 vs. control; # P<0.05 vs. DKK-1-treated control). PG, Panax ginseng; ORS, outer root sheath; DKK-1, Dickkopf-1; TUNEL, terminal deoxynucleotidyl-transferasemediated dUTP nick-end labelling; DAPI, 4',6-diamidino-2-phenylindole. grew longer than the negative control HFs at 5 days, which was similar to the growth of HFs treated with MNX. This result is consistent with the authors' previous study (27). A low dose of DKK-1 (<50 ng/ml) produced no significant impairment of hair shaft elongation compared to the vehicle (data not shown), but a dose of 50 ng/ml DKK-1 significantly inhibited hair shaft elongation. The authors observed a narrower hair bulb in HFs treated with DKK-1 at the 50 ng/ml dose, which is reminiscent of catagen-like regressive changes. They also measured the anagen/catagen ratio (Fig. 4A), and DKK-1 treatment resulted in anagen-to-catagen changes in the HF organ culture. Thus, DKK-1 treatment at the dose of 50 ng/ml was used to establish an ex vivo model of HF catagen induction. With co-incubation of the PG extract and DKK-1, the PG extract significantly abrogated DKK-1-induced growth inhibition of cultured HFs ex vivo (Fig. 4B).
PG extract regulates the expression of hair growth-related factors in HFs. There is evidence to suggest that several factors such as cytokine, growth factor, and apoptosis-related factor are involved in the hair growth cycle (31). In the above results, the authors already confirmed that PG extract regulates hair growth-related factors at the in vitro level. To confirm the inhibitory effect of apoptosis by PG extract at the ex vivo level, changes in the gene expression were examined by RT-qPCR using HFs treated with PG extract and/or DKK-1 for 2, 5 and 7 days. The DKK-1 treatment significantly decreased anti-apoptotic factor Bcl-2 expression at 2 days. At the same time point, the PG extract completely abolished the effect of DKK-1 on Bcl-2 expression in HFs. On the other hand, DKK-1 treatment significantly increased Bax expression in HFs, whereas the PG extract strongly inhibited this induction of Bax for 5-and 7-day HFs (Fig. 5). Of note, the PG extract affected only Bax expression and not Bcl-2 in longer ex vivo culture experiments. These results suggested that PG extract antagonizes DKK-1-induced catagen-like changes, in part, through the regulation of apoptosis-related factor expression in HFs. Figure 2. PG extract (A) and three representative ginsenosides of PG extract (C) inhibits cell death by apoptosis in DKK-1-treated ORS keratinocytes. Cells were treated with 50 ng/ml of DKK-1 or 20 ppm PG extract or 1 µM ginsenosides or with both DKK-1 and PG extract or ginsenosides for 1 day and were stained using the TUNEL method. Corresponding 4',6-diamidino-2-phenylindolenuclear staining (blue) is also shown at x100 magnification. TUNEL-positive apoptotic cells (green) were counted, and data are mean ± standard deviation of percentage change compared to the control from three independent experiments (B). Statistically significant differences were determined by t-test ( ** P<0.01 vs. control; ## P<0.01 vs. DKK-1-treated control). PG, Panax ginseng; DKK-1, Dickkopf-1; ORS, outer root sheath; TUNEL, terminal deoxynucleotidyl-transferase-mediated dUTP nick-end labelling.
Discussion
The major finding of the current study is that PG extract antagonizes DKK-1-induced HF changes, resulting in hair loss. Previous studies (14)(15)(16)(17)(18)(19)(20)(21)(22)(23)(24)27) revealed that PG regulates a variety of biological effects such as anti-inflammatory, antioxidant, anticancer, and anti-aging effects, and of course, the promotion of hair growth. Recently, the authors prepared a highly concentrated ginseng extract with the repeated fractionalizing method and found that the PG extract contained 194.8 mg/g (19.48% w/w) of ginsenosides (27). Its ginsenoside content was ~3 times higher than that of commercial ginseng extracts for oral supplements in Korea and 14 times higher than that of conventional ginseng root extract (32). This newly prepared PG extract for treatment showed a significant hair growth effect with cultured hDPCs and HFs, which was comparable to the growth from minoxidil (27). Despite previous studies of PG and its effects associated with hair growth (23,24), the mechanism underlying the apoptosis response, particularly the induction of DKK-1, has not been studied with respect to PG. DKK-1 is well known as a WNT antagonist. It induces apoptosis and inhibits the proliferation of cancer cells (33,34). DKK-1 is also inducible by dihydrotestosterone (DHT), and the level of DKK-1 is increased in the scalps of patients with male-pattern baldness compared to normal levels (11), suggesting that DKK-1 is involved in DHT-mediated balding in androgenic alopecia. It was also found that DKK-1 is highly expressed during the anagen-to-catagen transition. This implies that DKK-1 promotes the regression of HFs by blocking Wnt/β-catenin signaling and by inducing apoptosis in follicular keratinocytes (13). The authors supposed that DKK-1 might inhibit hair growth and cell proliferation via apoptosis. As shown in these results, the viability of ORS keratinocytes was decreased by DKK-1. A TUNEL assay Figure 3. PG extract regulates the mRNA level expression of apoptosis-related genes in DKK-1-treated ORS keratinocytes (A) and their ratio (B). Cells were treated with 50 ng/ml DKK-1 or 20 ppm PG extract or with both DKK-1 and PG extract for 1 day. The levels of Bcl-2 and Bax were examined by RT-qPCR using a gene probe, respectively. Each relative mRNA level of Bcl-2 and Bax is presented as mean ± standard deviation from three independent experiments. Statistically significant differences were determined by t-test. Expression of protein levels of Bcl-2 and Bax were confirmed by immunocytochemistry staining using specific anti-Bcl-2 and anti-Bax antibodies (green) and corresponding DAPI nuclear staining (blue) (C). ( * P<0.05 vs. control; # P<0.05 vs. DKK-1-treated control). PG, Panax ginseng; DKK-1, Dickkopf-1; ORS, outer root sheath; DAPI, 4',6-diamidino-2-phenylindole.
demonstrated that the decreased cell viability was mediated by apoptosis.
During catagen, HFs undergo apoptosis, and there is a decline in the apoptotic protein Bcl-2 and an increase in the pro-apoptotic protein Bax (35). The ratio of these factors is important in regulating the hair cycle. In previous studies, DKK-1 treatment rapidly changed the anti-apoptotic protein Bcl-2. DKK-1 promoted the pro-apoptotic protein Bax in a dose-dependent manner (12,13). Therefore, the present study investigated the effect of PG extract in ORS keratinocytes; the extract induces apoptosis by DKK-1 in these cells. The PG extract alone significantly increased cellular proliferation. It was correlated with the mRNA level of Bcl-2 expression increase, and it also inhibited Bax gene expression ( Fig. 3A and B). Furthermore, when the PG extract was co-treated with DKK-1, the effect of the DKK-1 was inhibited. This suggested that PG Figure 5. PG extract regulates the expression of apoptosis-related and hair cycle-related genes in DKK-1-treated HFs. HFs were treated with 50 ng/ml DKK-1 or 20 ppm PG extract or with both DKK-1 and PG extract for 2, 5 and 7 days. The levels of apoptosis-related genes (Bcl-2 and Bax) were examined by reverse transcription-quantitative polymerase chain reaction using a gene probe respectively. Each relative mRNA level of the target genes is shown as mean ± standard deviation from three independent experiments. Statistically significant differences were determined by t-test ( * P<0.05, ** P<0.01, *** P<0.001 vs. control; # P<0.05 vs. DKK-1-treated control). PG, Panax ginseng; DKK-1, Dickkopf-1; HFs, hair follicles; MNX, minoxidil. Figure 4. PG extract abrogates DKK-1 inhibition of hair growth in HF organ culture. A total of 150 human scalp hair follicles with intact dermal papillae were obtained and treated for 7 days with DKK-1 (50 ng/ml) or PG extract (20 ppm) or with both DKK-1 and PG extract. The cycle of each hair follicle were presented on days 7 (A). The length of each hair follicle was measured under a microscope on days 2, 5 and 7. The relative length of each hair shaft is shown as mean ± standard deviation (B). Images of representative hair follicles at days 5 and 7 are shown (C). MNX (50 µM) served as the positive control for the stimulation of hair follicle growth. All values were expressed as mean ± standard deviation. Statistically significant differences were determined by t-test ( * P<0.05 vs. control; ## P<0.01 vs. DKK-1-treated control). PG, Panax ginseng; DKK-1, Dickkopf-1; HFs, hair follicles; MNX, minoxidil. extract may abolish the apoptotic signal stimulation of DKK-1 and help ORS keratinocytes to survive.
HF organ culture is now considered a useful tool for evaluating the effect of hair growth ex vivo. To further confirm the results shown in the above data, the authors investigated the effect of PG extract on DKK-1-induced catagen-like changes in cultured human HFs. It was shown that catagen-like morphological change was induced in DKK-1-treated HFs. Hair growth was also inhibited by DKK-1 during the incubation period. The PG extract significantly stimulated hair elongation, overcoming the inhibitory effect of DKK-1-induced hair growth, and finally, the hair was significantly more elongated than the untreated control.
In summary, PG extract has the potential to protect apoptosis in HFs. These findings suggested that PG extract may enhance ORS and hDPC stimulation of HF growth despite the presence of DKK-1, a strong catagen inducer via apoptosis. | 2018-04-03T02:06:05.921Z | 2017-08-25T00:00:00.000 | {
"year": 2017,
"sha1": "682813fbbc7601ab80dae5e850bfde1ed8a32a0c",
"oa_license": "CCBYNCND",
"oa_url": "https://www.spandidos-publications.com/10.3892/ijmm.2017.3107/download",
"oa_status": "HYBRID",
"pdf_src": "Anansi",
"pdf_hash": "682813fbbc7601ab80dae5e850bfde1ed8a32a0c",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Biology",
"Medicine"
]
} |
235217336 | pes2o/s2orc | v3-fos-license | Inclusion of multiple climate tipping as a new impact category in life cycle assessment of polyhydroxyalkanoate (PHA)-based plastics
• LCA applied to PHA with improved bar-
Bioplastics are a diverse group of materials which have been in the focus of research as alternative to conventional plastics (Spierling et al., 2018;García et al., 2019;Kookos et al., 2019;Pavan et al., 2019;Rameshkumar et al., 2020). Bioplastics consist of three sub-categories and can either be (1) "fossil-based and biodegradable", (2) "bio-based and biodegradable" or (3) "bio-based and non-biodegradable". The last two categories can also be combined with the term bio-based plastics (Endres and Siebert-Raths, 2011). The market share of bioplastics is predicted to continue to grow within the next years to 2.8 Mio. tonnes in 2025, an increase of 35% compared to the production capacities in 2020 (European Bioplastics, 2020b).
This study focuses on production of polyhydroxyalkanoate (PHA) from sugar beet molasses. PHA is a bio-based and biodegradable polyester which can be produced by bacterial fermentation (Chen et al., 2020;Moretto et al., 2020). The PHAs are produced as intracellular highmolecular inclusion bodies, which have a role as carbon-and energy storage compounds within the bacteria. The selection of different microbial production strains as well as adaptions of the bioprocess allows the composition of different PHA types (homo-, co-, ter-, and quadpolyesters), resulting in up to 150 different PHA structures which have been identified so far. Based on their carbon atoms used for the monomeric unit PHAs can be differentiated in two main groups: (1) shortchain-length PHAs with 3-5 carbon atoms and (2) medium-chainlength PHAs with 6-14 carbon atoms. The most well-known and common PHA type is poly-β-hydroxybutyrate (PHB) (Koller, 2017;Kourmentza et al., 2017;Troschl et al., 2018). Additionally, for the production of PHA, a wide range of feedstocks can be utilized. E.g. renewable materials like sugar, industrial waste and by-product streams, as well as CO 2 , which can be utilized by cyanobacteria types (Koller, 2017;Kookos et al., 2019;Moretto et al., 2020;Wongsirichot et al., 2020). This makes PHA a versatile plastic type within the bioplastics and bio-based plastics.
A number of international patents about plastic materials based on PHA have been obtained (Elvers et al., 2016 and references therein). Yet, global production of PHA is relatively low (25,320 tons in year 2019), accounting for only 1.2% of the bioplastics market (Rameshkumar et al., 2020). The reason for the low market share of PHAs is mainly due to high production costs (García et al., 2019;Kookos et al., 2019;Pavan et al., 2019), that are estimated to be 5-10 times higher than the cost of traditional polymers (Kookos et al., 2019). The latest market data for PHA predicts an increase to 11.5% of the bioplastics market till 2025 (European Bioplastics, 2020a). Currently PHA is manufactured worldwide at both pilot and industrial scale. Manufacturers are based in Canada, Germany, Italy, China, USA, Japan as well as Malaysia. On these scales mainly first-generation feedstock like sugar (e.g. from sugar beet) as well as vegetable oil (e.g. canola oil or palm oil) is dominant (Kourmentza et al., 2017). Molasses, a co-product of sugar beet, is often considered as feedstock for PHA production (Baei et al., 2009;Keunun et al., 2018;Kiran Purama et al., 2018;Remor Dalsasso et al., 2019). Molasses contains about 50% of the disaccharide sucrose and is commonly used as an energy supplement to livestock feed. PHA exhibits physical and mechanical properties similar to those of conventional plastics, such as polyethylene (PE) and polypropylene (PP) (Kookos et al., 2019).
One potential application of PHA is as food packaging material (Khosravi-Darani and Bucci, 2015). For example, bioplastic is considered for packaging of high quality bakery products, replacing fossil based polypropylene. Yet, high permeability toward water, oxygen and aromas makes PHA a rather poor packaging material (Kassavetis et al., 2012). It is thus necessary to improve the barrier properties to lower permeability if PHA is to be used in contact with food (Struller et al., 2014). This can be done by lamination with (poly)lactic acid (PLA) or metallization with aluminum (Al) or aluminum oxides (AlOx) (Kassavetis et al., 2012). Until now, nothing was known about sustainability implications of lamination or metallization of PHA, which improve barrier properties but may impair biodegradability in their end-of-life. Environmental assessments of PHA production from molasses generally focused on the fermentation and PHA recovery steps, without considering other important process in the PHA-based plastic film life cycle like post-treatment and end-of-life (Leong et al., 2017;Kookos et al., 2019).
The purpose of this paper was to assess the environmental performance of the whole value chain of PHA-based plastics with improved barrier properties. We considered lamination using PLA or metallization using Al or AlOx as two viable surface treatment options. The environmental performance was assessed using life cycle assessment (LCA). To provide additional insights to the metrics of climate change recommended in the EU Commission's ILCD Handbook (ISO, 2006;EC-JRC, 2010) (the global warming potential, GWP 100 , and the global temperature change potential, GTP 100 ), we present the inclusion of a new life cycle impact category, the multiple climate tipping . It accounts for the contribution of GHG emissions to trigger multiple climate tipping points in the earth system (up to 13 tipping points).
Scenarios
Molasses, a by-product from sugar beet refinery rich in the carbohydrate sucrose, was chosen as a feedstock and the Gram-negative bacterium Ralstonia eutropha as fermenting microbe because they were found promising for full scale applications (Baei et al., 2009;BioBarr, 2019;Remor Dalsasso et al., 2019). Both pilot and large scale were modelled and compared. They differ in capacity (about 100 and 9000 tones PHA recovered per year in the pilot and large scale plants, respectively), and in means of how feedstock is collected, pre-treated, fermented, recovered and purified. In addition to testing the influence of plant scale, we considered differences in (i) geographic location of the PHA plant, (ii) conventional use of molasses, (iii) composition of PHA-based films and other packaging materials, (iv) yield of PHA in the fermentation process, (v) thickness of the PHA layer, and (vi) fate of the improved bioplastic in its end-of-life. Production of poly-βhydroxybutyrate (PHB) was modelled.
Italy and Germany were chosen as two representatives of countries where production of PHA is currently conducted. Differences in electricity grid mixes and waste management systems between the countries were considered. The waste management systems were modelled according to country-specific rates for recycling, incineration and landfilling of plastic packaging (Eurostat, 2017). Bioplastic packaging is currently not recyclable. Thus, it was assumed that the remaining fraction was treated proportionally to the treatment of non-recovered plastic waste (that is, 50% landfilled and 50% incinerated in Italy and 100% incinerated in Germany). The conventional use of molasses is as animal feed, however, it can be also used for ethanol production (Takriti et al., 2017) and these two scenarios were also explored. We investigated optimization potentials for PHA-based plastics made from molasses, which lie in selection of the material for lowering permeability of the PHA (either PLA or Al or AlOx), optimizing PHA production yield and reducing thickness of the plastic layers. The PHA-based plastics were compared with fossil-based alternatives, namely PP and PE. Comparisons were also made with PLA. Merits of temporary carbon storage are often debated for bioplastics, and the end-of-life stage of the plastics life cycle is the only stage where temporal carbon storage can occur. PHA is generally considered as readily biodegradable (the actual duration of degradation depends on product dimensions as well as environmental parameters like temperature), but it is currently unknown how improving barrier properties influences biodegradability during landfilling (Emadian et al., 2017;Meereboer et al., 2020). Thus, fast degradation was assumed in the baseline scenario. Delayed biodegradation may be caused by differences in availability of water and oxygen during landfilling (Meereboer et al., 2020). Moreover, combining PHA with PLA has shown to reduce biodegradability of PHA (Meereboer et al., 2020). To explore sensitivities toward mineralization rates, different biodegradation rates and extents of lag phases were explored for the landfilling scenarios. We conservatively assumed that no PHA plastic is lost to the environment owing to generally sound management of plastic waste in Europe (Ryberg et al., 2019). In total, 53 scenarios were considered (refer to Table S3S3, Section S2 of SI for an overview of all scenarios).
Literature review
Parameters and installations for fermentation, recovery and purification of PHA at pilot and large scales were modelled based on parameters retrieved from scientific literature, identified through a systematic literature review (see Section S1 of the SI for further details). The review encompassed studies focusing on both technical and environmental aspects of PHA production. It was carried out using Scopus in March 2020, applying a set of keyword strings. We retrieved those studies, which: (i) report parameters relevant for the PHA production (feedstock type and its water content, producing microorganism, plant scale and capacity), (ii) are either at pilot (as defined by the study itself or between 10 and 1000 L fermenter volume) or large (above 1000 L) scales, and (iii) either report PHA yield (kg PHA /kg feedstock ), or sufficient data to estimate the yield. In total, 25 studies were retrieved. We found four studies which use disaccharides, and report data on resource consumptions (e.g. electricity, water and chemicals) (see Table S2 in Section S1). These four studies were used for extraction of parameters and bills of materials needed to model PHA production installations in our LCA.
Overview of PHA installations
The pilot and large scale systems differ in how feedstock collection, pre-treatment and fermentation, recovery and purification are carried out (see Section S2 of the SI, Fig. S1 for an overview of their installations). The remaining steps are the same and represent large-scale systems. The feedstock is transported by truck at pilot scale, whereas at large scale the feedstock is transported using pipes. At both scales, the feedstock is sterilized by steam and the sterilized feedstock is cooled down using a heat exchanger. At pilot scale, the sterilized feedstock is fermented in one 10-m 3 reactor for 80 h. At large scale, three 102-m 3 reactors for 54 h are used. Electricity input for aeration and agitation are different for the two scales. Fermentation yield is higher at pilot than at large scale (0.360 and 0.268 kg PHA /kg substrate , respectively). At both scales, PHA is extracted from fermenting cells and purified in a sequence of steps, involving centrifugation and spray drying, but electricity inputs are higher and consumption of materials generally lower at the pilot scale. Hydrochloride is used for extraction at pilot scale, while hydrogen peroxide and enzymes are used at large scale. The obtained PHA powder is compounded and blended with additives (plasticize, nucleating agent, stabilizer and reinforcing filler) before extruding it into PHA pellets. These pellets are subsequently extruded into a PHA film. This film is either laminated with a layer of PLA, or metallized using aluminum. The aluminum layer can be optionally oxidized to aluminum oxide (AlOx) to make the resulting film transparent. Details of the parameters underlying LCA model are presented in the SI, Section S2.
Life cycle assessment
The environmental performance was assessed using life cycle assessment (LCA) conducted in accordance with the requirements of the ISO 14044 standard and the guidelines of the EU Commission's ILCD Handbook (ISO, 2006;EC-JRC, 2010).
Functional unit and reference flow
The primary function of the PHA-based bioplastic in the context of this study is to protect dry food against environment during transport and storage. We choose a croissant as an exemplar of dry food product. The functional unit was therefore defined as "Protection of one average croissant (ca. 40 g) against migration of oxygen, water and aromas (according to global and specific migration standards BS EN 1186 and UNE-EN 13130 for migration of aromatic primary amines, pthalic acid, crotonic acid, acrylic acid and the elements Al, B, Ba, Cu, Co, Fe, Li, Mn, Ni and Zn) during transport and storage for 30 days". This functional unit was chosen as it allows a consistent comparison with alternative plastics used as packaging materials. The reference flow is equal to 0.06384 m 2 of PHA-based plastic film with improved barrier properties, and the same reference flows apply to other plastics fulfilling this functional unit. Yet, differences in thicknesses of the PHA based plastic films and other plastics result in different reference flows when expressed on a mass basis.
Modelling framework and system boundaries
Production of PHA-based bioplastic with improved barrier properties and its use in food supply is a relatively new technology and its implementation is not expected to cause large scale market consequences (for example the need to install new power plants). Therefore, consistent with ILCD's recommendations, the current LCA is considered a microlevel decision support situation (type A) (EC-JRC, 2010). This implies that: (i) system expansion is the preferred way to solve multifunctionality, and (ii) average processes are to be used to model the background system of the study. The consequential version of the ecoinvent v3.5 database was employed to model the background system because it prioritizes system expansion rather than allocation (Bjørn et al., 2017). However, this consequential database systematically uses marginal processes rather than average ones. Therefore, to make the database more consistent with the attributional approach, some processes were adapted to be based on average rather than marginal mixes. Details on these adaptations are presented in SI, Section S2 (Table S9). For example, as in (Bohnes, 2020), the marginal electricity grid mix originally included in the consequential database was adapted to represent the average mix of 2018. The use of marginal data was considered negligible for other processes in the bioplastic life cycle and their adaptation was not deemed necessary. The product systems were modelled in SimaPro, version 8.3.0.0 (PRé Consultants B.V., the Netherlands). An overview of system boundaries, specifying processes included in the LCA, is presented in Fig. 1. Background processes include (avoided) conventional use of the feedstock, production of energy and chemicals, construction and disposal of equipment, and treatment of biological waste. The use stage includes transport from production site to the customer. The end-of-life stage comprises waste management processes according to the waste management of system in the country of interest. The foreground system comprises all processes, as presented in Fig. 1. Molasses is a residual product from production of sugar and therefore no burdens are attributed to its production. However, environmental burdens occur when the molasses waste stream is diverted for production of PHA, rather than its conventional use as animal feed. Consistently with system expansion being prioritized over allocation when handling multifunctional processes, this animal feed has to be produced from other sources, like barley grains.
Life cycle impact assessment
Environmental impact scores were mainly calculated using ReCiPe 2016 as LCIA methodology, applying midpoint indicators and hierarchist perspective. Impact scores were calculated for all ReCiPe impact categories, except climate change which was replaced by the approach of ILCD (2011) combined with updated GWP100 values from IPCC AR5 (IPCC, 2014). The ILCD (2011) approach was preferred as it gives credits to delayed emissions of greenhouse gases (GHGs), which are particularly relevant for the end-of-life stage of the PHA-based plastics. In addition to the GWP100, which is the default metric in LCA and addresses short/medium term climate impacts, we employed the global temperature change potential (GTP100) and the multiple climate tipping points potentials (MCTPs) as characterization factors (CFs). The GTP100 is recommended for use in LCA, next to the GWP100, as it focuses on long-term impacts, representing global average temperature increase of the atmosphere at 100 years that results from the emission (Shine et al., 2005;Levasseur et al., 2016). The MCTP is a recently developed metric for climate tipping impacts , building on earlier work of Jørgensen et al. (2013). It specifically addresses the potential contribution of GHG emissions to trigger multiple climate tipping points in the earth system (like loss of Arctic summer sea ice or the El Niño-southern oscillation intensification), considering in total 13 tipping elements that could pass a tipping point with increasing warming. The contribution to tipping is measured as the share of remaining carrying capacity up to each tipping point that is consumed by the emissions, using Eq. (1) , and is expressed as fraction of depleted remaining capacity in parts per trillion, ppt rc , per kg GHG emission: where MCTP i (T emission ) is the characterization factor for GHG i emitted at time T emission , j is the jth out of m potentially exceeded tipping points, I emission, i, j is the increase in CO 2 -equivalent concentration caused by the emission with respect to tipping point j, and CAP j is the remaining capacity of the atmosphere to absorb this concentration increase without triggering tipping point j . Given that the MCTP is sensitive to the timing of emissions, the metric is particularly relevant for the end-of-life of PHA-based plastics, as emissions are distributed over time and could contribute to crossing tipping points . The three climate-related sets of indicators are complementary to each other and represent three different impact categories. Details of calculation of impact scores using these three approaches are presented in Section S3 of the SI.
Sensitivity and uncertainty analyses
Sensitivities of the LCA results to discrete parameters were evaluated in a scenario analysis (see Section 2.1). Sensitivities to PHA yield, which is a continuous parameter, was also considered for selected scenarios from Table S3S3, Section S2 of SI. Quantification of inventory uncertainties is currently not possible to carry out with the consequential version of the ecoinvent database as attached to SimaPro. To compensate for this limitation, we conducted a qualitative uncertainty analysis discussing limitations of the study considering the specificity of the inventory data.
Results and discussion
In the following sections, we present an overview of life cycle impact assessment results for selected scenarios, identify factors which determine overall environmental performance of the PHA plastics, and identify optimization potentials.
Environmental hot-spots in the PHA value chain
To identify processes with the largest contribution to this burden, process contribution analysis was carried out on PHA laminated with PLA at pilot and large scale systems in Italy (Fig. 2). Refer to Table S13, Section S4 of the SI for tabulated impact scores for the two scenarios. Irrespective of the plant scale, incumbent management of feedstock had the highest contribution to environmental burden for most, but not all, impact categories (up to 94% of total impact, depending on the impact category). As explained in Section 2.4.2, we consider that when molasses is used for PHA production rather than its incumbent use as animal feed, this animal feed has to be produced from other sources, like barley grains. Thus, relatively high contribution of incumbent management of feedstock is explained by burdens associated with production of animal feed from barley grains. Negative impact scores (indicating environmental benefits) are observed for the climate change impact category. They are a result of fixation of CO 2 during cultivation of barley grains. These environmental benefits are, however, outweighed by the burden stemming from the fermentation itself which uses energy and emits CO 2 , treatment of wastewater, and incineration of plastic waste in the end-of-life treatment.
The fermentation had relatively small contribution (up to 8% of total impact), except the three climate-related impact categories where its contribution ranged from 21 to 64% of the total impact. The posttreatment processes (recovery, purification, compounding and pelletizing), however, altogether contributed up to 60% of the total impact, depending on the impact category. Previous studies on PHA production from sucrose (including collection, pre-treatment, fermentation and PHA recovery), reported global warming impacts which were higher (1.96 kg CO 2 eq/kg PHA recovered in Harding et al. (2007)) and lower (−2.58 kg CO 2 eq/kg PHA recovered in Kookos et al. (2019) owing to energy recovery from bagasse), compared to 0.76 CO 2 eq/kg PHA recovered in the large scale system of this study. As in Kookos et al. (2019), direct emissions of CO 2 during fermentation were important contribution to climate change burdens in our study. Harding et al. (2007), on the other hand highlighted steam and electricity use as the most contributing process. Further, while our study showed that surfactant had a high contribution to global warming impacts (in our LCA modelled as nonionic surfactant), neither Harding et al. (2007) nor Kookos et al. (2019) found that the surfactant was the hot spot. Compared to the former study, the consumption of surfactant in our study was 16 times higher, while Kookos et al. (2019) applied a negative GHG emission factor for surfactant based on data from Akiyama et al. (2003).
Relatively high contribution of recovery and purification was mainly caused by the use of steam for spray drying in the pilot system and surfactant in the large scale system. Surfactant contributed to 55, 24 and 15% of total freshwater ecotoxicity, fossil resource scarcity and climate change impacts. Negative contributions to total impact scores on freshwater eutrophication observed in our study for recovery and purification and filmmaking and functionalization are unexpected, but can be explained by system expansion mechanisms occurring in non-ionic surfactant applied during recovery and purification and ink applied in filmmaking and functionalization processes. Electricity consumption for processing of the recovered PHA into PHA pellets explains 23 and 19% of total impact for climate change and human carcinogenic toxicity. Negative impact scores for waste management systems for several other impact categories, indicating environmental benefits, were due to incineration with energy recovery (61% of the packaging is incinerated in Italy), substituting production of energy (in this case, electricity and heat for reuse in municipal waste incineration).
Effects of upscaling
The large scale system has slightly higher impact scores than the pilot scale one consistently for all impact categories, except climate change and freshwater eutrophication (Table S13, Section S4). The largest differences were observed for freshwater ecotoxicity followed by, water consumption, land use and marine eutrophication, where large scale production shows impacts from~1.5 to~2.5 times higher than at pilot scale, respectively. This finding was unexpected, because upscaling of technologies is often associated with decreasing environmental impacts perf unit of output (although generalization across different technologies cannot be made) (Gavankar et al., 2015;Owsianiak et al., 2016). The different result in our case can be explained by differences in environmental performance of; (i) recovery and purification steps (all impact categories, except climate change and stratospheric ozone depletion), (ii) fermentation (all impact categories), (iii) collection of feedstock fermentation (all impact categories) and (iv) incumbent use of molasses (all impact categories, except climate change). Increased impacts for recovery and purification were due to higher consumption of surfactant in the large scale system, particularly so for freshwater ecotoxicity where the large scale systems shows 7.7 times higher impact. This increase outweighed benefits from a lower electricity and steam consumption in the large scale. Increasing impacts from fermentation were mainly due to a higher electricity consumption for aeration. Furthermore, slightly lower yield in the large scale system resulted in higher consumption and collection of molasses per unit of PHA output, increasing impacts. Similar, the lower yield at large scale increased incumbent use of molasses and impacts for all categories except climate change where increased amount of CO 2 fixated reduced impact scores. By contrast, reduced impacts from pre-treatment were due to lower consumption of steam, but these reductions were generally insufficient make the large-scale system perform better.
Influence of geographic location and incumbent use of feedstock
Impact scores decreased for 12 out of 20 impact categories when (large-scale) PHA production and functionalization took place in Germany instead of in Italy (see Section S4 of the SI, Fig. S2). The largest differences were for the climate change and multiple climate tipping impact categories (decrease by 32 and 20%, respectively) followed by fine particulate matter formation and terrestrial acidification (decrease by 17 and 16%, respectively). For climate change, the reductions were due differences in waste management systems between the two countries (the majority of plastics is incinerated in Germany, while landfilling is the dominant treatment option in Italy). Incineration is seen beneficial over landfilling because it does not result in emission of potent GHG, methane (71% of carbon is assumed to be released as methane during landfilling (Rossi et al., 2015)). For fine particulate matter and terrestrial acidification, lower impacts in Germany can be explained by a lower portion of oil in the electricity grid mix in Germany (3.7% and 0.9% in Italy and Germany, respectively), which has a high contribution to these impact categories. Impact scores increased for 13 out of 20 impact categories when molasses was used as feedstock for ethanol production (scenario 5) rather than for animal feed (scenario 2) in Italy. The largest increase was observed for impacts related to mineral resources, human noncarcinogenic toxicity, terrestrial ecotoxicity and global temperature change (increase by 64, 36, 31 and 28%, respectively) (see Section S4 of the SI, Fig. S3). This was due to generally higher environmental impacts from production of ethanol than production of animal feed (per unit of molasses). Substantial reductions in impact scores were seen for freshwater eutrophication, land use and marine eutrophication (decrease by 365, 138 and 94%, respectively), with negative impact scores for the first two categories (−1.6 × 10 −5 kg P eq and −4.0 × 10 −2 m 2 a crop eq, respectively) and low impacts for marine eutrophication (1.1 × 10-5 kg N eq) in scenario 5. These negative scores were due to handling a waste product from ethanol production from maize by system expansion, replacing soybean meal. Similar observations were made for Germany, where both increases and decreases in impact scores were observed when the incumbent treatment of molasses was as feedstock for ethanol production.
Influence of PHA stability
Impact scores of the PHA value chain for the three climate-related impact categories are influenced by mineralization kinetics and extent of the mineralization lag phase in landfilling (Table 1). For all three indicators, lowest impact scores were consistently identified for the very slow degradation scenario (scenario 51). This was mainly due to incomplete degradation over 100 year time (GWP and GTP) and over 94 years (MCTP), where only 1% of initial plastic degraded in this scenario, resulting in lower impact scores. Plastics with fast and medium mineralization kinetics generally performed worse according to GWP as credits given for temporary carbon storage are lower compared to more stable plastics. By contrast, climate tipping impact scores increased with decreasing mineralization rates, because the probability that a significant portion of emissions is released in proximity to tipping points, where MCTP values are the largest, was higher for the more stable plastics. This was even more pronounced for cases where a mineralization lag phase of 20 and 40 years was assumed (scenarios 52 and 53 in Table 1). In those cases, a larger share of the emissions was released close to the year 2050, where MCTPs are the highest. Mineralization kinetics was not found to matter for the GTP metric, because this approach disregards any benefits from temporary carbon storage and does not account for when GHG emissions occur in the life cycle.
3.5. Making PHA-based plastics more sustainable PHA-based plastics can be made more sustainable by optimizing PHA yield, thickness of the PHA layer, and choice of material for ensuring barrier properties. Fig. 3 shows the effects of these parameters on environmental performance for selected impact categories. Comparisons were also made with pure PLA or pure fossil-based PE, and pure fossil-based PP. Increasing PHA yield generally improves environmental performance of the PHA-based packaging. For MCTP, fossil resource scarcity and land use, impacts decreased from 87 to 28% if yield increases from the minimum to the maximum values reported in the literature for PHA made from molasses (i.e. from 0.083 to 0.245 kg PHA raw /kg molasses ; scenarios 3-18 in Table S3S3, Section S2). However, only a small increase was observed for climate change (by 2%).This relatively small increase was due to the fact that the decreasing fixation of CO 2 (hence increasing impacts with increasing yield), was outweighed by decreased emissions of CO 2 from fermentation and reduced amount of carbon-containing wastewater to be treated (per unit of PHA output).
The results also showed that PHA combined with either Al or AlOx (scenarios 7 and 8) were more sustainable than the PHA combined with PLA (scenario 2). Impact scores were consistently reduced for all impact categories, except for ionizing radiation (Fig. 3 and Table S14 in the SI, Section S4). The reduction was, however, modest (up to 11% for fossil resource scarcity). Despite relatively large differences in environmental impacts per kg of each alternative material (e.g. higher impacts for Al when compared to PLA), significantly less Al or AlOx (10-nm layer) than PLA (20-μm) is needed to fulfill the functional unit, explaining small differences between PLA and Al (or AlOx).
PHA-based plastics can also be made more sustainable if thickness of underlying materials is reduced (while still allowing the packaging to fulfill the function). However, the extent of required improvements is relatively large. For example, thickness of the PHA layer in the PHA/PLA alternative needs to be reduced to ca. 20 μm for this alternative to be able to compete with pure PLA of 91 μm (in terms of climate change and multiple climate tipping). If PHA yield increases, these PHA-based films would be able to compete with PLA of 50 μm thickness (again, assuming that their functional performance parameters are the same). Irrespective of yield and assumed PHA thickness, however, packaging made of PHA generally does not perform as good as PP-and PE-based packaging does (scenarios 11 and 12 in Table S3) (Fig. 3). The differences were by factor of 2 to 5, depending on the impact category, even if high yield and low thickness of PHA were assumed.
Limitations and data gaps
This study presents full life cycle inventory and impact assessment results for PHA-based plastics with improved barrier properties. The main limitations of the study relate to: (1) variability and uncertainty in parameters used for modelling life cycle inventories, (2) the choice of LCI database for modelling background system, and (3) deficiencies in impact assessment method. Table 1 Impact scores per functional unit (f.u.) of the PHA value chain as depending on stability of the PHA plastics in landfilling conditions (mineralization rate constant and extent of mineralization lag phase) and the climate-related impact category. Color shades from green, yellow to red, indicate increasing impact (per impact category). The scenarios tested for stability and degradation are; fast kinetics: 90% degradation in 2 years (100% degraded in 100 years), medium kinetics: 90% degradation in 31 years (99.9% degraded in 100 years), slow kinetics: 90% degradation in 105 years (89% degraded in 100 years), very slow kinetics: 90% degraded in 22,798 years (1% degraded in 100 years), delayed (20): degradation delayed by 20 years, fast kinetics, delayed (40): degradation delayed by 40 years, fast kinetics (see a full overview of scenarios in Table S3, Section S2). First, we modelled pilot and large scale PHA production systems basing on data retrieved from the literature, but several parameters are known to be variable or uncertain. This may influence comparisons between scales. For example, PHA yield varies, but is an important parameter which determines performance of the PHA vale chain, and the large scale system would generally perform better than the pilot scale if the PHA yield was in higher range of possible values (0.245 kgPHA raw / kg molasses ) (data not shown).
Second, biodegradation kinetics of the PHA-based plastics in the environment is highly uncertain (Emadian et al., 2017;Meereboer et al., 2020), and furthermore it is unknown how surface treatment may influence biodegradation kinetics in landfilling conditions. Our sensitivity analyzes show that this parameter is important not only for the endof-life, but for the performance of the whole PHA value chain (in terms of climate change and multiple climate tipping impacts).
Third, the surfactant in the current study was modelled as a generic non-ionic surfactants, which consists of ethylene oxide (66%) and fatty acid (33%) derivatives. Impacts of surfactants vary considerably (Schowanek et al., 2018). For example, if the fatty acid derivate was used, freshwater eutrophication and ecotoxicity impacts would decrease by 114% and 45%, respectively (data not shown). It is therefore important to address this data gap in future studies on PHA.
Fourth, the consequential background database was consistently applied for background processes, with the exception of electricity processes, which were adapted to average grid mixes rather than marginal mixes. The sensitivity of this was tested for incumbent use of molasses and found to have a high influence on the overall results. Although contribution from other processes of the background system is expected to be smaller when compared to energy and avoided incumbent use of molasses, there is some uncertainty as average mixes (rather than marginal mixes) should ideally be used consistently for all processes in the background system.
Finally, owing to the limitations of the ecoinvent database, indirect land use changes (ILUC) were not considered in the PLA value chain (PLA is made from maize). If they were considered, impacts of those PHA-based plastics which include PLA would increase. According to Ögmundarson et al. (2020), e.g., climate change impact for lactic acid from corn could increase by 14% if ILUC are included. Hence, including ILUC would further favor those alternatives which use either Al or AlOx as barrier materials.
Implications for PHA value chain
We showed that PHA-based plastics with improved barrier properties have higher environmental impacts than alternative packaging made from PE, PP and potentially even PLA. These results are not surprising given that PHA production is still relatively immature when compared to the aforementioned alternatives. The largest optimization potentials (which are also challenges to PHA technology developers), are: 1) reduction of PHA thickness while maintaining functional properties of the PHA plastic, 2) increase PHA production yield, 3) increase the energy efficiency during compounding and pelletizing, 4) decrease amount and change type of surfactant used in recovery and purification processes, 5) consider feedstock other than molasses, that do not have a highly beneficial alternative treatment and use. Industrial wastewater could be considered as feedstock, as it avoids incumbent management of the wastewater (Heimersson et al., 2014). Furthermore CO 2 could be a promising alternative feedstock for PHA production (Troschl et al., 2018), but separate LCA would be needed to evaluate performance of PHA made from other feedstock. 6) consider alternative end-of-life options. Although the biodegradability of PHA offers aerobic and anaerobic end-of-life pathways in comparison to conventional plastics, recent research results for PLA show that also recycling (e.g. mechanical recycling) is a potential option which can offer additional benefit from an LCA as well as circular economy perspective (Maga et al., 2019;Spierling et al., 2020). Our study may suggest that that Al (or AlOx) is the preferred material to ensure barrier properties. However, unknown influence of the Al layers on biodegradability of PHA in the environment Fig. 3. Impact scores for climate change, multiple climate tipping, fossil resource scarcity and land use as influenced by PHA yield, and type and thickness underlying materials (scenarios 7-48 in Table S3S3, Section S2 of SI). Yields are based on literature data, where the minimum yield is from Kookos et al. (2019) and the maximum yield is estimated from a theoretical yield from Yamane (1993) and assuming that 95% of the accumulated biomass is PHA. The yield in the x-axis refers to PHA-based plastics only. Impacts of PLA, PE and PP are shown in the figure for comparison, but are not influenced by PHA yield. warrants further studies. Finally, we stress that we have addressed environmental aspects of sustainability, but economic and sociallyoriented analyses are required to make more informed decisions about implementing PHA with improved barrier properties as alternative packaging materials.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. | 2021-05-28T06:16:55.600Z | 2021-05-11T00:00:00.000 | {
"year": 2021,
"sha1": "6b6d7be455892689c350b76f38d523f8adff193c",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1016/j.scitotenv.2021.147544",
"oa_status": "HYBRID",
"pdf_src": "MergedPDFExtraction",
"pdf_hash": "e745dce3747e13fbcda64f59b82b402b2549a776",
"s2fieldsofstudy": [
"Engineering"
],
"extfieldsofstudy": [
"Medicine"
]
} |
253601448 | pes2o/s2orc | v3-fos-license | An alternative equation for polynomial functions
In this paper we prove that if a generalized polynomial function f satisfies the condition f(x) f(y) = 0 for all solutions of the equation x2 + y2 = 1, then f is identically equal to 0.
The following problem was formulated by Benz in 1989 [3]. Suppose that f : R → R is an additive function satisfying yf (x) = xf (y) for every x, y ∈ R such that x 2 + y 2 = 1 . Does it imply that f is linear? This question, together with a similar one for derivations, was answered in the affirmative by Boros and Erdei [4].
Motivated by the question of Benz, Szabó [14] posed the following problem: suppose that f : R → R is additive and f (x)f (y) = 0 for all solutions of the equation x 2 + y 2 = 1 . Does it imply that f is identically equal to zero? The solution was published in a joint paper by Kominek et al. [9], where they proved that the implication is true.
The purpose of the present paper is to extend the last result by providing an analogue of this statement for polynomial mappings. In order to formulate such a generalization, we have to introduce the related concepts.
Let n ∈ N. A function F : R n → R is called n-additive if, for every i ∈ { 1 , 2 , . . . , n } and for every x 1 , . . . , x n , y i ∈ R , i.e., F is additive in each of its variables x i ∈ R , i = 1, . . . , n . Clearly, an n-additive function is also Q-homogeneous in each variable.
Given a function F : R n → R, by the diagonalization (or trace) of F we understand the function f : R → R arising from F by putting all the variables (from R) equal: If, in particular, f is a diagonalization of an n-additive function F : R n → R , we say that f is a generalized monomial of degree n . It is convenient to assume that generalized monomials of degree zero are precisely constant mappings. If f is a sum of generalized monomials of degrees n 1 , n 2 , . . . n k , respectively, and n = max{n 1 , n 2 , . . . , n k }, then f is called a generalized polynomial of degree n. We note that the degree of a generalized polynomial is not uniquely determined in our context as we do not exclude identically zero terms. This approach is convenient when we formulate our statements and arguments.
For more information concerning these notions the reader is referred to the monograph by Kuczma [10,Chapter 15.9]. Now we can establish our main theorem. Proof. Given a generalized polynomial f of degree n, we can associate k−additive and symmetric functionals A k : R k → R for k = 0, 1, . . . n with f in such a way that for all x ∈ R. Now, let x, y ∈ R be arbitrary solutions of x 2 + y 2 = 1. If α, β are such that α 2 + β 2 = 1, then it is straightforward to check that the following identity holds true: (αx − βy) 2 + (βx + αy) 2 = 1.
Next, assume, in addition, that α and β are rationals, u is a real number such that |u| < 1 , and x = −u .
Take Clearly, for every pair (α, β) chosen as above at least one of the foregoing expressions is equal to zero.
What is more, we can find infinitely many distinct pairs (α i , β i ) such that α 2 i + β 2 i = 1 and both α i and β i are rationals, so let us take Multiplying both equations by (i 2 + 1) n and introducing the functions we have P n (i) = 0 orP n (i) = 0 for each positive integer i . Hence either P n orP n has infinitely many zeros. On the other hand, both P n andP n are polynomials of degree not greater than 2n . Therefore, one of them has to be identically equal to 0 . So either i.e., f (u) = 0 . We have thus proved that f vanishes on the open interval (−1, 1). Now let us consider an arbitrary non-zero real number x and any rational number r fulfilling |r| < 1/|x| . Then, according to the representation (1), we have . . . , x) .
The last expression in this equation is an "ordinary" polynomial with respect to r. Since it has infinitely many zeros (as the inequality condition for r is satisfied by infinitely many rational numbers), it must be identically zero, hence it vanishes at r = 1 as well, thus f (x) = 0 .
We note that our last argument can be replaced by a reference to Székelyhidi's regularity theorem [15]. Namely, since f vanishes on the open interval (−1, 1), it is, in particular, locally bounded around zero. However, Vol. 89 (2015) Equation for polynomial functions 21 every generalized polynomial which satisfies this condition is continuous. Therefore, f is an "ordinary" polynomial, and thus f = 0 on R.
Remark 1. Generalized polynomials are obtained from polynomials of the form f (x) = n k=0 c k x k by replacing the monomial terms c k x k with generalized monomials of degree k. Another way to generalize such a representation of f is obtained by replacing the finite sum with an infinite one. This idea leads to the well-known concept of analytic functions. We can, actually, establish an analogy of Theorem 1 for analytic functions: If f : R → R is analytic and f (x)f (y) = 0 for all solutions of the equation x 2 + y 2 = 1 , then f is identically equal to zero. In fact, according to our assumption, f has infinitely many zeros in the interval [−1, 1], hence it is identically equal to zero.
On the other hand, the regularity assumption that f is analytic cannot be replaced with a weaker one. Clearly, every mapping f : R → R which satisfies f (t) = 0 for every t ∈ − fulfils the condition f (x)f (y) = 0 for all solutions of the equation x 2 + y 2 = 1. Therefore, there exist infinitely many times differentiable functions which satisfy this condition and are not identically equal to zero.
Open Access. This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited. | 2022-11-18T14:35:47.617Z | 2014-03-23T00:00:00.000 | {
"year": 2014,
"sha1": "46dbc4cb8d87122810363c5685498ae677628f84",
"oa_license": "CCBY",
"oa_url": "https://link.springer.com/content/pdf/10.1007/s00010-014-0258-6.pdf",
"oa_status": "HYBRID",
"pdf_src": "SpringerNature",
"pdf_hash": "46dbc4cb8d87122810363c5685498ae677628f84",
"s2fieldsofstudy": [
"Mathematics"
],
"extfieldsofstudy": []
} |
252773992 | pes2o/s2orc | v3-fos-license | Atypical Carcinoid Syndrome in a Patient Presenting With Pericarditis and Supraventricular Tachycardia: A Case Report
Atypical carcinoids are more uncommon than typical carcinoids, and carcinoid syndrome in general is quite rare. Mediastinal atypical carcinoid is a rare neuroendocrine tumor (NET) that spreads aggressively and rapidly. Morphologically, neuroendocrine tumors are classified into typical carcinoid, atypical carcinoid, small cell carcinoma, and large cell neuroendocrine carcinoma, and the latter two are high-grade tumors. The incidence of atypical carcinoid is rare, and the prognosis is poor, which makes larger trials difficult. It may affect the liver, lungs, or mediastinum with or without metastasis. We present a case of a 47-year-old male patient who presented with chest pain and was found to be in supraventricular tachycardia (SVT) on initial presentation to the hospital. A repeat electrocardiogram (ECG) showed widespread ST-segment elevation. A bedside echocardiogram showed a moderate pericardial effusion, and the patient underwent a coronary angiogram, which showed normal coronary arteries. A computed tomography pulmonary angiogram (CTPA) showed a right mediastinal mass, and the patient was referred to oncology following a discussion in a multidisciplinary team (MDT) meeting. He was commenced on neoadjuvant chemotherapy and has been followed up since in the outpatient clinic. This case is unique due to the initial presentation of supraventricular tachycardia and pericardial effusion.
Introduction
Carcinoid tumors are mostly found in the gastrointestinal tract (GIT) and are neuroendocrine tumors (NETs) derived from enterochromaffin or Kulchitsky cells [1]. They secrete serotonin or other chemicals into the bloodstream that gets distributed to the various parts of the body through the bloodstream. NETs arise in most organs of the body, including the lung, thymus, GIT, and ovary [2]. According to the WHO classification of NETs in 2015, lung NETS are categorized into four histologic variants: well-differentiated, low-grade typical carcinoid; well-differentiated, intermediate-grade atypical carcinoid; slightly differentiated, highgrade large cell neuroendocrine carcinoma; and slightly differentiated, high-grade small cell lung carcinoma [3,4]. The prognosis for atypical carcinoid is very poor, and larger study trials are not possible due to the rarity of the condition and the fact that patients diagnosed with this condition are in different parts of the world. Typical carcinoid most commonly occurs in the GIT, followed by the lungs, and it has been reported to develop synchronously in both lungs [4]. NETs account for only 5% of the total newly diagnosed lung malignancies [4].
Well-differentiated lung NETs (typical and atypical carcinoid) account for about 27% of all NETs and commonly develop in non-smokers or light smokers. They are capable of lower mitotic rates, necrosis, and genetic abnormalities in comparison to high-grade NETs [4]. NETs are sometimes also classified into secretory and non-secretory NETs. Atypical carcinoid has a higher male preponderance, and the male/female ratio is 3:1. The average age of presentation for atypical carcinoid is about 60 years, although most cases present between the age of 39 and 60 years [5]. In contrast to most NETs originating from the GIT and lungs, mediastinal NETs are very aggressive and metastasize rapidly. The exact origin of mediastinal NETs is still debatable and is not clear, although thymic tumors are the most found NETs, accounting for 2%-4% of all mediastinal tumors, and are mainly found in the anterosuperior mediastinum [6].
We present a case of an atypical mediastinal carcinoid patient who presented with supraventricular tachycardia (SVT), and electrocardiogram (ECG) features were suggestive of acute pericarditis. A coronary angiogram showed normal coronary arteries, and computed tomography (CT) of the chest showed a right mediastinal mass. Biopsy findings were consistent with the diagnosis of an atypical carcinoid tumor.
FIGURE 2: Computed tomography pulmonary angiogram showing a right-sided mediastinal mass (blue arrow) and a small left-sided pleural effusion
A computed tomography scan of the chest, abdomen, and pelvis showed simple renal cysts and left brachiocephalic and superior vena cava thrombus, and he was commenced on treatment dose tinzaparin. A positron emission tomography (PET) scan showed a moderately avid anterior mediastinal mass, with uptake less than the liver background and no evidence of metastasis. The patient was ineligible for peptide receptor radionuclide therapy (PRRT). Ultrasound scan of the neck showed multiple small lymph nodes with the largest measuring no more than 7 mm in size in any dimension, and biopsies were obtained from it, which were consistent with a carcinoid tumor. The patient was reviewed by hematology and oncology teams and was commenced on prednisolone 5 mg once daily (OD) and omeprazole 20 mg OD. In addition, he was also commenced on cholecalciferol 1,000 units OD for vitamin D deficiency, metoprolol 25 mg twice daily for SVT/chest pain, tinzaparin 14,000 units OD for SVC thrombus, and allopurinol 300 mg OD prior to initiating chemotherapy.
Following the multidisciplinary team (MDT) discussion, he was commenced on neoadjuvant treatment fluorouracil, carboplatin, and streptozocin (FCarboStrep) as the tumor was found to be locally advanced, currently unresectable mediastinal atypical carcinoid, and Ki67 was 5%-6%. The patient developed chest pain, fatigue, and shortness of breath after the first chemotherapy cycle. He became quite unwell with similar symptoms a day after the second cycle and was diagnosed with lower respiratory tract infection and was prescribed Co-amoxiclav 625 mg thrice daily for seven days. He was then commenced on a 5-fluorouracil regimen including carboplatin and streptozocin. He still experienced chest pain radiating to his jaw on chemotherapy, which was likely due to the spasmodic effect of 5-fluorouracil chemotherapy. A repeat echocardiogram did not show normal left ventricular ejection fraction with no regional wall motion abnormalities (RWMA). He is on ongoing chemotherapy with an aim to offer future surgical intervention, and he is currently tolerating the treatment well.
Discussion
The majority of typical carcinoid tumors occur in the central airways, leading to airway obstruction due to recurrent pneumonia, and account for about 2% of the total lung tumors [7]. Carcinoid tumors are most commonly found in the GIT but can also be seen in other organs such as the lungs, larynx, bronchus, liver, pancreas, kidneys, ovaries, prostate, and thymus [5]. The reported incidence of carcinoid tumors in the USA is 0.15%, in England is 0.79%, and in Scotland is 1.46% [5,8]. NETs are epithelial neoplasms, and mediastinal NETs account for no more than 5% of the total NETs and have an estimated incidence of one per five million people [9,10]. Mediastinal tumors are mostly located in the anterior mediastinum, although cases in which tumors were located in the middle and posterior mediastinum have also been previously reported [9,10]. Thymus tumors can be typical or atypical, and typical carcinoid tumors histologically have uniform epithelial cells with basophilic cytoplasm, salt and pepper-like chromatin features, and diffuse expression of neuroendocrine markers on immunohistochemistry [10]. The mitotic rate of typical carcinoid is less compared to atypical carcinoid tumors of the mediastinum, and most tumors are not encapsulated and can grow aggressively. Almost half of the typical carcinoid patients have localized symptoms such as pain, cough, and shortness of breath, and about 30% of patients have features of paraneoplastic syndrome due to additional hormonal production, resulting in Cushing's syndrome with and without cutaneous hyperpigmentation, and also produce parathyroid hormone-like substances, resulting in hypercalcemia and hyperphosphatemia. Occasionally, it may also result in primary hyperparathyroidism in the context of multiple endocrine neoplasia type 1 (MEN-1) syndrome [10]. About 50% of patients with typical carcinoid tumors show local or distal metastasis, and bones and the lungs are the two most frequently involved sites. Surgical resection is the therapy of choice in operable typical carcinoid tumors, and there is a lack of reliable data on the role of chemotherapy and radiotherapy [10,11].
Atypical carcinoids on the other hand account for about 40%-50% of the total thymus mediastinal NETs, and they differ from the typical carcinoid tumors due to their slightly increased mitotic rate and focal necrosis [10]. Atypical carcinoid tumors mostly affect middle-aged adults aged 48-55 years and account for about 40%-50% of all thymus NETs [10,12,13]. The genetic alterations of typical carcinoid tumors overlap with atypical carcinoid, and at least 25% of atypical carcinoid tumors have metastasized to mediastinal, cervical, or supraclavicular lymph nodes at the time of diagnosis. The five-year survival rate of atypical carcinoid tumors is slightly worse compared to typical carcinoid tumors [10]. Several patient case reports of atypical carcinoid tumors have been published in the past, and the presentation varies depending on the site involved [6,14,15].
Carcinoid tumors are quite picked up as incidental findings when patients present to the hospital for other reasons. Xuan et al. [6] reported a case report on a patient who presented with a one-month history of worsening chest pain on exertion followed by cough and hemoptysis after 11 days, and high-resolution computed tomography (HRCT) showed a soft tissue mass in the left anterior mediastinum. The patient was treated with chemotherapy and radiotherapy, and surgery was rejected by the patient's family [6,14]. Kosmas et al. [14] reported a case of a 66-year-old male patient who presented to the outpatient clinic with dyspnea and fatigue for a fortnight, and HRCT showed a mediastinal tumor located in the anterior upper mediastinum. Fluorodeoxyglucose-positron emission tomography (FDG-PET) scan did not show any metastasis of the disease, and cytology confirmed the presence of a neuroendocrine tumor that was surgically excised. The patient was diagnosed with an intermediate-grade atypical carcinoid tumor based on the WHO classification and did not receive any chemotherapy as part of the treatment.
Early diagnosis is key to the management of NETs, and neoadjuvant chemotherapy should be considered in certain cases, although most tumors are unresectable at the time of diagnosis. Nevertheless, the overall prognosis and five-year survival for patients diagnosed with NETs have improved, and follow-up should include CT scans and hormonal assessment. Patients with carcinoid tumors should have 6-12 monthly regular follow-ups to monitor for carcinoid-related heart disease and adjust therapy if required [1].
Conclusions
In conclusion, carcinoid tumors are rare but aggressive neuroendocrine tumors that originate mainly in the gastrointestinal tract but can also originate in the lungs, ovary, and mediastinum. Patients with atypical carcinoids may be asymptomatic or may present with a wide range of symptoms. Patients presenting with supraventricular tachycardia secondary and clinically significant pericardial effusion have never been reported before, and this is the first case, to the best of our knowledge. It is therefore important that patients presenting with pericardial effusion get a further assessment to rule out serious underlying conditions.
Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work. | 2022-10-10T15:43:05.363Z | 2022-10-01T00:00:00.000 | {
"year": 2022,
"sha1": "c18e49f600d021c62d63b826be97196bf093ad0b",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/116172-atypical-carcinoid-syndrome-in-a-patient-presenting-with-pericarditis-and-supraventricular-tachycardia-a-case-report.pdf",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "7c42c6c130a3779c02c20034b2e8bd3e1c9b0602",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": []
} |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.